This policy brief originally appeared in the Newsletter of the American Sociological Association’s Sociology of Development Section in the Fall of 2015.

Over the past two decades, evaluators in international development have emphasized the value of mixing quantitative and qualitative methods to understand the impact of donor-funded projects. While experimental methods remain the gold standard, there is a strong push in both academic and policy circles towards high-caliber qualitative evaluations. However, like all trends in international development, the conversation around mixed methods is far more sophisticated than the practice: the vast majority of mixed-methods evaluations still treat quantitative data and methods as central to determining impact. With few exceptions, evaluators deem qualitative data illustrative at best, integrate qualitative methods as an afterthought, and rarely draw on sociological theory to assess impact. It is no wonder that impact evaluations of this nature are often considered subjective, intuitive, and anecdotal.

As a development sociologist at the World Bank, I have had the opportunity to be part of a qualitative evaluation of a Bank-funded project in India. Five ethnographers, including myself, tracked ten villages – five treatment and five control – over three years. This qualitative piece is part of a larger multi-method evaluation conducted by the Social Observatory, a multidisciplinary unit in the World Bank that embeds researchers and evaluators from different disciplines in projects and encourages learning between them and policymakers. As field coordinator in this unit, I conducted fieldwork in rural Bihar, trying to understand the impact of a participatory development project called Jeevika.

Jeevika embodies the new architecture of development assistance: it attempts to alleviate poverty through bottom-up participation and institution building. More concretely, it forms networks of women’s Self-Help Groups and mobilizes those networks to improve women’s quality of life in many ways, including expanding credit and savings, generating incomes, developing skills, reducing malnutrition, and alleviating domestic violence. The aim of our evaluation was to understand the impact of Jeevika on women’s empowerment and to unpack its myriad forms and trajectories. Drawing on several sociological theories, concepts, and methodologies helped us better understand impact by comparing treatment villages to villages where the project was absent. What follows are some overarching conclusions from our experience in the field.

The success of the project rests heavily on its frontline workers

The frontline workers of a poverty alleviation project – in our case, the “community coordinators” – are uniquely positioned as translators between development policy and practice. Scrutinizing their dispositions, skills, motivations, and the meanings they attach to their practices as they unfold is critical for any evaluation, and participant observation – a mode of inquiry in which the evaluator embeds herself within the project – is well equipped to do so. We used participant observation to follow day-to-day interactions between coordinators and Jeevika women, both within and outside the formal space of the project, and found that the strength of the coordinator lies in openly addressing the cultural dimensions of inequality. For instance, their role went far beyond simply mobilizing women and increasing project membership; to sustain participation, they also focused on changing otherwise rigid norms around mobility and domestic violence in the villages.

It takes a village to empower a woman

A second conclusion is that programs that target women individually are not as effective as programs that target their entire context. For example, we found that one of the first steps frontline workers took when entering a village was to carefully build alliances with key stakeholders. Rather than securing buy-in at the outset alone, they enrolled supporters throughout the project’s life cycle, which helped sustain women’s participation in the project. The qualitative study allowed us to understand how these alliances are built in a large-scale project such as Jeevika. While the quantitative evaluation captured individual-level impacts on women – such as greater income, higher social capital, or greater political participation – it missed this central facet of the story: the mechanisms that produce these impacts.

Moments of failure are as crucial as moments of ‘success’

In a project’s monitoring system, failure is often assumed when the project’s or actor’s intentions are not achieved. For instance, many women attempted to start a business, reduce their informal debt burden, or contest elections as a result of Jeevika, but ultimately failed to earn an income, cut out the moneylender, or win the election. Failure of this nature, however, does not invalidate the act itself. On the contrary, by observing the everyday life of the project, we found that the attempt itself is in fact ‘successful’ in that it brings into the realm of possibility an entrepreneurial woman or a female politician and gives other women and children the ‘capacity to aspire’. Moreover, it is only through the rehearsal and repetition of these ‘failures’ that the ground is laid for success – a subtle treatment effect that is hard to capture in a quantitative survey.

Overall, participatory development projects are infamous for generating unpredictable and vastly different trajectories of change. But understanding the nuts and bolts of how and why these trajectories differ (or, in our case, why empowerment is harder in some villages than in others) requires careful qualitative fieldwork, so that the learning can be integrated into the project cycle midstream. The insights outlined above were particularly useful when the project scaled up from a few districts to the whole state of Bihar, because it is often these nuances – the quality of the frontline workers, the drive to tackle the wider context, and the willingness to honor and repeat ‘failures’ – that are compromised as a program scales up and struggles to adapt to the needs of a larger base of participants.