Development practitioners often need quick data on outcomes and processes related to ongoing project implementation. In practice, "quick data" is often gathered through ad hoc visits to field sites where things are working well, where project activities started early, or simply sites that are convenient to visit. Good monitoring systems, meanwhile, are extremely rare and hard to set up, and "reliable data" such as rigorous impact evaluations takes a long time to produce and typically arrives at the end of a project.
Quick data can, however, become more reliable and useful if it is collected with appropriate tools and covers a representative sample of field sites. It can be a valuable aid to project learning and adaptation when used to answer the routine questions practitioners face during the early stages of project design and implementation. And it can help adapt complex, large-scale interventions that have high variability in local contexts and unpredictable change trajectories but poor data systems.
A central challenge for the complex “livelihoods” projects that we worked with was to translate design into outcomes by getting field implementation right. They therefore needed data to assess if implementation reflected design principles, and if it was appropriate to field conditions.
These projects also often used small pilots to design new interventions. Quick data on processes and outcomes, much like a beta test during product development, could both help assess a pilot's potential to become a full intervention and inform its design and field implementation.
While quick data cannot substitute for systematic learning on project design and implementation, it can complement it. Where there is very little credible and systematic data, which is not uncommon, quick data may be the only feasible route to reliable learning. Over the last three years, we worked with our project counterparts to meet multiple quick data needs, helping them identify the tools and field samples most appropriate to the question at hand.
- We designed a case study to investigate whether a one-billion-USD project designed to be context-driven was in fact being translated into a uniform mandate in the field.
- A case study was also used to examine whether the implementation arrangements for a pilot mental health intervention could get its delivery right.
- We designed rapid household surveys to beta-test two design pilots. The first tracked take-up of an employment scheme following a pilot information intervention; the second tracked improvements, if any, in the supply of food grains after community-based organizations started managing a public food program. Since operational conditions ruled out randomization, the beta-tests compared pilot areas to propensity-score-matched non-pilot areas (a minimal sketch of this matching step follows the list).
- Where available, credible monitoring data can also be useful for assessing project implementation. For example, descriptive analysis of monitoring data from a large-scale skills-training intervention was used to assess whether the intervention had met its poverty and gender targets.
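For readers curious about how a pilot versus non-pilot comparison can be set up without randomization, the sketch below illustrates propensity-score matching in Python. It assumes a hypothetical area-level dataset with baseline covariates, a pilot indicator, and an outcome column; the column names and data are illustrative, not the projects' actual data or code.

```python
# Minimal sketch of propensity-score matching for a pilot vs. non-pilot comparison.
# Assumes a hypothetical area-level DataFrame with baseline covariates, a `pilot`
# indicator (1 = pilot area), and an `outcome` column; all names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_comparison(df: pd.DataFrame, covariates: list[str]) -> float:
    """Difference in mean outcome between pilot areas and their
    nearest-neighbour matches (on the propensity score) among non-pilot areas."""
    # 1. Estimate the propensity score: probability of being a pilot area
    #    given baseline covariates.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["pilot"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["pilot"] == 1]
    control = df[df["pilot"] == 0]

    # 2. Match each pilot area to the non-pilot area with the closest score.
    nn = NearestNeighbors(n_neighbors=1)
    nn.fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. Compare mean outcomes across the matched samples.
    return treated["outcome"].mean() - matched_control["outcome"].mean()

# Example call with fabricated data, purely to show the shape of the inputs.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "baseline_income": rng.normal(100, 20, 200),
    "distance_to_market": rng.uniform(1, 30, 200),
    "pilot": rng.integers(0, 2, 200),
    "outcome": rng.normal(50, 10, 200),
})
print(psm_comparison(demo, ["baseline_income", "distance_to_market"]))
```

In practice one would also check covariate balance after matching and consider more robust estimators, but the basic logic is the same: model the probability of being a pilot area, then compare like with like.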