Tracking impact in development

VoxDev Blog

Published 09.04.25

Insights from the work of J-PAL, BII, and DIME shed light on how organisations track and evaluate the impact of development interventions at scale.

In this VoxDev webinar, in collaboration with J-PAL, Paddy Carter, Aparna Krishnan and Arianna Legovini joined Managing Editor Oliver Hanney to discuss insights and reflections on tracking impact based on their experiences across research and policy. You can catch up on this event in the video below, and we have summarised the key takeaways in the rest of this blog post. We received many more questions than we could get round to – where possible, we have highlighted relevant reading that provides answers to these.

Moving from individual interventions to impact at scale

Aparna Krishnan described J-PAL’s journey from generating evidence through randomised evaluations of individual interventions to synthesising and disseminating insights that help governments, NGOs, donors and the private sector apply that evidence to some of the most pressing questions in international development.

While initially focused on identifying specific effective interventions, J-PAL has since expanded its emphasis to aggregate effects across a portfolio of evaluation studies and to the value of adopting an evidence-informed approach to decision making at scale.

"With this large and growing database, it lets us distil insights on specific topics and sectors… so when this is combined with local context… our hope is that it starts adding up to actionable insights." Aparna Krishnan

J-PAL largely thinks about tracking the organisational impact of evidence-informed policy making at three levels:

  1. The intervention level: There are a number of potential solutions – what should we be doing? To compare the impact of the same programme across different contexts, or to compare different programmes that aim to achieve the same outcome, J-PAL uses cost-effectiveness and cost-benefit analyses, reporting impact either as standardised metrics or as the monetised value of the benefits of one or more outcomes, evaluated in relation to programme costs (a simple formulation is sketched after this list).
  2. The portfolio level: What happens to a cluster of evaluation studies we’re undertaking? J-PAL tracks this using a simple measure: the number of people who have benefitted from policies and programmes informed by the adoption of insights from evaluations at scale. More nuanced efforts to assess the impact of a portfolio of investments in a research initiative include estimating benefit-cost ratios using a social return on investment approach. For instance, J-PAL’s Innovations in Government Initiative (IGI) has been recommended as a high-impact funding opportunity.
  3. The system level: Are there changes in the way decisions are being made, based on the availability of improved evidence? Institutional change is more difficult to measure quantitatively and track. J-PAL is exploring the use of relevant metrics across government partnerships, such as the demand for and use of evidence to inform specific policies/programmes; allocation of resources by governments to implement evidence-based programmes at scale; investments to strengthen government capacity (staff, technology, infrastructure etc.) for monitoring and evaluation, data quality and use, research studies, and evidence use; and public statements by stakeholders championing the use of data and evidence in policy making, among others.
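To make the intervention-level distinction concrete, here is a simplified sketch of the two kinds of metrics (our own shorthand, not J-PAL’s exact formulation): a cost-effectiveness analysis reports the cost of achieving one unit of a standardised outcome, while a cost-benefit or social return on investment analysis monetises benefits and compares them with costs.

$$\text{Cost-effectiveness ratio} = \frac{\text{total programme cost}}{\text{units of outcome achieved}}, \qquad \text{Benefit-cost ratio} = \frac{\text{monetised value of benefits}}{\text{total programme cost}}$$

On these definitions, two education programmes can be compared on cost per additional year of schooling, for example, and a benefit-cost ratio above one indicates that monetised benefits exceed costs.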

"It really comes down to saying, what decision are we trying to inform, and that should inform the methods and the metrics that we're using, and ultimately how we interpret those insights." Aparna Krishnan

Tracking impact at Development Finance Institutions

Paddy Carter highlighted the distinct challenges faced by development finance institutions (DFIs) compared to grant-giving research organisations. BII invests equity and debt in private enterprises, complicating direct impact evaluations due to limited control over investees and data access challenges.

"When you are an investor in a private business, it is not often possible to design research in a way that allows identification of causal effects." Paddy Carter

Therefore, DFIs operate more in the realm of “impact monitoring”, but a lot can still be learned by digging into the details of what a business does. BII achieves this through comprehensive monitoring plans, annual surveys, and evaluations. Carter emphasised the balance required between meaningful impact tracking and minimising reporting burdens on investees.

"Depending on who you speak to, DFIs like us are either collecting not nearly enough information about the impact that we have, or we're collecting far too much and placing too much of a burden on the businesses that we invest in." Paddy Carter

20 years of evaluating impact at DIME

Arianna Legovini outlined the evolution of DIME’s approach over the last two decades, emphasising the integration of advanced technologies like AI, behavioural science, and rigorous RCT methodologies. Initially focused on embedding scientific research directly into World Bank operations, DIME now prioritises real-time policy influence through innovative tools and methods.

"AI helps us move much faster and also smarter, and behavioural science ensures that we stay grounded in human behaviour." Arianna Legovini

Legovini provided examples of impactful projects, such as a zero hunger initiative in which predictive AI significantly improves the efficiency of humanitarian aid, and a groundbreaking effort in Nigeria to reduce hate speech through AI-driven social media interventions.

Importance of learning from failures and complex systems

Aparna Krishnan stressed that understanding failures provides essential insights into implementation and contextual factors, reinforcing evidence-based policymaking.

"Things that don’t work are often as helpful to learn from as things that work." Aparna Krishnan

The panel also discussed the challenges of assessing impacts in complex environmental and climate-related contexts, advocating for diverse methodologies, extensive data analytics, and patience in realising systemic impacts.

Real-time tracking and adapting interventions

There is a particular need for improved real-time impact tracking mechanisms during programme implementation. Paddy Carter explained how DFIs use continuous monitoring to manage investments proactively, while Arianna Legovini described DIME’s innovative combination of AI and behavioural science to adapt interventions dynamically.

"Most of development for the greatest number of years happened with eyes closed, jumping in the dark… what we are here for is to improve the chances that what we invest in has good results." Arianna Legovini

Building institutional capacity and systemic change

A recurring theme was the critical role of building institutional capacity within governments and organisations to effectively use evidence-based approaches. Aparna Krishnan highlighted ongoing efforts at J-PAL to foster strategic, long-term government partnerships, aiming to institutionalise evidence-based decision-making.

"We look at demand for evidence from government… allocation of resources by governments to evidence-based programmes." Aparna Krishnan

Legovini complemented this by emphasising the need to provide accessible and actionable evidence not only to policymakers but also to broader societal stakeholders, creating wider channels of accountability.

Looking forward: Leveraging evidence for impact

The webinar concluded on an optimistic note, with panellists expressing excitement about future opportunities for impact evaluation. Innovations in data collection, AI, and increased global connectivity promise enhanced effectiveness and scalability in tracking development outcomes.

"We really have an opportunity to make true to our commitments to improving the lives of people all over the world." Arianna Legovini

Recommended reading on tracking impact

There were a number of questions about the specific methods of the organisations represented on the panel; you can learn more about their approaches here: Development Impact (DIME) at the World Bank (About our model), British International Investment (How we learn) and J-PAL (Evidence to policy).

On how well impacts generalise across settings, make sure to check out J-PAL’s resources explaining the generalizability framework, which can serve as a starting point, and their approach to cost-effectiveness and welfare analysis, which covers practical cost considerations and serves as an input for policymakers seeking to act on rigorous evidence.

On cost-benefit analyses, check out this VoxDev Blog: Cost-benefit analysis: What we are learning about what we were missing.

On the political economy of trying to influence policy decisions, check out this podcast with Stefan Dercon: How should economic researchers give policy advice?

On assessing impacts on climate-related outcomes such as biodiversity, and the challenges of tracking complex environmental systems, check out this recent podcast with Eyal Frank on The economics of ecosystems: How nature and economics interact.

When RCTs are not possible, economists have a range of other tools they can leverage. This recent blog summarises some of this work that we have featured on VoxDev: Nine examples of successful government policies.