
Best buys meet political realities: The political economy of education research

Article

Published 22.04.25

Why do policymakers choose education reforms that aren’t supported by evidence? And how can researchers work with them to implement interventions with better outcomes? These are thorny questions often faced by education researchers and stakeholders worldwide.

This blog originally appeared on the What Works Hub for Global Education Blog in two parts: Part 1 and Part 2.

An election promise

‘Provide … laptop computers equipped with relevant content for every school age child in Kenya’ was one of the key commitments on education in the 2013 election manifesto from the Jubilee Coalition, a political alliance in Kenya.[1] Its leader, Uhuru Kenyatta, emerged victorious and he ruled the country as President until 2022.

Delivery of this project started in 2015, with $600m committed. With most of the funds spent, the project was put on hold in 2019. It was subsequently abandoned and written off as a failure.[2]

The ‘One Laptop per Child’ (OLPC) programme grew out of Massachusetts Institute of Technology (MIT) in 2005. In its early incarnation, it involved simply putting cheap laptops in the hands of children. This version caught the imagination of policymakers and funders alike.

Yet by 2013, academics had already warned that it was unlikely to work[3], with rigorous evidence emerging from other countries, such as China and Peru, that distributing laptops had no positive impact on learning.[4]

A wealth of evidence in Kenya

It is puzzling that Kenya took this route, as a substantial body of education research had already built up relevant evidence on how to innovate in its education system. By the time President Uhuru Kenyatta came to power, Kenya was one of the African countries with the most evidence on ‘what works’ in education, and on what does not.

Well-known studies published in the best journals had looked at, among others, the learning and other impacts of contract teachers, better teacher incentives, scholarships, physical inputs, school meals and health interventions such as deworming.

Much of the work involved Michael Kremer and featured in the citation for his Nobel Prize in 2019.[5] Those involved tried to bring this emerging research from Kenya and elsewhere to the attention of senior officials in Kenya in good time – with, for example, special notes prepared for the Prime Minister’s Office.[6] In their careful advice, ‘One Laptop per Child’ was not mentioned as a ‘what works’ intervention.

The fact that much of this work was with NGOs and not with state schools may have made these senior officials less interested – maybe in some cases even for justifiable reasons, as it was argued later on.[7]

A highly promising alternative

Officials had fewer excuses to ignore the evidence generated by the Kenya Primary Math and Reading (PRIMR) Initiative, produced in close collaboration with the Ministry of Education (together with the UK’s Department for International Development and USAID).

PRIMR was an applied research programme in 1,384 schools, focused on improving the English, Kiswahili, mother-tongue literacy and mathematics skills of children in grades 1 and 2. By 2013, it had already started to produce rigorous evidence of impact on learning outcomes.[8]

Still, I recall the most senior officials of the Ministry of Education telling me in 2014 that all attention had to go to the Kenyan version of ‘One Laptop per Child’, as it was the President’s priority.

Meanwhile, the rigorous evidence on the relative success of PRIMR accumulated further, and it was subsequently scaled up as Tusome.[9] Despite continued enthusiastic donor support, it never received the same political attention or equivalent financial support as the President’s programme. Since 2015, the Government has added $62.5m to the donor resources – still much less than it spent on the ‘One Laptop per Child’ programme.[10] By the time USAID’s role ceased in 2022, some of the early gains may have been lost[11], although overall it is still one of the few education programmes in Africa that can be seen to have been impactful at scale.[12]

Explaining the choice

So why did President Kenyatta ignore ‘what works’ and pick a programme that was proven not to work? Why not settle for the rich evidence, either presented by top academics to his officials, or generated with the close collaboration of his own Ministry?

I cannot be certain, but it is unlikely that President Kenyatta and his advisors were simply foolish and ignorant. I doubt that better and glossier policy briefs from ever larger comms teams would have been more effective in selling the academic research. I also doubt that more Ambassador visits to the President-elect would have swayed him away from the laptops and towards the evidence-based advice from PRIMR.

I suspect the answer is that the evidence simply did not work for him and his political allies: he ended up picking something from his list of best buys in politics, even if it ended up costing $600m.[13]

Some have argued that this project was just ‘an ego project since the leaders are basically trying to outdo their opponents by showing that they can meet whatever promises they had made to the electorate’, even if it needs a miracle.[14] ‘One Laptop per Child’ definitely was not based on evidence, but, as one has seen across the world, on an attractive political narrative that alludes both to something ‘innovative’ and to a direct ‘asset transfer’ to the children and their families.

What can researchers do?

Where does this leave the education researcher, patiently and rigorously trying to figure out how to boost learning by another fraction of a standard deviation? How can they have any impact if the choices of the policymakers they must deal with are not motivated by outcomes?

In a recent article, I tried to characterise how one could try to deal with a policymaker who does not care about development, or at least cares about (too) many other things, such as staying in power or rewarding friends and allies with jobs and contracts. What does this mean for research and the advice one gives?[15]

The problem extends readily to education research and evidence: how to deal with decision-makers who are not interested in evidence of what works for learning, or who must at least weigh their decisions against vested interests in keeping power and delivering for those who keep them there, from contractors to teacher unions to outright cronies. In short, how do you appeal to decision-makers whose objective is not simply boosting learning outcomes?

Potential responses

One response is the most common: do the research as if the politics of research uptake and implementation does not exist. If one cares about real-world impact, this is a naïve approach. Nevertheless, it can be consciously naïve: it is worth knowing how to improve pedagogy through more structure or how cheap teaching assistants can do a far more cost-effective job (even if the local teacher unions will make sure your intervention will never see the light of day in any state school).[16]

This approach stops neither academic publications nor a successful academic career. However, if one cares about actual impact at scale, it surely is not simply naïve but also deeply disappointing to spend years designing interventions ‘that work’ but that will never be taken up.

The lack of impact stems here from misaligned objectives: the researcher and intervention designer want to boost learning outcomes, and the policy decision-maker and implementer may like better learning outcomes but also have other objectives and constraints that drive their actions. Impact clearly demands understanding these objectives and constraints, or interventions may never be implemented.

Therefore, the key question is this: how do you marry researchers’ and policymakers’ interests in one intervention?

Crafting effective interventions that policymakers want

So far, I have explored how President Kenyatta of Kenya chose ‘laptops for all’ as a campaign promise even though evidence supported a different education intervention.

My conclusion is that he did not make this choice because he was unaware of the evidence. Rather, the ‘laptops for all’ policy was a rational decision to meet his political objectives. It was a ‘best buy’ for politics rather than a ‘best buy’ for education.

So, how do we create interventions that meet both researchers’ and policymakers’ objectives? How do we design interventions that have the highest chance of being adopted and implemented by policymakers?

The obvious answer is to design interventions that maximise the policymakers’ objective function given their constraints. For researchers locked into research contracts from donor agencies that must show ‘impact’, this is surely the route they will take.

Obviously, getting a senior decision-maker to reveal all and ‘tell you what I want, what I really really want’ is easier said than done. To do so, researchers should invest in fully understanding the actual objectives of senior decision-makers, not simply the objectives stated in speeches and at roundtables.

Suppose decision-makers value boosting educational opportunities and outcomes but also want to offer something tangible that voters can relate to. An example from Kenya is the abolition of school fees. This was Mwai Kibaki’s platform in the 2002 election: free primary education for all. Its implementation from 2003 has been fraught, but overall the assessment is still that it improved average outcomes for children and that it was pro-poor, even if in practice it hasn’t quite worked in the way it said on the tin of the election promise.[17]

So, what does this mean for the researcher? ‘Free education’ can be considered a happy marriage between a pro-poor education policy and a best buy for politics. Researchers keen on boosting outcomes can surely celebrate the increased access, but then be truly impactful by focusing on evidence on how to make it work: for example, how to optimise stretched resources by encouraging the most cost-effective ways to teach when class sizes get bigger. I am surely not alone in finding, in such circumstances, willing Ministry of Education officials or head teachers interested in scalable solutions to their pressing needs.

What if researchers’ and policymakers’ needs widely differ?

However, I am not simply cynical when stating that this happy marriage between what the researchers and the policymakers want is not always there. ‘What they really really want’ may be uglier.

Suppose that senior decision-makers value boosting education outcomes but mainly need to balance competing objectives: they need something tangible that appeals to voters, but that also appeals to their political funders or powerful civil servants, both of whom would value a juicy procurement contract worth hundreds of millions. The proposed policies should also avoid upsetting teacher unions. ‘One Laptop per Child’ seems to fit the bill.

So, how can one be a researcher keen to have an impact on education in Kenya? One reaction could be to stay away from impact-focused work and revert to the first case: research alternative education policies, feeding them via policy briefs and events to those who should act on them, but with little hope that any advice is taken, as all attention will go to the programmes that are the best political buys in the circumstances.

The more ‘impactful’ alternative is to help senior policymakers with research that investigates how to win votes with education announcements, staying away from anything that might upset the teacher unions, and preferably with some silver bullets that require procurement.

Of course, that role does not sit well with most researchers, as it turns advisors and researchers into mercenaries for less-than-scrupulous decision-makers. While well-meaning civil servants or internal advisors may not have much choice, outside researchers and advisors will surely find it problematic, even if, as in the case of multilateral agencies like the UN or the World Bank, their job is to work with government.

What’s left then? Only a choice between being a naïve researcher or advisor with little impact, and being a mercenary if impact on learning outcomes is what is desired and required?

A different view of the issue

I want to argue that there is an alternative. The first step is to recognise that political incentives of decision-makers are what they are – well-meaning objectives mixed with a quest for power and re-election through rewarding supporters with jobs and contracts, or building schools or other investments in loyal constituencies. Not for nothing is politics in a country like Kenya described as ‘competitive clientelism.'[18]

The second step is to accept that it is fine for a researcher to have objectives that are different from those in power and to look for ways in which the best possible learning outcomes can be achieved through research and advice.

This can be cast as a Principal-Agent problem: a conflict of interest that occurs when an agent acts in a way that is not in the principal’s best interests. Maybe somewhat surprisingly, the right way to think about it is to see the Principal here as the education researcher (and all others that seriously want to prioritise boosting learning outcomes), while the Agent is the government led by decision-makers on education policy with less desirable objectives than just boosting learning, and constrained by other vested interests.

How to solve the issue as a Principal-Agent problem

The solution to this problem is first to characterise the set of actions that the Agent is willing to take – basically, actions that leave them at least as well off in terms of their own objectives as before – and then for the Principal to look within this set for the best possible programmes and interventions in terms of learning outcomes.
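In stylised form, this two-step logic can be written as a small constrained optimisation problem. The notation below is illustrative only, my sketch rather than the formulation in Dercon (2024):

```latex
% a:      a candidate intervention from the feasible set A
% L(a):   expected learning outcomes under intervention a
% U(a):   the decision-maker's political payoff (votes, contracts, union peace)
% \bar{U}: the payoff of the status-quo 'best political buy'
\[
  \max_{a \in \mathcal{A}} \; L(a)
  \qquad \text{subject to} \qquad U(a) \ge \bar{U}
\]
```

The constraint carves out the set of actions the Agent is willing to take; the Principal (the researcher) then picks, within that set, the intervention that does most for learning. Working on the political conditions themselves amounts to relaxing the constraint.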

In Dercon (2024) this is described and discussed further.[19] In simple terms, it suggests that to be most impactful, we need to be politically-informed researchers, not naïve researchers or mercenaries.

So what could this mean? If the objectives are indeed to look for silver bullets that sell well to voters, that do not annoy teacher unions and that possibly involve some attractive procurement, then the task is hard.

For the sake of argument, could one not have introduced some fast experimentation and added some untested but politically attractive features to Tusome? For example, ‘One Textbook per Child’ plus more focus on the digital aspects of Tusome, including for children directly, building on the focus on teaching materials and digital support through tablet-based monitoring that were already part of it? It may have made the programme somewhat more expensive and less cost-effective but given what happened next, it could have avoided $600m of wasted resources.

Alternatively, researchers could have responded by offering studies and evidence on better procurement of educational materials such as laptops, to bring down wastage in education, or by designing complementary studies to boost the impact of laptops in schools, even if the underlying programme, ‘One Laptop per Child’, was known to be fraught with problems and low impact.

All this may feel like quite a compromise. One objection to these examples is that they treat this interaction between researcher and policymaker as a one-shot game: research for impact must respond to the current reality of politics.

Maybe research can also be a game of strategic patience: research important questions now, even if they are not sufficiently urgent for the political powers to care much about it, but the time for these insights will come. It would mean that there is little point in spending too much time trying to influence current decision-makers if their objectives really are not consistent with what one cares about, but wait for entry points and windows of opportunity in the future. Think of climate researchers in the 1980s and 1990s – their time came later, but it was worth building up climate science as they did.

Taking it further

A committed researcher can, however, go one step further. Rather than taking the political conditions as a given constraint on impact, one could also treat these conditions as something that can be influenced.

For example, if it were possible to make learning outcomes more salient for voters, so that they cannot be as easily bought off by some gimmicky, shiny asset transfer, then it could strengthen those in the local education system who genuinely care about outcomes without upsetting the political leadership. Or one could choose to work on researching and then publicising more broadly how procurement of education inputs is being manipulated and push for stronger transparency; and if this effort creates enough noise, well-meaning politicians will be less constrained by their political funders to provide rewards via education procurement.

The best result is that research may help to shift the objectives being pursued through education policies to be more in line with improving educational outcomes. Of course, there is something subversive about this: picking research to shift the objectives of people in power, often with the legitimacy of the ballot box behind them. And success will depend on the existence of suitable interlocutors who can help with entry points or who can create windows of opportunity.

Some of this will feel uncomfortable for researchers used to the plush surroundings of the ivory tower, and the examples given may be too extreme. But the key point – that to be impactful researchers, they should understand the political reality of educational policy – cannot be dismissed in any country. While it is true that one can ignore the politics of policymaking to conduct good research in any discipline, politics cannot be ignored to conduct good research that is also highly impactful.