Jeffrey Sachs’ Millennium Villages Project has to date delivered an array of life-saving interventions in health, education, agriculture, and infrastructure in 80 villages across ten African countries.
The goal of this project is nothing less than to “show what success looks like.” With a five-year budget of $120 million, the MVP is billed as a development experiment on a grand scale, a giant pilot project that could revolutionize the way development aid is done.
But is the project a success? To address that question, we need to know: What kind of data is being collected? What kinds of questions are being asked? Three years into one of the highest-profile development experiments ever, who’s watching the MVPs?
The most comprehensive evaluation of the project published so far is a review by the Overseas Development Institute, a large UK-based think tank. The review covered two out of four sectors, in four out of ten countries, with data collected in the MVs only, not in control villages. The report’s authors cautioned that “the review team was not tasked and not well placed to assess rigorously the effectiveness and efficiency of individual interventions as it was premature and beyond the means of the review.”
Despite this, a Millennium Villages blog entry on Mali says, “With existing villages showing ‘remarkable results,’ several countries have developed bold plans to scale up the successful interventions to the national level.” Millennium Promise CEO John McArthur described Sachs’ recent testimony to the Senate Foreign Relations Committee: “Sachs noted the success of the Millennium Villages throughout Africa and the tremendous development gains seen in the project over the past three years.”
The Evaluation that Isn’t?
In contrast, evaluation experts have expressed disappointment in the results they’ve seen from the Millennium Villages Project to date. This isn’t because the MVPs fail to produce impressive outcomes, like a 350 percent increase in maize production in one year (in Mwandama, Malawi), or a 51 percent reduction in malaria cases (in Koraro, Ethiopia). Rather, it has to do with what is—and is not—being measured.
“Given that they’re getting aid on the order of 100 percent of village-level income per capita,” said the Center for Global Development’s Michael Clemens in an email, “we should not be surprised to see a big effect on them right away. I am sure that any analysis would reveal short-term effects of various kinds, on various development indicators in the Millennium Village.” The more important test would be to see if those effects are still there—compared with non-Millennium Villages—a few years after the project is over.
Ted Miguel, head of the Center of Evaluation for Global Action at Berkeley, also said he would “hope to see a randomized impact evaluation, as the obvious, most scientifically rigorous approach, and one that is by now a standard part of the toolkit of most development economists. At a minimum I would have liked to see some sort of comparison group of nearby villages not directly affected by MVP but still subject to any relevant local economic/political ‘shocks,’ for use in a difference-in-differences analysis.” Miguel said: “It is particularly disappointing because such strong claims have been made in the press about the ‘success’ of the MVP model even though they haven’t generated the rigorous evidence needed to really assess if this is in fact the case.”
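The difference-in-differences approach Miguel describes can be sketched in a few lines: compare the change in an outcome in the treated villages against the change in comparison villages over the same period, so that regional “shocks” affecting both groups cancel out. The sketch below is purely illustrative; the function and all numbers are invented and are not MVP data.

```python
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences estimate: the change in the treated
    group minus the change in the comparison group, which nets out
    trends and shocks common to both groups."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical example: maize yield (tons/hectare) in a Millennium
# Village cluster versus a nearby comparison cluster, before and after
# the intervention. Numbers are made up for illustration.
effect = diff_in_diff(treated_before=1.0, treated_after=3.5,
                      control_before=1.25, control_after=1.75)
print(effect)  # 2.0: the 2.5-ton gain minus the 0.5-ton regional trend
```

The point of the subtraction is exactly the one Miguel raises: without a comparison group, a 2.5-ton gain in the treated villages cannot be separated from whatever gain the whole region would have seen anyway.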
An MVP spokesperson told me that they are running a multi-stage household study building on detailed baseline data, the first results from which will be published in 2010. The sample size is 300 households from each of the 14 MV “clusters” of villages (which comprise about 30,000–60,000 people each). She also said that their evaluation “uses a pair-matched community intervention trial design” and “comparison villages for 10 MV sites.”
But Jeff Sachs noted in a 2006 speech that they were not doing detailed surveying in non-MV sites because—he said— “it’s almost impossible—and ethically not possible—to do an intensive intervention of measurement without interventions of actual process.” A paper the following year went on to explain that not only is there no selection of control villages (randomized or otherwise), there is also no attempt to select interventions for each village randomly in order to isolate the effects of specific interventions, or of certain sequences or combinations of interventions.
CEO John McArthur declined to comment on this apparent contradiction. The MVP spokesperson could say only that the evaluation strategy has evolved, and promised a thorough review of their monitoring and evaluation practices in 2010.
Comparison villages could be selected retroactively, but the MVP has failed to satisfactorily explain how they chose the MVs, saying in documents and in response to our questions only that they were “impoverished hunger hotspots” chosen “in consultation with the national and local governments.” If there was no consistent method used in selecting the original villages (if politics played a role, or if villages were chosen because they were considered more likely to succeed), it would be difficult to choose meaningful comparison villages.
Living in a Resource-Limited World
Imagine that you are a policymaker in a developing country, with limited resources at your disposal. What can you learn from the Millennium Villages? So far, not very much. Evaluations from the MVP give us a picture of how life has changed for the people living in the Millennium Villages, and information about how to best manage and implement the MVP.
Sandra Sequeira, an evaluation expert at London School of Economics, sums up the quandary neatly. “Their premise is that more is always better, i.e. more schools, more clinics, more immunizations, more bed nets. But we don’t live in a world of unlimited resources. So the questions we really need to answer are: How much more? Given that we have to make choices, more of what?”
These are tough questions that the Millennium Villages Project will leave unanswered. For a huge pilot project with so much money and support behind it, and one that specifically aims to be exemplary (to “show what success looks like”), this is a disappointment, and a wasted opportunity.