
Millennium Villages: don’t work, don’t know or don’t care?

UPDATE 10/16 12:25PM: Tim Harford in the FT also covers the Clemens and Demombynes paper and gets a response from Sachs.

In a new paper, Michael Clemens and Gabriel Demombynes ask:

When is the rigorous impact evaluation of development projects a luxury, and when a necessity?

The authors study the case of the Millennium Villages, a large, high-profile project originally meant to demonstrate that a package of technology-based interventions in education, health, and agriculture could lastingly propel people living in the poorest African villages out of poverty within five (now ten) years.

One way Clemens and Demombynes get at their central question is to examine how the Millennium Villages are (so far) being evaluated, and ask whether a more rigorous method of evaluation would be 1) feasible and 2) likely to yield very different results. They answer 1) yes and 2) yes.

They start by looking at the findings of a Millennium Villages midpoint report released last summer, which shows movement on indicators (higher crop yield, more cell phone usage, fewer cases of malaria, etc.) against a baseline of data collected in those same villages three years prior. In the graph of cell phone ownership in Kenya below, this progress is charted by the black line.

Clemens and Demombynes then put this data in the context of how non-Millennium villages in the same country and region are faring on these same indicators, using publicly available data from national surveys. These are the red, blue, and green lines in the figure below.

What is going on in non-Millennium villages in Kenya to drive up the number of cell phone users? Conventional wisdom is that it’s driven in very small part by outside aid and in large part by entrepreneurs, small and large. (There is a whole series of these graphs in the paper; many show more improvement in MVs than in comparators, while a few show worse performance in MVs.)
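
To make the comparison concrete, here is a minimal sketch of the calculation these graphs imply, in Python, using made-up numbers rather than figures from the paper or the MVP report, and assuming for simplicity that the comparison data are measured in the same years as the MV surveys:

    # Minimal sketch of the trend comparison, with made-up numbers.
    # "mv" is a hypothetical Millennium Village indicator (say, the share of
    # households owning a cell phone); "national" is the same indicator from
    # a hypothetical national survey, measured in the same years.
    mv = {"baseline": 0.10, "midpoint": 0.45}        # hypothetical MV site values
    national = {"baseline": 0.12, "midpoint": 0.40}  # hypothetical comparator values

    mv_change = mv["midpoint"] - mv["baseline"]
    national_change = national["midpoint"] - national["baseline"]

    # The gap between the two changes is the part of the MV improvement not
    # explained by the country-wide trend (a simple difference-in-differences).
    print(f"MV change:       {mv_change:+.2f}")
    print(f"National change: {national_change:+.2f}")
    print(f"MV extra:        {mv_change - national_change:+.2f}")

The paper’s actual comparisons are more careful than this, but the basic idea is the same: the question is not whether an indicator improved in the Millennium Villages, but whether it improved more than it did everywhere else.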

The paper goes on to describe the weaknesses in the MVP’s published plans for future evaluations, which do involve comparison villages, and suggests how future waves of MVP interventions could be more rigorously evaluated without spending a lot more. (Summary here.)

The MVP team responded to the critique, saying that it “misunderstands the MVP’s aims and evaluation methods.” They shift away from portraying MVP as a demonstration project: the “primary aim” is “to achieve the Millennium Development Goals in the Project sites.”


9 Comments

  1. David Zetland wrote:

    Let me fix that: the “primary aim” is “to keep funding our projects and evaluations”

    These guys need to STOP moving the goal posts and START doing real randomized trials (or something LESS Potemkinesque).

    Posted October 15, 2010 at 12:33 pm | Permalink
  2. Dembem wrote:

    I know the everyday prayer at AidWatch is that the MVP fail woefully. You regularly loose your sense of objectivity when writing about the MVP. Other figures in the report paint a better picture of the project than what you have here. A clear case of selective reporting. I do take you seriously, but not when it concerns things around Jeffrey Sachs. You will never write one positive thing about his work. It’s shameful at times.

    Posted October 16, 2010 at 7:29 am | Permalink
  3. mike wrote:

    5 to 10 years is a pretty hefty goal IMO. It will take a lot of funding and help to reach those poverty goals.

    Posted October 16, 2010 at 10:20 am | Permalink
  4. Kim wrote:

    @dembem

    Your statement that aidwatch’s daily prayer is for the MVPs to fail belies your own implied objectivity. Furthermore, if you want to convince me that they have not been objective or have been selective, then cough up some evidence. Personally, I am hard pressed to be convinced that Jeff Sachs is doing good. I don’t think he is evil, but I do think a lot of what he promotes is counterproductive to development. He glosses over the root causes of poverty, he promotes generalized, technocratic and neoclassical solutions to context specific and complex problems, he is widely dismissive of the rabid political context in which we live and he tries for popular over substantial communication (which I think is a combination of his worldview and the fact that he seems to lack a critical thinking bone in his body). So the fact that aidwatch continues to hold him and his work to account can easily be read as legitimate accountability action. If thinkers ontologically and epistemologically disagree, then they aren’t going to easily concede.

    Finally, it is lose, not loose. It baffles me that so many people confuse these two words!

    Posted October 16, 2010 at 12:05 pm | Permalink
  5. d Shea wrote:

    I looked at the graphs and found they needed too much interpretation. The scales are different for nearly every one, making comparisons difficult, and many don’t start at 0. Another misleading factor is that two data points are connected by a line, which implies information about the intervening years that may not be true.

    Posted October 16, 2010 at 12:43 pm | Permalink
  6. Francis wrote:

    Are these trends statistically significant?

    Posted October 16, 2010 at 4:25 pm | Permalink
  7. Gabriel Demombynes wrote:

    @d Shea: Thanks very much for your comments. I’m not sure what you mean by “needed too much interpretation.” Graphs need interpretation.

    It doesn’t matter that the scales on the graphs are not the same, since we’re not comparing the lines across countries; we only compare the trend lines within each graph. For this purpose, it makes sense to choose the scale most appropriate to the individual graph, rather than try to make the scale the same across different graphs.

    On a lineplot, you don’t need to start at zero. Andrew Gelman has a relevant discussion on the use of lineplots here:
    http://www.stat.columbia.edu/~cook/movabletype/archives/2010/09/thinking_outsid.html

    A major point of the paper is to compare the trends at the MV sites to those in surrounding areas. That is why it makes sense to draw the lines showing the trends. A linear trend assumption is the simplest. We discuss this assumption in the paper and explain why if anything it produces an overly favorable estimate of the MVP’s effects (given the interpolation we had to do, the most likely alternative trend assumption would produce a lower diff-in-diff estimate).
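
    To illustrate why the interpolation matters, here is a minimal sketch with made-up numbers (none of these values come from the paper or the MVP report): the comparator change, and therefore the diff-in-diff, shifts with the trend assumption used to line up the national survey years with the MV survey years.

        # Sketch of how the interpolation assumption affects a diff-in-diff
        # estimate. All values below are made up for illustration.

        # Hypothetical national survey values, measured in 2003 and 2009; the
        # hypothetical MV surveys (baseline 2005, midpoint 2008) fall between them.
        nat_2003, nat_2009 = 0.08, 0.50

        def linear(year):
            """Linear interpolation between the 2003 and 2009 national values."""
            return nat_2003 + (nat_2009 - nat_2003) * (year - 2003) / (2009 - 2003)

        def constant_growth(year):
            """Alternative assumption: a constant proportional growth rate."""
            rate = (nat_2009 / nat_2003) ** (1 / (2009 - 2003))
            return nat_2003 * rate ** (year - 2003)

        mv_change = 0.45 - 0.10  # hypothetical MV change, baseline to midpoint

        for name, interpolate in [("linear", linear), ("constant growth", constant_growth)]:
            comparator_change = interpolate(2008) - interpolate(2005)
            print(f"{name:15s}: comparator change {comparator_change:+.3f}, "
                  f"diff-in-diff {mv_change - comparator_change:+.3f}")

    With these particular made-up numbers the constant-growth assumption happens to give a slightly lower diff-in-diff than the linear one; the broader point is simply that the trend assumption has to be made explicit and examined.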

    Posted October 17, 2010 at 11:44 am | Permalink
  8. Richard Morgan wrote:

    Has anyone done any kind of cost analysis or cost-effectiveness analysis of the Millennium Villages? The Clemens and Demombynes paper is convincing, but even they don’t mention the need to determine what the progress in the MVs has been costing, its cost-effectiveness compared to other development strategies ... and what it would cost to replicate. Does the MVP provide cost data on what they’ve been spending? Any thoughts on this issue?

    Posted October 17, 2010 at 3:09 pm | Permalink
  9. Laura Freschi wrote:

    Richard,

    As far as I know there has been no cost-effectiveness analysis of the MVs.

    MV does publish a general cost breakdown on their website; see here.

    It tells us:

    There are two phases of the Millennium Villages project, each with its own funding requirement. Phase I includes years 1-5 of a village and Phase II includes years 6-10. When many of the Millennium Villages were launched in 2006, they were funded for their first five years by an original Phase I budget that was determined in 2004. It called for a total of $120 per person per year, broken out as follows:

    Millennium Promise, $60
    Local and National Governments, $30
    Partner Organizations, $20
    The Community, $10
    Given that there are approximately 5,000 people per village, the Millennium Promise-mobilized financing need for these villages is about $300,000 per Millennium Village per year.

    In 2009, Millennium Promise reevaluated the Phase I budget in light of inflation, lessons learned, and the effects of the 2008 energy price spike. The Phase I budget for a Millennium Village was then adjusted to $160 per person per year, with the following revised allocation:

    Millennium Promise, $80
    Local and National Governments, $40
    Partner Organizations, $30
    The Community, $10
    This revised budget which totals about $400,000 per person per year [sic, I think they mean per village per year] is only applicable to Millennium Villages that receive new donor support. Consequently, most Millennium Village budgets remain at the $120 per capita level. Millennium Promise is currently planning for Phase II of the MV project, which will have reduced Millennium Promise-mobilized support.
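
    To make the per-village arithmetic explicit, here’s a quick back-of-the-envelope check in Python using only the figures quoted above (including their “approximately 5,000 people per village”):

        # Back-of-the-envelope check of the per-village budgets implied by the
        # per-person figures quoted above from the MV website.
        people_per_village = 5000  # "approximately 5,000 people per village"

        original = {  # original Phase I budget, dollars per person per year
            "Millennium Promise": 60,
            "Local and National Governments": 30,
            "Partner Organizations": 20,
            "The Community": 10,
        }
        revised = {  # 2009 revised Phase I budget, dollars per person per year
            "Millennium Promise": 80,
            "Local and National Governments": 40,
            "Partner Organizations": 30,
            "The Community": 10,
        }

        for label, budget in [("original", original), ("revised", revised)]:
            per_person = sum(budget.values())
            mp_share = budget["Millennium Promise"] * people_per_village
            print(f"{label}: ${per_person}/person/year, "
                  f"${per_person * people_per_village:,}/village/year total, "
                  f"${mp_share:,}/village/year from Millennium Promise")

    In other words, the $300,000 and $400,000 figures are the Millennium Promise share per village per year, not per person; the full budgets work out to roughly $600,000 and $800,000 per village per year under the original and revised plans.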

    Michael Clemens has calculated that the MV intervention amounts roughly to a doubling of yearly per capita income in each village.

    Posted October 19, 2010 at 12:39 pm | Permalink
