By Claudia Williamson, Post-Doctoral Fellow, Development Research Institute
Rhetoric on “aid effectiveness” keeps escalating, but is there anything to show for it?
For the past (almost) two years, Bill and I have been collecting data, combing through it, and refining the numbers to ‘grade’ aid agencies and assess overall trends in aid practices. We waited until our paper passed peer review to release our findings. Rhetoric versus Reality: The Best and Worst of Aid Agency Practices has now been accepted for publication in a special issue of World Development.
Our work updated Easterly and Pfutze’s 2008 study, Where Does the Money Go: Best and Worst Practices in Foreign Aid, on five dimensions of agency ‘best practices’: aid transparency, minimal overhead costs, aid specialization, delivery to more effective channels, and selectivity of recipient countries based on poverty and good government. Based on these measures, we calculate an overall agency score using original data and 2008 OECD data. These scores only reflect the above practices; they are NOT a measure of whether the agency’s aid is effective at achieving good results.
There is slight improvement in transparency, and more donors are moving away from ineffective channels. But transparency remains at unacceptably low levels. For example, two agencies (MOFA Japan and France’s DgCiD) fail to report any aid data at all.
The most conspicuous failures in both trends and levels are in specialization and selectivity. Luxembourg is as unspecialized as the US with one-seventieth of the aid flow. Many such unspecialized small donors likely have most of their aid eaten up by fixed costs before the funds reach any beneficiaries. At the same time, allocation to corrupt countries is increasing, not decreasing. Aid to corrupt autocrats is not explained by an emphasis on the least developed countries; donors such as the US, Sweden, and Norway do poorly on both income selectivity and autocracy/corruption selectivity.
DFID is one of ten agencies that fully report aid flows to the OECD, and it lists the number of staff, administrative costs, salaries and benefits, and its ODA budget on its website. DFID also has relatively low administrative costs and salaries and benefits relative to aid disbursements (2.6% and 1.6%, respectively). DFID relies on more effective channels of aid disbursement, tying none of its aid and disbursing relatively little food aid (1.3%) (pages 53-54).
Japan, New Zealand, and Germany also do well, rounding out the top five best agencies. The United States ranks below average mainly because of poor performance on selectivity and choosing to allocate aid through ineffective channels. As we write in the paper, “the foreign policy needs of the US superpower and the lobbies for particular aid channels seem to dominate the politics of American aid” (page 54).
Another theme that emerged is that the Scandinavian countries’ reputation for altruism based on aid volume does NOT translate into good practices; they score below average on specialization and transparency and are mediocre in the overall ranking.
Lastly, the UN agencies on average are worse than the other multilateral agencies and the bilateral agencies, and the differences are statistically significant. Above all, they are worse on overhead and transparency. On overhead, they have an average ratio of 46 percent of administrative costs to ODA. UNDP reports no data on its operating costs or ODA, now even worse than its minimal reporting in 2008.
The two goals of the paper were to test whether: 1) donors’ rhetoric matches reality; and 2) they are making any improvements in doing so. Our answer is no on both counts.
Postscript: Fortunately, we are now part of a larger community running independent checks on aid. For other recent aid quality exercises, see Birdsall and Kharas, 2010; Knack, Rogers and Eubank, 2010; and Ghosh and Kharas, 2011.