We’ve had a lot of very heated debates on this blog about the uses and abuses of global statistics—most recently on estimates of poverty, maternal mortality, and hunger—with a certain senior Aid Watch blogger inciting the ire of many (not least those who produce the figures) by calling them “made-up.”
A new study in the Lancet about the tragic problem of stillbirths raises similar questions: If stillbirths have been erratically and inconsistently measured in the past, especially in poor countries with weak health systems, what then are these new numbers based on?
Of the 193 countries covered in the study, the researchers were able to use actual, reported data for only 33. To produce the estimates for the other 160 countries, and to project the figures backwards to 1995, the researchers created a sophisticated statistical model. 
What’s wrong with a model? Well, 1) the credibility of the numbers that emerge from these models depends on the quality of the “real” (that is, actually measured or reported) data, as well as on how well those data can be extrapolated to the “modeled” settings (e.g., it would be bad if the real data came primarily from rich countries and were then “modeled” for the vastly different poor countries – oops, wait, that’s exactly the situation in this and most other “modeling” exercises), and 2) the number of people who actually understand these statistical techniques well enough to judge whether a given model has produced a good estimate or a bunch of garbage is very, very small.
Without enough usable data on stillbirths, the researchers looked for indicators with a close logical and causal relationship to stillbirths. In this case they chose neonatal mortality as the main predictive indicator. Uh oh. The numbers for neonatal mortality are themselves based on a model (where the main predictor is mortality of children under the age of 5) rather than on actual data.
So the stillbirth estimates are numbers based on a model…which is in turn…based on a model.
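To see why that chaining matters, here is a toy Monte Carlo sketch (my illustration, not the Lancet authors’ method, and the 20% error figure and the 0.4 and 0.8 conversion ratios are made-up numbers for demonstration): if each modeling step carries some relative error, stacking one model on top of another compounds the uncertainty in the final figure.

```python
import random

random.seed(0)

def noisy(value, rel_error):
    """One modeling step, simulated as multiplicative Gaussian noise."""
    return value * random.gauss(1.0, rel_error)

TRUE_UNDER5 = 100.0  # hypothetical under-5 mortality level (arbitrary units)

draws = []
for _ in range(10_000):
    # Model 1: predict neonatal mortality from under-5 mortality (20% error).
    neonatal = noisy(0.4 * TRUE_UNDER5, 0.20)
    # Model 2: predict stillbirths from the *modeled* neonatal figure (20% error).
    stillbirth = noisy(0.8 * neonatal, 0.20)
    draws.append(stillbirth)

mean = sum(draws) / len(draws)
sd = (sum((d - mean) ** 2 for d in draws) / len(draws)) ** 0.5
# The chained estimate's relative error exceeds either single step's 20%.
print(f"relative error after chaining two models: {sd / mean:.0%}")
```

The point is not the particular numbers but the direction: every layer of modeling widens the uncertainty band around the headline estimate, and none of that widening shows up in a single number like “2.6 million.”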
Showing what a not-hot topic this is, most of the international press articles covering the series focused on the startling results of the study, leaving aside the more arcane question of how the researchers arrived at their estimates. The BBC went with “Report says 7,000 babies stillborn every day worldwide.” Canada’s Globe and Mail called stillbirths an “epidemic” that “claims more lives each year than HIV-AIDS and malaria combined.” Frequently cited statistics included the number of stillbirths worldwide in 2009 (2.6 million), the percentage of those stillbirths occurring in developing countries (98%), the number of yearly stillbirths in Africa (800,000), and the average yearly decline in stillbirths over the period studied (1.1 percent since 1995).
Only one international press article found in a Google search, by AP reporter Maria Cheng, mentioned the possible limitations of the study’s estimates. Not coincidentally, that article interviewed a source named Bill Easterly.
Despite the media’s indifference, this is a serious problem. Research and policy based on made-up numbers is not an appealing thought. Could the irresponsible lowering of standards on data possibly reflect an advocacy agenda rather than a scientific one, or is it just a coincidence that Save the Children is featured among the authors of the new data?
1. From the study: “The final model included log(neonatal mortality rate) (cubic spline), log(low birthweight rate) (cubic spline), log(gross national income purchasing power parity) (cubic spline), region, type of data source, and definition of stillbirth.” 
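For readers curious what that specification amounts to, it can be restated schematically. The variable names below are my own shorthand, and “spline(…)” merely stands in for the cubic-spline terms the authors actually fit; note that the first predictor, neonatal mortality, is itself a modeled quantity.

```python
# Schematic restatement of the footnoted model specification.
# Names are my shorthand; spline(...) marks the cubic-spline terms.
formula = (
    "log_stillbirth_rate ~ "
    "spline(log_neonatal_mortality_rate) + "
    "spline(log_low_birthweight_rate) + "
    "spline(log_gni_ppp) + "
    "region + data_source_type + stillbirth_definition"
)
print(formula)
```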