Copenhagen Consensus Center

Post-2015 Consensus: Data for Development Assessment, Jerven

Assessment Paper

Summary of the most high-yielding targets from the paper

Data for Development Target | Benefit for Every Dollar Spent
Enable the High Level Panel’s data revolution for the OWG’s 17 goals and 169 targets | Likely to be < $1

Summary

The UN High Level Panel has called for a data revolution. The world's population should be counted, measured, weighed and evaluated. This information should be collected, compiled and presented in a form that can usefully inform policy makers and citizens, both in aggregate and disaggregated by region, village, gender and population group.

It is tempting to think that having the correct information will improve policy choices, but there is no automatic connection. This paper therefore focusses on the cost of the data revolution rather than its possible benefits. The simple starting point is that data do have a cost.

The baseline for any surveys and comparisons is a population census. If this costs roughly $1 per head, then a worldwide census in 2015 would require $7 billion, equivalent to a quarter of the USAID budget or the entire combined aid budgets of Norway and Denmark. Even that may be a conservative figure: while a census costs about 40 cents per capita in India and $1 per capita in China, the last census in the United States cost $13 billion, or almost $42 a head. The cost of the data revolution will be considerable, yet it has been largely missing from the MDG debate so far. As well as the financial cost of monitoring, there is the opportunity cost resulting from competing demands on survey capacity. Particular indicators also influence behavior by skewing activity towards goals with quantifiable targets.
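As a rough check, the per-head figures quoted above can be laid out explicitly. This is a minimal sketch; the population numbers are rounded assumptions of mine (roughly 7 billion people worldwide in 2015 and about 309 million counted in the 2010 US census), not figures taken from the paper.

```python
# Back-of-the-envelope check of the per-head census figures quoted above.
# Population figures are rounded assumptions, not taken from the paper.

world_pop = 7.0e9        # assumed world population in 2015, persons
us_pop = 309e6           # assumed population counted in the 2010 US census
us_census_cost = 13e9    # cost of the last US census, from the text

print(f"Worldwide census at $1 per head: ${world_pop * 1.0 / 1e9:.0f} billion")
print(f"US census cost per head:         ${us_census_cost / us_pop:.0f}")
```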

The MDG agenda comprised an ambitious list of eight goals and 18 targets, with no clear idea of where the data would come from. 48 indicators were then designed by a technical group led by the World Bank, but there has never been an analysis of how much these data would have cost to provide, nor of whether the agreed goals were feasible or the best use of resources. This paper aims to shift the discussion in this direction for the current round of target-setting: the Open Working Group has proposed 17 goals and 169 targets, and the final list will undoubtedly be quite lengthy.

If the cost of providing data is about $1.5 billion per target over the post-2015 period, as I estimate below, then this suggests a cost of $254 billion to provide data in support of the new targets, almost two times the total annual spend on Official Development Assistance. Considerable trimming is needed unless the Open Working Group thinks it is a good idea to spend up to 12.5% of the annual aid budget on statistics.
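The scaling behind this figure is straightforward. The sketch below reproduces it, assuming an annual global ODA figure of roughly $135 billion; that ODA figure is my assumption, chosen to be consistent with the ratios quoted in the text, not a number from the paper.

```python
# Scaling the per-target rule of thumb to the OWG's 169 targets.
# The annual ODA figure is an assumption, not a number from the paper.

cost_per_target = 1.5e9   # USD per target over the post-2015 period
owg_targets = 169
annual_oda = 135e9        # assumed annual global Official Development Assistance

total = cost_per_target * owg_targets
print(f"Total data cost: ${total / 1e9:.0f} bn (~{total / annual_oda:.1f}x annual ODA)")
```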

The Millennium Development Goals were set with little thought about where the information would come from; the primary question was ‘what kind of development should we target?’ This paper suggests the right question is ‘what kind of development are we able to measure?’ Simply demanding more data without being prepared to bear the cost and responsibility could create a ‘tragedy of the commons’ for statistical services.

More data are only better data if they contain meaningful information and there are no opportunity costs to their supply. The funding available for MDG measurement is very limited and every effort must be made to ensure it is spent responsibly. This paper brings together and reconciles the existing measurement types as a reference for scholars concerned with MDG feasibility and operationalization.

How much would data for the previous MDGs have cost?

Although it is possible to download MDG reports with country data in them, the contents are more often projections and estimates than real data. There are more gaps than real observations, and many of the observations are themselves of dubious quality.

The previous MDGs have been criticized because of their use of absolute measures which failed to take account of relative gains in many countries and also because of the apparently arbitrary nature of the indicators. There have been calls for the inclusion of groups not represented in the surveys, including street children and people in institutions. One UN suggestion is to address inequalities through:

“Setting tailored targets and disaggregating data in order to address inequalities within all goals, targets and indicators: Disaggregation of data will help measure the gaps between social and economic groups and identify who is being left behind. Setting targets to reduce these gaps (e.g. in health and education outcomes, in incomes and employment) will ensure that the most deprived are not “left until last”. This will further help to focus attention on and address direct and indirect discriminations between groups that underpin inequalities. Data should be disaggregated by, at a minimum, age, sex, location, ethnicity, income quintiles and disability.”

But here it is suggested that ambition should be tempered by moderation and an appreciation of the resources needed to supply the data demanded.

Looking now at what the data for the previous MDGs would have cost, we come first to the need for a baseline set by a census every ten years. This would be supplemented by a smaller annual survey to measure progress and more sizeable surveys every five years to ensure reasonably accurate reporting. According to the best estimates available, the cost of supporting the MDG surveys alone from 1990 to 2015 would have been $27 billion.

In practice, although we demand that data on, for example, the poverty headcount be available and assume they are, six of the 49 countries in sub-Saharan Africa have never had a household survey and only 28 have been surveyed in the last seven years. Only about 60 countries in the world have the registration systems required to monitor basic trends in social indicators. The even more ambitious agenda now being put forward will either widen the gap between ambitions and realistic achievements or require a dramatic increase in development spending on statistics.

Methodology

The figure of $27bn was reached by estimating the costs of providing annual data from the most widely used survey methods, in addition to establishing benchmark data with a population census. However, the costing is based heavily on guesswork and extrapolation from known costs, because so little relevant information is in the public domain. We have also assumed, unrealistically, that there is sufficient existing statistical capacity in developing regions to cope with the work, so the estimate should be read as a marginal cost rather than a full one. For these and other reasons, the final estimate is conservative, but still high enough to make reconsideration of the data demands in the post-2015 debate necessary.

We need to distinguish between two types of data: administrative and survey. Administrative information is that which governments collect routinely in their day-to-day operations. Its cost is borne by the government, and the marginal cost of supplying such data is not included in our calculation. One of the objectives of the data revolution might be to shift the balance in data collection from surveys towards administrative sources, but this would itself require greater capacity in the offices dealing with it. OECD figures show that $2.3 billion was allocated to statistical development in 2010-2012. If the annual average had been the same over the period 1990 to 2015, the total cost would have been $19 billion.
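That extrapolation is easy to reproduce: the sketch below simply spreads the OECD figure for 2010-2012 evenly across the 25 years 1990-2015, as described above.

```python
# Simple linear extrapolation of the OECD figure for statistical development,
# as described above.

oecd_2010_2012 = 2.3e9               # USD allocated over the three years 2010-2012
annual_average = oecd_2010_2012 / 3  # roughly $0.77 bn per year
total_1990_2015 = annual_average * 25
print(f"Implied 1990-2015 total: ${total_1990_2015 / 1e9:.0f} billion")
```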

Of the 60 MDG indicators in use in 2008, 21 were compiled wholly or partly from administrative sources and 49 wholly or partly via surveys. The balance varies from country to country, and data objectivity is generally believed to be higher in survey data; improvements in poor countries tend to be overstated in administrative data. However, countries with GDP per capita below $1,500 will have great difficulty supplying the resource-intensive survey data without direct donor interest and funding.

Ignoring the administrative costs, the focus is on costing the required surveys and population censuses. The suggested minimum data requirements are:

  • Population census every 10 years
  • Demographic and Health Surveys every 5 years
  • Living Standards Measurement Study every 5 years
  • Core Welfare Indicator Questionnaire annually

Finding figures on which to base a cost estimate was challenging, with figures for the different survey types largely unavailable or undisclosed, generally either because the information is considered commercially sensitive or because financial records were never kept, owing to various in-kind contributions. We wanted to include an annual Multiple Indicator Cluster Survey in our proposed package, but this lack of financial information meant we could not estimate its cost. A number of caveats to this research remain, but I still maintain a reasonable level of confidence in this ‘back of the envelope’ analysis.

Results

Population size is the main determinant of cost for both censuses and surveys. For small (less than 5 million population), medium (5-20 million) and large (more than 20 million) countries, we assume a census cost of $1, $2 and $3 per head respectively. Costs of the other surveys also vary with country size, but all fall in the range of a few hundred thousand dollars to $1.5 million per country per survey. Wherever possible this is based on actual costs; in other cases it has been necessary to make estimates and extrapolations. From this we derive an overall cost estimate of $27 billion.
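To make the structure of the calculation concrete, here is a minimal per-country sketch along these lines. It is an illustration of the method, not the actual costing spreadsheet: the $1 million per-survey cost and the example country are my assumptions, chosen from within the ranges quoted above.

```python
# Illustrative per-country data cost for 1990-2015, following the survey
# package listed above. Per-survey cost and the example country are assumptions.

CENSUS_COST_PER_HEAD = {"small": 1.0, "medium": 2.0, "large": 3.0}  # USD

def size_class(population_millions):
    """Population bands used in the text: <5m, 5-20m, >20m."""
    if population_millions < 5:
        return "small"
    if population_millions <= 20:
        return "medium"
    return "large"

def country_cost_1990_2015(population_millions, survey_cost=1.0e6, years=25):
    """Rough data cost for one country: censuses plus recurring survey rounds."""
    population = population_millions * 1e6
    census_rounds = years // 10 + 1   # e.g. the 1990, 2000 and 2010 rounds
    censuses = census_rounds * CENSUS_COST_PER_HEAD[size_class(population_millions)] * population
    dhs_rounds = years // 5           # Demographic and Health Surveys every 5 years
    lsms_rounds = years // 5          # Living Standards Measurement Studies every 5 years
    cwiq_rounds = years               # Core Welfare Indicator Questionnaires annually
    surveys = (dhs_rounds + lsms_rounds + cwiq_rounds) * survey_cost
    return censuses + surveys

# Example: a 'medium' country of 15 million people
print(f"${country_cost_1990_2015(15) / 1e6:,.0f} million")
```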

This total overshoots what is currently earmarked for statistics in development assistance by a considerable distance, yet it could still be an underestimate. Countries such as Sudan and the Democratic Republic of Congo may prove more costly to survey, and the cost of administrative data is not included for any country. Most importantly, there is no allowance for maintaining statistical offices and building the necessary capacity. The gap between demands and capacity is likely to be widened further by the new list of post-2015 targets, and it seems likely that an (inadvisable) ad hoc approach will be taken: deciding what is needed before considering how it might be paid for.

The benefits of good data and the costs of bad data

It is not feasible to analyse the potential upside and costs for every target suggested by the Open Working Group, but it is possible to suggest a broad framework for thinking the issues through.

Good data have real benefits. For example, surveys in Uganda between 1991 and 1995 showed that only 13% of government funds allocated to primary schools actually reached them. An advertising campaign in local newspapers enabled schools to check what they should have been receiving and, by 1999, 90% of the funds reached their destination.

But such data are unlikely to come from the monitoring of post-2015 development targets. Governments need disaggregated, high-frequency data linked to accountable administrative sub-regions, whereas the MDGs emphasise global standards and international comparability. For example, poverty headcount data are important as a baseline, but Ministries of Finance and Central Banks need monthly data on employment and inflation for day-to-day policy work, which in turn shapes long-term trends; it is metrics of those long-term trends that matter for the MDGs. I suggest that the list of indicators should be designed with a view to directly increasing accountability. Indicators and their data requirements should be assessed not only on cost, but also on the likelihood that the data can be provided in a timely fashion, in a form that is usable for domestic policy making and digestible for media and civil society, so that they further policy debates and accountability.

Bad data can also be generated. An increase in demand may be met by a supply of inferior data, particularly if demand overshoots supply and the data provision process is incentivised through a system of rewards and punishments. Unfortunately, these conditions were met in the previous MDG agenda.

As an example, administrative data on Kenyan education showed a steady increase in primary enrolment rates, with a big jump in 2003, when all primary school fees were abolished. Schools at this stage had an incentive to exaggerate numbers, since allocation of teachers and funding was based on this, but surveys from the Kenyan National Bureau of Statistics and the Demographic and Health Survey showed actual enrolment rates to be flat over the same period.

Estimating the potential cost of data for the post-2015 list is challenging, but a rule of thumb from the previous round suggests about $1.5 billion per target. If the list currently being discussed by the OWG turns out to be close to the final version, there will be about 169 targets, compared with 18 in the MDG agenda. The costing of the 18 MDG targets assumed only a population census, a household budget survey every five years, a Demographic and Health Survey every five years and annual surveys to update basic health, education and living-standard metrics. The 169 new targets also cover areas such as agriculture, industry and employment, and some mention the industry share of GDP and the rate of economic growth. These are expensive and time-consuming data requirements, and assuming a constant marginal cost may well understate the total, given the need for capacity building in statistical offices.

If data for the 18 MDG targets would have cost $27bn, data for 169 targets would amount to roughly $254bn for the 2015-2030 round. That is a very large number: almost twice the global total spent on Official Development Assistance in any recent year. The emphasis on measurement in the post-2015 agenda therefore needs a radical rethink. The real question is, if we are serious about actually measuring the targets, how much do we want to spend on data? At 169 targets, we would be spending about 12.5% of all ODA in the period 2015-2030 on getting data, or 90 times what Denmark spends on aid annually.
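Expressed per year, the same arithmetic yields the 12.5% share mentioned above. This is a minimal sketch; the annual global ODA figure of roughly $135 billion is again my rounded assumption, consistent with the ratios in the text.

```python
# The $254 bn expressed as a share of aid over 2015-2030.
# The annual global ODA figure is a rounded assumption, not from the paper.

total_data_cost = 254e9
years = 15                  # 2015-2030
annual_oda = 135e9          # assumed annual global ODA

annual_data_spend = total_data_cost / years   # roughly $17 bn per year
print(f"{annual_data_spend / annual_oda:.1%} of annual ODA")
```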

By way of comparison, Statistics Norway had a budget of about 0.2% of total government spending in 2013. If the international community is about as willing as the Norwegian government to spend on statistics, then the same proportion of the ODA budget would allow data collection for just five targets rather than the 169 proposed. Using a significant percentage of the aid budget on measurement is neither realistic nor desirable. It is absolutely certain that 169 targets would not be measured appropriately. It is very likely that success and failure in the Post-2015 Agenda will be measured with deficient and bad data unless the list of targets is radically shortened.

Conclusions

In the 1990-2015 MDG database there were more gaps than observations. The previous agenda suffered from a mismatch between ambition in monitoring and the ability to measure. The post-2015 agenda might end up being much more expensive than the $27 billion estimated to have been necessary to do a proper measurement job on the previous round.

What would the benefit have been if an extra $27bn had been spent to get good data in the previous agenda? Would the benefits of revolutionizing the data supply for the post-2015 period outweigh an allocation of $254bn? I hazard a guess that the benefit to cost ratio is below one, and that the data revolution as currently envisaged is therefore a bad idea.