Managing for and communicating results are essential elements of good public management, especially when it comes to aid. Public pressure is mounting to deliver on this agenda, showing clear and measurable results. Operationalizing such results agendas is challenging; rarely is it ideal.
There are practical constraints to work around, tradeoffs to be made. However, with sufficient will and flexibility, much can be achieved. Indeed, much has already been achieved.
The African Development Bank reports annually on its development contribution across the different levels. This is not without challenges. Measuring and reporting on results face a number of practical constraints regarding e.g. selectivity, aggregation, data quality, methodology, attribution and portfolio size.
Efforts are underway to improve results measurement with better and more reliable data. The various development organizations share information and experiences to learn from each other to improve, and to harmonize results reporting further, which is of particular benefit to donors and the public. The work is ongoing—and unfinished.
The multilateral development agencies are converging towards a four-tier results framework (Asian Development Bank 2008, African Development Bank 2010, and World Bank 2011). This more refined conceptual framework aims to encourage people within and outside these institutions to think along the development-results chain and to report at its different levels.
The African Development Bank is using such a four-tier framework to monitor and report on results. The Annual Development Effectiveness Review reports on the Bank’s contribution to the continent’s development across the different levels (see African Development Bank 2011). This is not without challenges. Measuring, aggregating, and reporting on results face a number of practical constraints:
Selectivity―Reporting on only part of achieved results:
Reporting on results requires a tradeoff between completeness and accessibility. Hundreds of different indicators would be needed to report all results, and these would overwhelm the target audience while adding little to monitoring or reporting. Such specificity makes sense at the project level. To convey what has been achieved at an aggregate level (organization-wide, sector-wide, etc.), a smaller selection of indicators reporting only on key outcomes is needed. Reporting on all results is especially problematic for cross-cutting themes such as gender or climate change, which would require disaggregating many existing indicators and adding numerous new ones, at the risk of over-expanding the results frameworks. The tension between reporting on the full range of activities in sufficient detail and keeping the number of indicators reasonable remains. Reporting at an aggregate level will remain selective: not all results are captured, which may sometimes distort the picture of what has been achieved.
Aggregation―Counting apples and oranges:
The more selective the indicators, the more of what has been achieved goes unreported. To still provide a fairly comprehensive picture, the indicators used tend to be broad, such as ‘Roads constructed, rehabilitated, or maintained’ or ‘Primary, secondary, and tertiary health centers constructed/equipped’. This often means mixing apples and oranges. Rehabilitating a feeder road for one year, which often means little more than a few workers fixing potholes, is not the same as constructing a solid four-lane motorway built to last the next 20 years. Although the qualitative aspects within an indicator can vary widely, they are aggregated for corporate reporting purposes. It would not be very useful to provide twenty different road indicators by quality (one for motorways, one for dual carriageways, one for all-season roads, etc.); such detail belongs at the project level. For corporate reporting, more aggregate indicators are needed.
Data quality―Problems of availability and accuracy:
Reporting on results relies on the data available, and much data is simply not available. Many indicators, especially at Level 1 but also project outcomes at Level 2, are based on data gathered through household surveys (population data, household income, access to services) conducted by the country every few years, sometimes with five- to ten-year gaps. Commissioning robust new samples can help, but they are expensive and rarely cost-effective. Relying on old data, or waiting for new data after a project has closed, makes consistent and continuous reporting difficult. Efforts are now underway to improve the quality and availability of data, both by building statistical capacity within countries (e.g. PARIS21) and by focusing project results frameworks much more on data that is actually available.
Methodology―How to measure beneficiaries:
Even where data is available, a more fundamental question arises about the methodology for measuring outcomes. Kilometers of road or numbers of vaccinations are relatively easy to quantify, but who or what is a beneficiary? Does increasing access to health facilities benefit those who actually go, or those who live close by and could use the facilities? Or is a beneficiary anyone who benefits from a reduced incidence of disease, from lower employee absenteeism for health reasons, or even indirectly because the less ill have higher incomes to spend? The methodology may differ according to the nature of the project but also by the approach taken. Methodologies for generating data are being developed to establish a consistent way of measuring beneficiaries.
Attribution―Linking outcomes to interventions:
Attributing outcomes to projects is always challenging. This attribution gap is particularly apparent in budget support and knowledge activities: it is hard to construct meaningful outcome indicators for knowledge work or, in many cases, for budget support beyond a very rudimentary output level. This also means that many outcomes to which the organization has surely contributed are not captured, and overall results are distorted. To address this, several pilot studies are being conducted to try out ways of assessing the impact of selected budget-support operations.
Portfolio size―Fluctuations driven by a few projects:
Depending on the number of operations, data may vary widely. In a small portfolio, one or two big projects can drive the data, causing huge fluctuations in outcomes. These are often misinterpreted as a fall or surge when they simply reflect a country-driven shift in priorities (from a focus on transport to energy, say) or a change in the nature of projects (moving from a feeder-road project to a motorway project may drastically change the number of kilometers of road built or rehabilitated, for example). The more the data is disaggregated, for example to a regional or country level, the larger the fluctuations, and the more cautiously the analysis needs to be interpreted. Using narrow and specific indicators makes the fluctuations larger still; broader indicators can help balance the data.
These are just some of the practical challenges of implementing the results agenda. Any organization is faced with them when trying to report on results. They also have important implications for the interpretation and use of results data.
Given the scope of the results agenda and the different uses of monitoring and reporting, the data is not always easy to interpret. Keeping the various challenges underlying such a framework in mind helps readers appreciate the difficulties of results reporting and take its limitations into account.
Efforts are underway to address, and hopefully overcome, some of these difficulties and to improve results measurement with more, better, and more reliable data. The various development organizations share information and experiences to learn from each other, to improve, and to harmonize results reporting further, which is of particular benefit to donors and the public. The work is ongoing, and unfinished.
African Development Bank (2010): Bank Group Results Measurement Framework
African Development Bank (2011): Annual Development Effectiveness Review 2011