The use of human development indicators reflects a welcome focus on outcomes, but there is a ‘missing middle’ in the measurement of service delivery that needs strengthening and that is fundamentally linked to the portfolio’s primary focus on strengthening institutional capacity. KPIs, which are defined during project preparation, tracked during implementation, and used as a key input to end-of-project ratings, are the main measurement instruments for monitoring project results. On average, there were six outcome measures and nine intermediate outcome measures per SP+L project. The most frequent outcome measures related to human development outcomes, even though such outcomes appeared less frequently among PDOs. One possible explanation is that even when SP+L projects frame their objectives around institutional development or service delivery, as many of the safety net and service delivery projects do, the ultimate concern is making tangible improvements to people’s lives. Reaching vulnerable groups and improved quality of service delivery were also widely represented among outcome KPIs. The most frequent intermediate outcome indicators measured institutional development, underlining the output orientation of these shorter-term measures. Service delivery and quality of services were also common intermediate outcome indicators.
The Technical Guidance Notes for Safety Nets, Labor Market Programs, Service Delivery and Social Funds developed as part of this “Results Readiness” work include a list of indicators and typical data sources used in SP+L projects.
Overall, the quality of KPIs at project design was strong. They are generally SMART (specific, measurable, attributable, realistic, and targeted), with two-thirds of outcome indicators clearly linked to PDOs. Linkages were strongest in the areas of access to public services and human development outcomes resulting from greater utilization of basic services. KPIs measuring community empowerment and local and national government capacity building were less well specified.
However, there was a notable tendency to confuse output and outcome indicators. There was also a lack of specificity in measures (indicators such as “improved quality” or “adequate capacity” are not well specified), the use of indicators that are not sensitive to program performance (such as “percentage of beneficiaries satisfied with program implementation” or “level of satisfaction with payments”), and failure to adequately specify the target population (widespread use of generic terms such as “among the poor” or “vulnerable groups”).
Baseline data were specified for two-thirds of all KPIs, but KPI data sources, as well as clear responsibility for their collection, were not adequately specified for more than half of KPIs.
[Table: KPIs at Design: Performance. Reports, by project type (including Social Safety Net), the average total number of indicators per project and the shares of indicators with baseline data, with targets established, with a clear link to the PDO, and with the source of data adequately specified; table values were not recovered.]
However, KPIs are not regularly updated during implementation, undermining their effectiveness as project management tools. During implementation, there was a tendency to trim the number of KPIs actually tracked: 12 percent of outcome indicators and 19 percent of intermediate outcome indicators listed in PADs were not included in ISRs. Only 21 percent of projects consistently and regularly updated KPIs during implementation. Nineteen percent of projects substantially restructured their KPIs, with the highest incidence among social funds projects (54 percent), largely reflecting the incorporation of standard IDA 14 indicators. Poor performance and weak reporting of KPIs in ISRs during implementation are mainly due to: (a) inadequate selection of KPIs at project design; (b) unclear specification at project design of the source, frequency, and responsibility for measurement and reporting of KPIs; and (c) data collection delays, for example delays in establishing the MIS, procurement delays in contracting external consultancies to carry out surveys, and delays in obtaining indicators derived from national and sectoral sources external to the project.
Institutional development indicators pose the greatest challenge in designing and reporting KPIs. The SP+L portfolio’s focus on institutional objectives calls for the development of good measures of performance in this area. Performance ratings for institutional development indicators were consistently lower, including SMART ratings, linkage with PDOs, existence of a baseline, and adequate specification of data sources. The main weaknesses relate to specificity and measurability. Activity statements, such as “implement management information systems (MIS)”, are often used as indicators instead of measurable variables. Stronger examples would be the percentage of beneficiaries registered in the MIS or the percentage of local offices producing monthly reports using automated MIS data.
Recommendations to improve KPIs at project design include:
a) KPIs should be identified along the stages of the results chain: Output measures help track project implementation and explain outcomes, and should be explicitly included and measured. Using the term “intermediate outcome” can be confusing: outputs need to be differentiated from short-term service delivery outcomes. There is a need for clarity and measurability around the “missing middle” in service delivery;
b) KPIs should include an appropriate balance of both output and outcome measures: Output indicators should rely on accessible, high frequency data (usually administrative data). Outcome indicators usually rely on less frequently available data such as population data, impact evaluation surveys, citizen scorecards, tracer studies or beneficiary assessments;
c) Before choosing indicators, it is good practice to perform an analysis of what data are available and what data can be collected given resources and feasibility constraints;
d) Determine at which level (individual, household, community, facility) indicators should be measured, and consider disaggregation by gender, income decile, age, and geographical area;
e) Set baselines and target values at the design stage;
f) The quality of institutional development indicators needs to be strengthened;
g) Indicators used in impact evaluations require a framework that allows the net program effect to be determined (a counterfactual).
Examples of Weaker and Stronger Indicators
• Weaker: Increase in access to health centers. Stronger: Increased medical consultations in FSRDC built or rehabilitated health centers.
• Weaker: Increase in access to education. Stronger: Increased enrollment rate in basic education by 7 percent for boys and girls in the areas where the SFD intervened.
• Weaker: Contribution to short-term employment and income generation. Stronger: 40-50 percent of microfinance savers/borrowers in poor participating communities confirming an improvement in their household living standard.
• Weaker: Program resource allocation favors poorer participating communes and villages. Stronger: Distribution of financing of subprojects consistent with regional targeting criteria as estimated by the QUIBB 2006 Survey.
• Weaker: Ensuring that communities have benefited from social mobilization and facilitation using highly participatory methodologies and that community-based organizations are well represented. Stronger: Over 50 percent of participating households record increased levels of trust and cooperation among stakeholders at the community level.
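Recommendation (g) above notes that impact evaluation indicators require a counterfactual to determine the net program effect. As a minimal illustration of why this matters, the sketch below computes a simple difference-in-differences estimate; the function name and all figures are hypothetical and invented for illustration only, not drawn from any project cited here.

```python
# Hypothetical sketch: net program effect via difference-in-differences.
# The comparison group supplies the counterfactual trend; without it,
# the raw before/after change in program areas overstates the effect.

def did_estimate(treat_before, treat_after, control_before, control_after):
    """Change in the treatment group minus change in the comparison group."""
    return (treat_after - treat_before) - (control_after - control_before)

# Invented example: enrollment rates (%) before/after the program,
# in program areas vs. comparable non-program areas.
effect = did_estimate(62.0, 74.0, 60.0, 65.0)
print(effect)  # prints 7.0: the raw 12-point rise overstates the effect by 5 points
```

The arithmetic is trivial, but it shows the framework the recommendation calls for: an indicator target of “enrollment up 12 points” is only attributable to the program once the 5-point trend in comparison areas is netted out.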
The use of KPIs during implementation could be strengthened by:
a) Selectivity in identifying the mix of KPIs during project design so that they can be tracked;
b) Ensuring that KPIs in PADs are listed in the ISRs;
c) Identifying the frequencies at which KPIs are measured as part of the ISR in order to update indicators accordingly;
d) Ensuring that PDO ratings in ISRs are well substantiated by KPI reporting; where KPIs are not consistently reported, it is difficult to justify PDO ratings;
e) Simplifying data collection processes, notably by being clear on data requirements, setting realistic time frames, and specifying responsibilities for data collection, reporting and review.
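Several of the weaknesses identified above (missing baselines, unspecified data sources, unclear responsibility for collection) are mechanical completeness checks. As a toy illustration only, the sketch below screens KPI records for the design-stage metadata this review calls for; the record structure, field names, and sample entries are all invented assumptions, not an actual Bank data format.

```python
# Hypothetical sketch: a minimal completeness check for KPI definitions,
# covering the design-stage metadata discussed above. The record
# structure and sample KPIs are invented for illustration.

REQUIRED_FIELDS = ("baseline", "target", "data_source", "frequency", "responsible_unit")

def missing_fields(kpi: dict) -> list:
    """Return the required metadata fields this KPI leaves unspecified."""
    return [f for f in REQUIRED_FIELDS if not kpi.get(f)]

kpis = [
    {"name": "Percentage of beneficiaries registered in the MIS",
     "baseline": "0%", "target": "90%", "data_source": "program MIS",
     "frequency": "quarterly", "responsible_unit": "PIU M&E officer"},
    {"name": "Improved quality",  # vague indicator with incomplete metadata
     "baseline": None, "target": None, "data_source": None,
     "frequency": None, "responsible_unit": None},
]

for kpi in kpis:
    gaps = missing_fields(kpi)
    status = "OK" if not gaps else "missing: " + ", ".join(gaps)
    print(kpi["name"], "->", status)
```

Running such a check at appraisal, and again before each ISR, would flag indicators like the second entry before they reach the results framework.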