
The Nuts and Bolts of M&E Systems

The objective of this series is to increase knowledge about M&E systems through regular papers, grounded in best-practice experience, on the design and implementation of M&E systems and on the use of M&E information by governments and civil society.

Nuts & Bolts is also available in Spanish and Portuguese here.

Go to Building Monitoring and Evaluation Systems


No. 18 — April 2012
The State Results-Based Management System of Minas Gerais

In 2003, the Brazilian state of Minas Gerais launched an ambitious plan to gradually focus the public administration on a results-based management system. The state now relies on performance information generated by monitoring and evaluation tools.

No. 17 — February 2012
Performance Management in U.S. State Governments

U.S. state governments have a great degree of flexibility in adopting different policy and management initiatives. The overall picture suggests more uniformity than innovation, however. This note reviews factors that can make performance management more successful, drawing on examples from states that have taken these initiatives further.

No. 16 — November 2011
Conducting Diagnoses of M&E Systems and Capacities

To strengthen monitoring and evaluation activities aimed at improving government policies and reforms, it is important to analyze how those activities are carried out and what results they produce. This note provides a guide to some of the topics that should be considered when undertaking a diagnosis of monitoring and evaluation programs.

No. 15 — October 2011
Five Advances Making it Easier to Work on Results in Development

Malnutrition is a bigger problem in South Asia than in any other part of the world. Roughly 40 percent of malnourished children live in South Asia, a majority of them in India. This note focuses on operational results in development, highlighting how existing and newly emerging tools can help the operations staff of a ministry or a development agency tackle malnutrition.

No. 14 — September 2011
The Mexican Government's M&E System

Fifteen years ago, Mexico had conducted a few scattered evaluations, but had not implemented systematic performance measurement. Political changes in the late 1990s generated an increased demand for transparency and accountability. These changes led to new legislation and institutions aimed at strengthening independent government oversight through several channels, including external evaluations, public access to information, and the creation of a supreme audit institution. Also in the late 1990s, Mexico implemented Oportunidades, an innovative conditional cash transfer program with a rigorous impact evaluation built into its operational design. The program’s evaluation component became a role model within the Mexican public administration.

No. 13 — August 2011
Chile’s Monitoring and Evaluation System, 1994–2010

The Chilean Management Control and Evaluation System (Sistema de Evaluación y Control de Gestión) is internationally regarded as a successful example of how to put into place a monitoring and evaluation (M&E) system. Chilean M&E tools are the product of both cross-national lesson-drawing, and national policy learning experiences. The main M&E tools are centrally coordinated by the Ministry of Finance’s Budget Office (Dirección de Presupuestos—DIPRES) and promote the use of M&E information in government decision-making processes, particularly those related to the budget.

No. 12 — July 2011
Defining and Using Performance Indicators and Targets in Government M&E Systems

Developing effective national monitoring and evaluation (M&E) systems and/or performance budgeting initiatives requires well-defined formulation and implementation strategies for setting up performance indicators. These strategies vary depending on a country’s priority for measuring results and on the scope and pace of its performance management reform objectives. Some countries have followed an incremental approach, developing indicators progressively for strategically selected programs and sectors (for example, Canada, the United Kingdom, and Colombia), while others have taken a comprehensive, “big bang” approach, defining indicators for all existing programs and sectors at once (for example, Mexico and the Republic of Korea). In both cases, countries need to work continuously on their indicators to improve their quality and thus ensure that the indicators can meaningfully inform government processes.
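To make concrete what a well-defined indicator involves, the short Python sketch below is purely illustrative (the field names, figures, and progress calculation are assumptions made for this listing, not part of any country's indicator framework). It records an indicator with a baseline and a target and computes how much of the gap has been closed:

    # Illustrative sketch only: fields and values are hypothetical,
    # not drawn from any government's indicator framework.
    from dataclasses import dataclass

    @dataclass
    class PerformanceIndicator:
        name: str        # what is being measured
        unit: str        # unit of measurement
        baseline: float  # value when the program or reform starts
        target: float    # value the program commits to reach
        latest: float    # most recent measured value

        def progress(self) -> float:
            """Share of the baseline-to-target gap closed so far."""
            gap = self.target - self.baseline
            return 1.0 if gap == 0 else (self.latest - self.baseline) / gap

    # Hypothetical immunization-coverage indicator.
    coverage = PerformanceIndicator(
        name="Children under one year fully immunized",
        unit="percent", baseline=72.0, target=90.0, latest=81.0,
    )
    print(f"{coverage.name}: {coverage.progress():.0%} of the target gap closed")

Whether a country defines such indicators incrementally or all at once, each one needs a clear name, unit, baseline, and target before it can meaningfully inform budget or management decisions.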

No. 11 — June 2011
The Canadian Monitoring and Evaluation System

The Canadian government has a formalized evaluation policy, standards, and guidelines; and these have been modified on three occasions over the past three decades. Changes have usually come about because of a public sector reform initiative—such as the introduction of a results orientation to government management, a political issue that may have generated a demand for greater accountability and transparency in government, or a change in emphasis on where and how M&E information should be used in government. This note provides an overview of the Canadian M&E model, examining its defining elements and identifying key lessons learned.

No. 9 — April 2011
Combining Quantitative and Qualitative Methods for Program Monitoring and Evaluation: Why Are Mixed-Method Designs Best?

Despite significant methodological advances, much program evaluation and monitoring data are of limited utility because of an over-reliance on quantitative methods alone. While surveys provide generalizable findings on what outcomes or impacts have or have not occurred, qualitative methods are better able to identify the underlying explanations for these outcomes and impacts, and therefore enable more effective responses. Qualitative methods also inform survey design, identify social and institutional drivers and impacts that are hard to quantify, uncover unanticipated issues, and trace impact pathways. When used together, quantitative and qualitative approaches provide more coherent, reliable, and useful conclusions than do each on their own. This note identifies key elements of good mixed-method design and provides examples of these principles applied in several countries.

No. 8 — March 2011
The Australian Government’s M&E System

Countries from all over the world have shown an interest in Australia’s experience in creating a monitoring and evaluation (M&E) system that supports evidence-based decision making and performance-based budgeting. The Australian M&E system in existence from 1987–97 was generally considered to be one of the most successful and was driven by the federal Department of Finance (DoF). This note discusses the genesis, characteristics, and success of this particular system and briefly considers the Australian government’s approach to M&E after the system was abolished. The contrast between these two periods provides many valuable insights into success factors and challenges facing successful M&E systems, and into implementing evidence-based decision making more broadly.

No. 7 — February 2011
Use of Social Accountability Tools and Information Technologies in Monitoring and Evaluation

This note attempts to cover the basic concepts relating to the use of social accountability and information technology to monitor and evaluate public services and other governance processes that affect citizens. With the help of simple though practical examples that use these concepts, the note explains how to bring a qualitative change in monitoring and evaluation by making the whole process more citizen centered and outcome oriented. In turn, these practices can help improve the quality of service delivery. The note also covers a few country-specific initiatives from India to support the related arguments.

No. 6 — January 2011
The Design and Implementation of a Menu of Evaluations

Policy makers and program managers face major decisions every day resulting from insufficient funding, ongoing complaints about service delivery, unmet needs among different population groups, and limited results on the ground. There is a menu of evaluation types implemented by developing and Organisation for Economic Co-operation and Development (OECD) countries to tackle a wide range of policy and program management issues, taking into account time, resource, and capacity constraints. International experience highlights the importance of a gradual approach when introducing evaluation tools into country-level M&E systems. Different paths may work better for different countries depending on the main purpose of their M&E system, existing institutional capacity, the availability of funds, and external technical assistance.

No. 5 — December 2010
Key Steps in Designing and Implementing a Monitoring and Evaluation Process for Individual Country Service Agencies

This paper identifies key steps in designing and implementing a monitoring and evaluation (M&E) system for ministries and individual government agencies that provide services. These suggestions are intended to apply whether the ministry or agency works in health, education, social welfare, environmental protection, transportation, economic development, public safety, or any other sector. The system might have been ordered or requested by the president’s or prime minister’s office, by a minister, or by any agency head who wants to implement an M&E process. M&E development should focus on providing a process that yields regular outcome data (in addition to data on the organization’s outputs) that the agency and upper-level officials can use for accountability and, particularly, for managing these organizations, thereby helping officials improve their services to citizens.

No. 4 — November 2010
Reconstructing Baseline Data for Impact Evaluation and Results Measurement

Many international development agencies and some national governments base future budget planning and policy decisions on a systematic assessment of the projects and programs in which they have already invested. Results are assessed through Mid-Term Reviews (MTRs), Implementation Completion Reports (ICRs), or through more rigorous impact evaluations (IE), all of which require the collection of baseline data before the project or program begins. The baseline is compared with the MTR, ICR, or the post-test IE measurement to estimate changes in the indicators used to measure performance, outcomes, or impacts. However, it is often the case that a baseline study is not conducted, seriously limiting the possibility of producing a rigorous assessment of project outcomes and impacts. This note discusses the reasons why baseline studies are often not conducted, even when they are included in the project design and funds have been approved, and describes strategies that can be used to "reconstruct" baseline data at a later stage in the project or program cycle.
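To illustrate why the baseline measurement matters, the toy Python calculation below uses invented figures and function names (they are not from the note) to show how a baseline feeds the before-after and double-difference estimates that an MTR, ICR, or impact evaluation would report:

    # Toy calculation with invented figures: how a baseline measurement enters
    # simple before-after and double-difference estimates of change.

    def simple_change(baseline: float, followup: float) -> float:
        """Before-after change in an indicator for one group."""
        return followup - baseline

    def double_difference(project_baseline: float, project_followup: float,
                          comparison_baseline: float, comparison_followup: float) -> float:
        """Change in the project group net of the change in a comparison group."""
        return (simple_change(project_baseline, project_followup)
                - simple_change(comparison_baseline, comparison_followup))

    # Hypothetical school-enrollment rates (percent).
    print(simple_change(68.0, 80.0))                  # 12.0 points, before-after only
    print(double_difference(68.0, 80.0, 70.0, 76.0))  # 6.0 points, net of the comparison trend

Without a baseline, or a credible reconstruction of one, neither estimate can be computed; that gap is what the reconstruction strategies discussed in the note are meant to address.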

No. 3 — October 2010
M&E Systems and the Budget

Monitoring and evaluation (M&E) are means to multiple ends. Measuring government activities, constructing and tracking performance indicators across sectors and over time, evaluating programs -- these activities can be carried out and tied together with different objectives in mind. It would certainly be possible to use M&E purely as a way to improve transparency and accountability, by making more information on the workings and results of government programs available to the public. One can also focus M&E on managerial purposes, to reward performance inside ministries and agencies. But surely a crucial element of running an effective public sector would be missing if M&E were not used to inform the spending of public money. This briefing note will introduce the main issues surrounding M&E as a tool for budgeting -- a system usually referred to as performance budgeting -- to help policy makers make strategic decisions about their M&E systems by outlining different design choices and their respective advantages and pitfalls.

No. 2 — September 2010
Defining the Type of M&E System: Clients, Intended Uses, and Actual Utilization

This is the second note in a monthly series on government monitoring and evaluation (M&E) systems led by the PREM Poverty Reduction and Equity Group under the guidance of Jaime Saavedra, Gladys Lopez-Acevedo, and Keith Mackay, with contributions from several World Bank colleagues. The main purpose of this series is to synthesize existing knowledge about M&E systems and to document new knowledge on M&E systems that may not yet be well understood. The series targets World Bank and donor staff who are working to support client governments in strengthening their M&E systems, as well as government officials interested in learning about the uses and benefits of M&E and in adopting a more systematic approach toward M&E in their governments.


No. 1 — August 2010
Conceptual Framework for Monitoring and Evaluation

This note outlines the main ways in which M&E findings can be used throughout the policy cycle to improve the performance of government decision making and of government services and programs, including the use of M&E for evidence-based policy making, budgeting, management, and accountability. There are many different types of M&E tools and approaches, each with advantages and limitations. This note presents four examples of successful government M&E systems -- in both developed and developing countries -- and discusses some of their hard-earned lessons for building M&E systems. These lessons are evidence of what works and what doesn’t in developing and sustaining successful M&E systems.



Last updated: 2012-05-22




Permanent URL for this page: http://go.worldbank.org/CC5UP7ABN0