
Performance Measures Topic Brief

 

Introduction

Practitioners and scholars of legal and judicial reform are increasingly interested in finding better ways of assessing the performance of legal systems and the success of reform projects. The greater use and sophistication of performance indicators is generally a welcome development. However, designing appropriate indicators entails a host of difficult conceptual and practical problems. This topic brief provides a cursory overview of how performance measures might be used to aid both reform projects and the day-to-day management of public and private institutions and agencies. The brief also identifies some of the problems and pitfalls associated with these sorts of indicators.

Existing Performance Indicators

A number of developed countries have adopted schemes for measuring aspects of the performance of their own legal systems, and elements of these systems might prove useful for legal reform practitioners working in the developing world. For example, the United States Bureau of Justice Assistance and the National Center for State Courts have collaborated on a Trial Court Performance Standards project, which focuses on indicators to assess trial court "outputs" in five areas - access to justice, timeliness, fairness, independence and accountability, and public confidence (Cole 1993, U.S. Dept of Justice 1997). In the United Kingdom, reforms to legal aid have been accompanied by the development of indicators used to measure the "quality" of the services provided by legal aid lawyers (Goriely 1994, Sherr et al. 1994). The Spanish Judicial Council has developed an approach to measuring the overall workload of courts that considers the number of hours required to complete various judicial tasks. And Australia provides, in its annual Report on Government Services, extremely detailed information on various aspects of court administration, including administrative structure, number of cases, time and money expenditure per case, and accessibility of courts. These data are broken down by geographic region, type of court, and type of case.

These and other efforts to develop indicators for the legal system ought to be of interest to practitioners in the development field. However, existing models do not address many of the important issues facing developing countries. For example, reliable measures of corruption, the use of informal or traditional dispute-resolution systems, and rights-consciousness are often needed in developing-country contexts, but these variables rarely appear in the assessment systems developed in richer countries. The types of court administration and service quality measures gathered in wealthy countries are also extremely costly and time-consuming to collect, and it may not be realistic to expect developing-country governments or resource-constrained donors to gather such extensive data. The experience of developed countries may still be a useful guide, but there are no existing models of legal system evaluation that can be taken "off the rack" and used in developing countries.

The Uses of Performance Measures

There are two broad categories of performance indicators that might be useful for legal reform projects. The first consists of internal "housekeeping" indicators that track how efficiently a program is managing its resources, whether it is meeting scheduling targets, and so forth. These sorts of indicators are of obvious importance to project managers, but they are not the focus of this topic brief. Instead, this brief discusses the use of "external" performance indicators to measure the performance of the legal system the reform project is supposed to benefit, both before and after outside intervention. These kinds of performance measures have at least three possible applications in the context of legal reform projects.

First, performance indicators may be useful for diagnosing problems, i.e. for ex ante evaluation of how a legal system is (or is not) functioning. In order to figure out where to target scarce reform resources, it helps to understand the nature and magnitude of the most significant deficiencies. Performance measures that identify problem areas are therefore useful both for program design and for making reform efforts politically acceptable to governments or other institutions that may otherwise be reluctant to admit that there are problems with the legal system.

Second, performance indicators might be used to assess how well reform projects are working in practice, i.e. whether they are having their intended salutary effects. Again, such performance measures have both substantive and political uses. Substantively, valid and reliable data on how well reform projects are achieving their goals aids in the adaptation of existing projects and in the design of new ones. Politically, the combination of resource constraints and the perceived intangibility of the goals of legal reform makes it desirable to be able to demonstrate to skeptical donors and host governments that a reform program is having a measurable effect on some aspect of system performance.

Third, valid and reliable performance indicators can be used to construct better theories about the operation of the legal system, the relationship between the legal system and larger economic or social development goals, and the impacts of various kinds of intervention and reform. Good data is useful in generating and refining good theories, and theories of legal development, in turn, influence both the design of specific reform programs and overall reform strategy.

For these and other reasons, legal reformers are interested in developing effective measures of system performance. However, designing valid and reliable performance indicators for the legal system is especially difficult.

Problems with Performance Measures

Perhaps the biggest conceptual challenge in the design of performance indicators for the legal system is choosing what to measure. In legal reform, there is no clear "bottom line", analogous to profitability in the private sector, toward which efforts are ultimately directed. Indeed, as scholars of public administration have long stressed, the "ends" or "outputs" of government agencies in general are diffuse, hard to measure, and at times even contradictory (see DiIulio 1993, Wilson 1993). This is perhaps especially true of the legal system, which ideally is supposed to provide, among other things, a predictable framework of rules for commercial and social interaction; an efficient, accessible, and just dispute-resolution mechanism; the preservation of public order; and the protection of individual rights - all at a publicly acceptable cost. Most people would agree with these goals in principle, but they would prioritize them differently. Moreover, many, if not most, of these ends are at least somewhat subjective. Who is to say what is "just", after all? What level of cost is "acceptable"? Because of the variety and subjectivity of these goals, designing indicators to assess the performance of the legal system as a whole is both methodologically tricky and politically sensitive.

A related issue concerns the appropriate mix of objective quantitative performance indicators and qualitative evaluations. Quantitative indicators are appealing because they are relatively concrete and often more objective - that is, the value of the indicator is less sensitive to the identity of the observer doing the measurement. Quantitative measures may also make cross-country or inter-temporal comparison more feasible. However, the scarcity of good data and the inherent subjectivity of many aspects of legal system performance limit the areas where quantitative data are available or relevant. It may be possible to measure objectively case processing times or legal expenditures per case. It may even be possible to construct quantitative measures that capture things like commercial confidence in the legal system and expropriation risk - Clague et al. (1996), for example, analyze black market currency premiums and proportion of "contract-intensive" money to get at these variables. It is much harder to measure objectively how "just" or "fair" the legal system is, or whether it is "legitimate" in the eyes of the general population. Similarly, there is no ready means of quantifying overall respect for the "rule of law".
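
To make one of these objective measures concrete, the sketch below computes a contract-intensive money ratio of the sort analyzed by Clague et al. - the share of broad money held as bank deposits rather than cash, and hence in forms whose value depends on banks and courts enforcing claims. This is only an illustration: the function name and the figures are hypothetical and are not drawn from the cited study.

    # Illustrative sketch (Python): contract-intensive money as a rough proxy
    # for confidence in contract enforcement. All figures are placeholders.

    def contract_intensive_money(m2: float, currency_outside_banks: float) -> float:
        """Return the non-currency share of broad money, (M2 - C) / M2."""
        return (m2 - currency_outside_banks) / m2

    # Hypothetical economy: broad money of 500 (local currency units),
    # of which 120 circulates as cash outside the banking system.
    print(round(contract_intensive_money(500.0, 120.0), 2))  # 0.76

A higher ratio suggests that people are willing to hold claims that rest on third-party enforcement; a low ratio may signal weak confidence in banks and courts, though many other factors also affect it.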

Because of these limitations, it is important not to rely exclusively on objective quantitative performance measures. Doing so may lead to an unconscious bias in favor of certain system goals - those that are easily quantified - at the expense of other less tangible, but no less important, ends. In light of these concerns, there have been attempts to gather more subjective, qualitative data on the performance of the legal system. This type of qualitative data usually takes the form of expert evaluations or surveys. (The results are sometimes quantified in the form of index scores, but these rankings are still essentially qualitative in nature.) The pros and cons of this type of performance indicator are more or less the inverse of those of objective quantitative indicators. On the one hand, qualitative evaluations allow for more nuanced assessments of performance and can be applied to aspects of the legal system that are not amenable to quantitative measurement. On the other hand, the subjectivity of such indicators is a serious concern. Alleged "experts" may not actually have much first-hand knowledge of the aspect of the system they are called on to assess. Also, country experts may come to the task with their own vested interests and biases, which might color their evaluations. Indeed, there may be a trade-off between an expert's first-hand knowledge of a legal system and her potential bias. An additional concern is that the use of index scores to quantify subjective expert evaluations may create an illusion of objectivity and precision.

The problems with both quantitative and qualitative indicators probably cannot be eliminated entirely. The best approach is likely to use a mix of indicators - the appropriate mix will depend on the specific project or research question - and to keep their limitations and pitfalls firmly in mind. Often it makes sense to use both subjective and objective means to measure the same thing. For example, when studying judicial efficiency, it makes sense to measure mean trial duration and also to survey users and/or the general population for their perception of whether the courts handle cases efficiently. We can have more confidence in findings that are supported by both quantitative measures and subjective evaluations; if there is a discrepancy between different measures, that is in itself an interesting finding and may lead to fruitful insights.
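
As a minimal illustration of pairing the two kinds of measures, the short sketch below computes a mean case-processing time from invented case records alongside an average survey rating of perceived court efficiency; all variable names and figures are hypothetical.

    # Minimal sketch (Python): an objective indicator and a subjective one
    # for the same phenomenon, judicial efficiency. All data are invented.
    from statistics import mean

    case_duration_days = [210, 340, 95, 400, 180]   # sampled closed cases
    perceived_efficiency = [2, 3, 2, 1, 3, 2]       # survey, 1 (poor) to 5 (good)

    print(f"Mean duration: {mean(case_duration_days):.0f} days; "
          f"mean perceived efficiency: {mean(perceived_efficiency):.1f} / 5")

If the two measures move together, confidence in the finding grows; if case-processing times improve while perceptions do not, the gap itself becomes a question worth investigating.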

The next major issue is also related to the problem of deciding what to measure. Even if a reform project has a clearly defined broad goal, one still must decide what sort of output measures to use. One approach is to measure very narrowly defined project outputs that are clearly and directly related to project performance. To take a simple example, a judicial training program can count the number of judges who successfully complete the training course. (This kind of indicator is sometimes referred to as an "output" measure as opposed to an "outcome" measure. However, the distinction between "outputs" and "outcomes" is somewhat murky.) The problem with this sort of measure is that it does not capture the program's success at meeting its ultimate goals. Yet it is difficult to construct performance indicators that allow observers to adequately assess the broader impact of a reform program. Often, the desired "outcome" of a reform program is a broad systemic improvement, such as giving the poor greater access to justice or fostering a more efficient credit market. These broad goals, however, are affected by many social, cultural, economic, and political factors, and the marginal impact of a reform program - even a thoroughly successful one - may be hard to detect, no matter how accurate the indicators for the broad systemic goal. This problem is compounded by the fact that changes in the legal system, like educational or cultural changes, are likely to have long-term rather than short-term effects on broader patterns of social and economic development. Again, a mix of indicators is probably the best way to handle this problem, but there is no magic formula that can determine the proper mix. Practitioners should consider these issues up front, to make sure, first, that their indicators measure the phenomenon they are actually interested in and, second, that the indicators could realistically be expected to change noticeably in response to a reform program within the specified time frame.

An additional issue that must be considered is the trade-off between country specificity and inter-country comparability. The essence of the problem is that performance indicators are most useful when they are used comparatively. Knowing how long it takes to conclude a bankruptcy proceeding, for instance, means little unless one has some notion of how long it should take, and establishing such a baseline is often the conscious or unconscious result of comparison with other countries. Inter-country comparability is also valuable because it facilitates the development of general theory. But legal systems are often quite different and are therefore not directly comparable in many respects, so raw numbers may be misleading. For example, attempts to compare the number of lawyers per capita in different countries often founder on the question of who ought to count as a "lawyer" (Galanter 1993). Of course, one can design indicators that are specific to a particular system and refuse to make comparisons with other countries, but such an approach is severely limiting. In designing performance indicators, scholars and practitioners must be sensitive to these issues and define carefully which measures support meaningful and useful cross-country comparisons.

A final problem with performance indicators is more mundane, but no less important. Collecting data is costly and time-consuming, and, as a general rule, the better and more nuanced the data, the more expensive it will be to gather. Cost issues are a particular problem in many developing countries, where reliable information about even basic aspects of the legal system may be unavailable. In many cases, it may make sense to invest heavily up front in gathering information before proceeding with any intervention. In general, good, detailed information allows for the construction of more sophisticated theories of how the legal system operates, and theory in turn can suggest new and more cost-effective performance measures. Reform programs need to decide up front what proportion of resources will be directed toward information-gathering, and ought to consider carefully how to extract the maximum amount of relevant information at minimum cost.

Conclusion

More and better performance indicators are desperately needed in the legal reform field. Legal and judicial reform is an extremely complicated endeavor, and it is hard to imagine that it could be successful without valid, reliable data on how the legal system works and how well reform programs are achieving their goals. However, if performance indicators are not designed carefully, they can do more harm than good. It is therefore important to be explicit up front about what various indicators are supposed to measure, to consider carefully the appropriate mix of qualitative and quantitative indicators, and to devise measures for which valid and reliable data can be gathered in a cost-effective manner.

References

American Bar Association "Guidelines for the Evaluation of Judicial Performance" [Paper prepared by the Special Committee on Evaluation of Judicial Performance, American Bar Association] (1985)

Australia Steering Committee for the Review of Commonwealth/State Service Provision, "Court Administration" Chapter 9 in Report on Government Services 2001 (Canberra: AusInfo, 1999) pp. 401-446

Bordonaro, Robert F. "Performance Measurement for Public Sector Management" [Paper prepared for the World Bank Poverty and Social Policy Department] (1996)

Center for Democracy and Governance "Handbook of Democracy and Governance Program Indicators" [unpublished draft] (Washington, D.C.: Management Systems International/USAID) (1998)

Clague, Christopher, Philip Keefer, Stephen Knack, and Mancur Olson "Property and Contract Rights in Autocracies and Democracies" Journal of Economic Growth 1:243-276 (1996)

Cole, George F. "Performance Measures for the Trial Courts, Prosecution, and Public Defense" pp. 87-108 in Performance Measures for the Criminal Justice System (U.S. Department of Justice, Bureau of Justice Statistics, 1993)

DiIulio, John J. "Measuring Performance When There Is No Bottom Line" pp. 143-156 in Performance Measures for the Criminal Justice System (U.S. Department of Justice, Bureau of Justice Statistics, 1993)

Galanter, Marc "News from Nowhere: The Debased Debate on Civil Justice" Denver University Law Review 71(1):77-113 (1993)

Goriely, Tamara "Debating the Quality of Legal Services: Differing Models of the Good Lawyer" Legal Profession 1(2):159-171 (1994)

Goriely, Tamara "Quality of Legal Services: The Need for Consumer Research" Consumer Policy Review 3(2):112-116 (1993)

Kapoor, Ilan "Indicators for Programming in Human Rights and Democratic Development: A Preliminary Study" [Paper prepared for the Political and Social Policies Division, Policy Branch, Canadian International Development Agency (CIDA)] (1996)

Knack, Stephen and Nick Manning "A User's Guide to Operationally-Relevant Governance Indicators" [unpublished draft] (World Bank, PRMPS) (2000)

Malleson, Kate "Judicial Training and Performance Appraisal: The Problem of Judicial Independence" Modern Law Review 60(5):655-667 (1997)

Schacter, Mark "Means…Ends…Indicators: Performance Measurement in the Public Sector" (Ontario: Institute on Governance Policy Brief No. 3) (1999)

Sherr, Avrom, Richard Moorhead, and Alan Paterson "Assessing the Quality of Legal Work: Measuring Process" Legal Profession 1(2):135-158 (1994)

Spanish Judicial Council Measuring the Workload of Courts: The Spanish Approach [translation of Section 3.15.3(F) of the Spanish Judicial Council's 1999 Annual Report, prepared by the World Bank Legal Institutions Thematic Group. Publication information for the original report is: Consejo General de Poder Judicial, Memoria, Volumen 1, Madrid: 1999, pp. 257-275] (1999)

U.S. Department of Justice, Bureau of Justice Assistance "Trial Court Performance Standards and Measurement System" (U.S. Department of Justice, Office of Justice Programs, Program Brief, July 1997)

Wilson, James Q. "The Problem of Defining Agency Success" pp. 157-165 in Performance Measures for the Criminal Justice System (U.S. Department of Justice, Bureau of Justice Statistics, 1993)
