Various participatory systems that enable citizens to analyse public services have emerged over the years. One of the most established is the Report Card, which has been applied in a number of countries, including India and the Philippines.
Report Cards are instruments to encourage public accountability. Modelled on the private sector practice of conducting client satisfaction surveys, report cards solicit user perceptions of the quality, efficiency, and adequacy of the various public services that are funded by taxpayers. Qualitative user opinions are aggregated to create a "score card" that rates the performance of service providers. The findings present a quantitative measure of overall satisfaction and perceived levels of corruption, among an array of other indicators. By systematically gathering and disseminating public feedback, report cards can serve as a "surrogate for competition" for monopolies – usually government owned – that lack the incentive to be as responsive as private enterprises to their clients' needs. They are a useful medium through which citizens can credibly and collectively "signal" to agencies about their performance and press for change.
The Report Card is intended to examine the services provided by the local authority through a survey of the recipients or beneficiaries of these services and to rate them according to a scale that measures efficiency and value.
The larger purpose of the Report Card tool is to use the survey results to improve the services provided and to investigate why services have fallen short of expectations.
Linkage to Transparency
The Report Card is a way of ensuring transparency in the provision of public services. The survey involves citizens who are the intended beneficiaries of the services as well as the taxpayers. It is used to gauge citizens’ satisfaction, scrutinise public officials and expose their inability to adequately provide the services. More importantly, the survey is used to find a means for improving the provision of such services. The whole process serves to improve the quality of service through better accountability.
How it Works – The Key Elements
An effective Report Card initiative requires a skilled combination of four things:
an understanding of the socio-political context of governance and the structure of public finance
the technical competence to execute and analyse the survey scientifically
a media and advocacy campaign to bring the findings into the public domain, and
steps aimed at institutionalising the practice for iterative civic actions.
Generally a Report Card initiative goes through the following key stages:
Identification of Scope, Purpose and Actors. The first step in initiating a Report Card is to determine the scope of the evaluation: a sector, industry, or unit of service provision. Criteria vary with context: agencies receiving the largest amounts of public funds, agencies most directly relevant to the poor, agencies with sensitive mandates such as security and policing, agencies facing a high volume of anecdotal complaints from users, etc.
Since administration of a report card initiative is a technical exercise, it is important to identify credible policy institutes or other NGO-type intermediaries within the city or outside, who can undertake the exercise. Respectability of the intermediary organization directly affects the credibility of the findings.
Another key element is the identification of the broad class of users from which the sample would be drawn. This will depend on what sector is being evaluated. Finally, from an implementation point of view, the audience is key. The general public and the media are obvious beneficiaries of the findings, but it is crucial to determine the various target groups at the beginning of the exercise.
Design of Questionnaires. Following the identification of stakeholders, focus group interactions with at least two constituencies – the providers of the service and its users – are necessary to provide inputs to the questionnaire design. Providers of the service can indicate not only what they have been mandated to provide, but also areas where feedback from clients can improve their services. Similarly, users can provide initial impressions of the service, so that areas that deserve extensive probing can be catered for. After the questionnaire is designed, it will be necessary to pre-test it with similar groups before a full-scale launch.
While there is a trade-off between detail and time, a critical mass of information has to be collected to ensure credibility and usefulness. If questionnaires are exhaustive, mechanisms will have to be worked out to alleviate the burden and make the sessions mutually convenient to the enumerator and the respondent. A useful practice is to break the questionnaire into different modules. Each module can then be answered by more than one member of the family, or whatever the unit of analysis is.
Sampling. The need for a critical sample size has to be balanced against budgetary, time, and human resource constraints. While surveys are, by definition, different from censuses, a larger sample size is usually better. There is, however, no universally recommended sample size; in fact, making the sample more representative is a more important consideration than simply expanding numbers. After an appropriate aggregate sample size has been determined, allocations have to be made to appropriately demarcated geographic regions. The standard principle here is multi-stage probability sampling, with probability proportional to the size of the population.
Finally, while households are usually the most convenient units of analysis, caveats about cultural mores and intra-household distribution of wealth and power need to be observed. Within sample households, sample respondents have to be chosen. Usually, the head of the family is approached for answers. If questionnaires are lengthy and broken into modules, he or she may assign other members to answer different modules. Men and women, for example, may be better informed about some modules than others. If selected sample households are not forthcoming, households with similar socio-economic, age, ethnic and gender characteristics should be chosen.
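The allocation and selection logic described above can be sketched in a few lines of Python. This is a minimal illustration, not a complete sampling design; the ward names and population figures are invented assumptions, not data from any actual survey.

```python
import random

# Hypothetical ward populations (illustrative values only).
wards = {"Ward A": 12000, "Ward B": 30000, "Ward C": 8000, "Ward D": 50000}

def allocate_sample(populations, total_sample):
    """Allocate interviews to areas in proportion to population size."""
    total_pop = sum(populations.values())
    return {area: round(total_sample * pop / total_pop)
            for area, pop in populations.items()}

def pps_draw(populations, n_draws, seed=42):
    """Draw areas with probability proportional to size (with replacement)."""
    rng = random.Random(seed)
    areas = list(populations)
    weights = [populations[a] for a in areas]
    return rng.choices(areas, weights=weights, k=n_draws)

allocation = allocate_sample(wards, total_sample=500)
drawn = pps_draw(wards, n_draws=3)
```

A real multi-stage design would repeat the proportional draw at each stage (district, ward, enumeration block) before selecting households, but the weighting principle is the same.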
Execution of Survey. A cadre of survey personnel has to be selected and trained to conduct the exercise. Survey personnel or enumerators should not only be thoroughly informed about the basics and the purpose of the project, but also be skilled in questioning respondents with courtesy and patience. Like the pre-testing of questionnaires, the work of the enumerators themselves has to be pre-tested, with preliminary feedback used to modify the questionnaires or the tactics of questioning.
In order to ensure that recording of household information is being done accurately, it is often useful to undertake spot monitoring of question sessions at random. If questionnaires were misinterpreted, or some answers found inconsistent, re-interviewing is required. Finally, upon completion of each interview, the enumerator should ideally go over the information collected and identify inconsistencies. After the record is deemed satisfactory, it is put into standardised data tables.
Data Analysis. This is the stage when all data is consolidated and analysed. Typically, respondents rate or give information on aspects of government services on a scale, for example, –5 to +5, or 1 to 7. These ratings of representative users on the various questions are then aggregated, averaged, and a satisfaction score expressed as a percentage. There are numerous caveats in this technique including, for instance, non-representativeness of the sample, or inter-sector non-comparability. The key point is to ensure that well-tested survey techniques that conform to international standards are employed and that data is subjected to standard error analysis and tests of significance.
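As an illustration of the aggregation step, the sketch below converts ratings on a 1-to-7 scale into a percentage satisfaction score and computes the standard error used in significance testing. The ratings are invented for demonstration.

```python
import statistics

# Hypothetical ratings on a 1-to-7 scale for one service attribute.
ratings = [5, 6, 4, 7, 3, 6, 5, 2, 6, 5]

def satisfaction_percent(scores, lo=1, hi=7):
    """Rescale the mean rating onto a 0-100 satisfaction score."""
    return 100 * (statistics.mean(scores) - lo) / (hi - lo)

def standard_error(scores):
    """Standard error of the mean, the basis for tests of significance."""
    return statistics.stdev(scores) / len(scores) ** 0.5

score = satisfaction_percent(ratings)  # mean 4.9 rescales to 65.0%
se = standard_error(ratings)
```

In practice the score would be computed per question and per sub-group, with the caveats noted above about representativeness and inter-sector comparability.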
Dissemination. The findings of the Report Card should aim at being constructively critical. It may be unhelpful if the goal is solely to embarrass or laud a service provider's performance. This is why it is important to share the preliminary findings with the service provider concerned. An opportunity for the authorities to respond to some of the serious criticisms must be provided, and genuine grievances on their part, such as staffing or budgetary constraints, should be fed back into the report to alter the tone of the recommendations.
The media is the biggest ally in report card initiatives. The findings should be launched in a high-profile press conference, and all-out efforts made to ensure that the coverage is wide. This can require preparation of press kits with small printable stories, media-friendly press releases, and translation of the main report into local languages. Making the findings widely known and available makes it difficult for the agency concerned to ignore the findings.
Following the publication of the report cards, interface between the users and the service providers ideally in a town-hall type setting is recommended. This not only allows the two parties to constructively engage in a dialogue based on evidence, but also puts pressure on the service providers to improve their performance for the next round. If more than one agency is being evaluated, these kinds of settings can foster a sense of healthy competition among service providers. A direct interaction between the two parties involved is also a way to ensure an operational link between information and action.
Finally, new developments in information technology should increasingly be used to solve old problems of accountability. Through websites and discussion boards on the Internet, the findings of report cards can not only reach a wider audience but also engage literate and informed taxpayers in solving public problems.
Institutionalisation. Report card initiatives, especially those that arrive as one-off experiments, will serve little long-term purpose unless implementation is followed through on a sustained basis. Institutionalisation is also important to exploit the usefulness of credible report cards in full by making them more than psychological pressure tools on service providers. Ideally, governments can use report cards for performance-based budgeting and link public opinion with public spending. How these efforts are to be institutionalised should thus be a concern warranting some thought right from the outset.
Institutionalisation of the initiative can take a variety of forms depending on country circumstances. Three common models that exist are: i) independent civil society organizations undertake the initiative (India); ii) service providers themselves seek client feedback directly (United Kingdom); and iii) an oversight agency undertakes the initiative (United States). If non-governmental groups are doing the exercise, a coalition approach that brings together technically versed research, advocacy and media organizations can be effective.
Successful use of the Report Card in Bangalore, India
The Filipino Report Card on Pro-Poor Services
Score Card Surveys through Committees of Concerned Citizens in Bangladesh
Further information and contacts
The Public Affairs Centre, 422, 80 Feet Road,
VI Block, Koramangala, Bangalore 560095, India.
Telefax: +91-80-5520246/5525452/5525453, 5533467/5537260
Transparency International Bangladesh,
Progress Tower (5th and 6th Floors), House No. 1,
Road No. 23, Gulshan-1, Dhaka 1212, Bangladesh.
Tel. & Fax: +880-2-988-4811, 882-6036
The World Bank,
Environment and Social Development Sector Unit,
1818 H Street, N.W. Washington, DC 20433 U.S.A.
Notes and references
- An important consideration during the execution of report card surveys must be the integrity of the system of data collection. If enumerators are paid per questionnaire submitted, it is possible that they may misuse the system. In order to offset the risk of skewed findings due to fraudulent questionnaires, payments must be de-linked from the number of interviews and surprise checks conducted on enumerators in the field.