Success depends on knowing what works.

Bill Gates

Effective monitoring and evaluation is often called the holy grail of the development sector. An ideological turf war is being fought between the altruistic heart of the field worker and the neoliberal rationality of the funding executive writing the checks that drive the social good sector. “How much bang are we getting for our buck?” is the sentiment often echoed in meetings between donors, nonprofits, and community representatives.

Before we get started on what really happens when reports meet reality, let me set the “baseline” (in NGO speak) for how monitoring and evaluation is portrayed in multilateral bank literature, the intellectual home of the development sector. The following definition comes from a World Bank handbook on results-based monitoring and evaluation:

Monitoring and evaluation (M&E) is a powerful public management tool that can be used to improve the way governments and organizations achieve results.

Kusek & Rist, 2004

The core elements of a standard monitoring and evaluation program are:

  • Formulate outcomes and goals (design stage)
  • Select outcome indicators to monitor (inputs stage)
  • Gather baseline information on the current situation (inputs stage)
  • Set specific targets to reach and dates for reaching them (implementation stage)
  • Regularly collect data to assess whether the targets are being met (outputs stage/process monitoring)
  • Analyze and report the results (outcomes stage)
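
To make these stages concrete, here is a minimal Python sketch of how an outcome indicator, its baseline, and its target might sit together in one record. The class name, fields, and numbers are all invented for illustration, not drawn from any standard framework.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Indicator:
    """One outcome indicator with its baseline, target, and latest reading."""
    name: str
    baseline: float                  # value measured before the intervention
    target: float                    # value the program aims to reach
    target_date: date                # deadline for reaching the target
    latest: Optional[float] = None   # most recent monitoring measurement

    def on_track(self) -> bool:
        """Crude check: has the latest reading reached the target?"""
        if self.latest is None:
            return False
        if self.target >= self.baseline:     # indicator is meant to rise
            return self.latest >= self.target
        return self.latest <= self.target    # indicator is meant to fall

# Illustrative values only
anc_visits = Indicator(
    name="Pregnant women completing 4+ antenatal visits (%)",
    baseline=38.0,
    target=60.0,
    target_date=date(2020, 12, 31),
    latest=52.0,
)
print(anc_visits.on_track())  # False: 52% is still short of the 60% target
```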

In addition to these elements, a monitoring and evaluation program typically addresses the following questions about outcomes:

  • Is the program effective in achieving its intended goals?
  • Can the results of the program be explained by some alternative process besides this program?
  • What change and how much change occurred at the program or beneficiary level that is attributable to the program?
  • What is the cost per unit of output achieved by the program?
  • Is the program an efficient use of resources to meet intended impacts as compared to alternative investments?
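
The last two questions reduce, at their simplest, to arithmetic. Here is a worked sketch comparing two hypothetical programs; every figure is invented.

```python
# Cost per unit of output for two hypothetical programs; all figures invented.
program_a = {"cost": 1_500_000, "children_immunized": 5_000}
program_b = {"cost": 2_400_000, "children_immunized": 7_500}

for name, p in [("A", program_a), ("B", program_b)]:
    unit_cost = p["cost"] / p["children_immunized"]
    print(f"Program {name}: {unit_cost:.0f} per child immunized")

# Program A: 300 per child immunized
# Program B: 320 per child immunized
# On this single output metric, A is the more efficient investment, though
# attribution (how much change the program itself caused) still matters.
```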

Building on this baseline, here are five of the most common monitoring and evaluation challenges, and how to address them.

1. Set culturally contextual indicators

Indicators are the technology of global governance. The indicators used for performance assessment should reflect the region’s culture: a culture-centered approach to monitoring and evaluation helps capture the nuances that would otherwise be lost and blunt the impact evaluation. Without cultural understanding, the purpose of the endeavor is lost.

Example: Many indigenous communities do not accept the way Western medicine is delivered, such as in the Santali areas of Jharkhand, Odisha, and West Bengal, where men may not allow their wives or daughters to visit health centers. An indicator that simply counts the “number of pregnant women who do not consume IFA tablets” might therefore not paint the correct picture; it is equally important to understand the causative factors behind why women do not consume IFA tablets during pregnancy.

In this not-so-unique context, health care delivery programs have to construct the indicator to measure program outcomes in a culturally contextual manner. Every indicator is guided by an underlying ethos: to measure a particular objective of the program.
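
As a sketch of what such an indicator definition might look like, the fragment below pairs the quantitative count with a qualitative follow-up that records the causative factor behind it. Every identifier, field name, and option is hypothetical.

```python
# A hypothetical indicator definition: the quantitative count is paired with
# a qualitative follow-up that records the causative factor behind it.
# All identifiers, field names, and options below are invented.
indicator = {
    "id": "MCH-07",
    "measure": "pregnant women not consuming IFA tablets",
    "follow_up": {
        "question": "Why were the IFA tablets not consumed?",
        "options": [
            "family did not permit a health-centre visit",
            "tablets unavailable at the local centre",
            "side effects or taste",
            "did not know the tablets were needed",
            "other (describe)",
        ],
    },
}
```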

The question to ask is: Are your indicators culturally contextual?

2. Get real about community involvement

Expectations should be set with the community rather than handed down through a conventional top-down approach. The target community and beneficiaries should understand how the program will benefit them and how providing accurate data will only help improve their conditions. The stakeholder register should not be a tick-box exercise in the project management office but a repository of the genuine concerns and expectations of beneficiaries and funders.

Example: Sending an external field surveyor into an urban slum to collect details about income levels will not yield representative data. In many cases, families will be tempted to misreport: they might believe that stating a higher monthly income will make them eligible for a loan, or that stating a lower monthly income will make them eligible for a subsidy. Building trust with the community is essential to ensure that your field staff collect accurate data.

The question to ask is: Do target groups have “ownership” of the program? Were they consulted on how it was designed? Are they involved in management, monitoring, and evaluation?

3. Be agile

The Gantt chart often collapses in the heat of “the field”. The operational constraints on a development intervention are severe, and being “agile” about the trajectory of project implementation is often the best way to keep up the morale of the project team.

Example: Data collected from Phase 1 of the project might reveal that the intervention is not leading to better outcomes. In these situations, it is important to restructure the indicators and questionnaire to accommodate changes in the program intervention.

The question to ask is: Do we have buffer capacity in terms of resources to accommodate shocks from the field?

It might be useful to leverage mobile technology for data collection and monitoring, since it allows for dynamic changes to data collection questionnaires and metrics.
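
As a sketch of what that flexibility might look like in practice, the fragment below versions a questionnaire so that Phase 2 can probe a suspected cause without invalidating Phase 1 responses. The structure and field names are invented, not tied to any real tool.

```python
# A sketch of questionnaire versioning, assuming a mobile data-collection
# tool that can push an updated form to field devices mid-project.
# The structure and field names are invented, not tied to any real tool.
questionnaire_v1 = {
    "version": 1,
    "questions": [
        {"id": "q1", "text": "How many household members are school-aged?"},
        {"id": "q2", "text": "How many attended school last week?"},
    ],
}

def revise(questionnaire: dict, new_questions: list) -> dict:
    """Return a new version with extra questions appended, leaving the
    earlier version intact so Phase 1 responses stay interpretable."""
    return {
        "version": questionnaire["version"] + 1,
        "questions": questionnaire["questions"] + new_questions,
    }

# Phase 1 data showed weak outcomes, so Phase 2 probes a suspected cause.
questionnaire_v2 = revise(questionnaire_v1, [
    {"id": "q3", "text": "What is the main reason for missed school days?"},
])
print(questionnaire_v2["version"])  # 2
```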

4. Use relevant methods to evaluate change

Usually a combination of data collection techniques (i.e. a mixed-methods approach) is suggested for evaluating an intervention’s social change. Mixed-method designs combine the strengths of quantitative methods (unbiased generalization to the total population, precise estimates of the distribution of sample characteristics, breakdown into target sub-groups, and testing for statistically significant differences between groups) with the strengths of qualitative methods, which describe in depth the lived experiences of individual participants, groups, or communities.
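
As a sketch of the quantitative strand, the snippet below tests for a statistically significant difference between a treatment group and a comparison group using a two-sample t-test; all scores are invented for illustration.

```python
# The quantitative strand of a mixed-methods design: a two-sample t-test
# for a significant difference between groups. All scores are invented.
from scipy import stats

treatment = [62, 71, 58, 66, 74, 69, 63, 70]    # e.g. program schools
comparison = [55, 60, 52, 58, 61, 57, 54, 59]   # e.g. comparison schools

t_stat, p_value = stats.ttest_ind(treatment, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally < 0.05) suggests the difference is unlikely
# to be sampling noise alone; the qualitative strand then explores why.
```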

There are three main kinds of mixed-method design:

1. Sequential: The evaluation begins with quantitative data collection and analysis followed by qualitative data collection and analysis, or vice versa. Designs can also be classified according to whether the quantitative or qualitative components of the overall design are dominant.

Example: Consider a sequential mixed-method evaluation of the adoption of new seed varieties by different types of farmers. The evaluation begins with a quantitative survey to construct a typology of farmers. This is followed by qualitative data collection (observation and in-depth interviews) and the preparation of case studies. The analysis is conducted qualitatively, so this would be classified as a sequential mixed-method design in which the qualitative approach is dominant.

2. Parallel: The quantitative and qualitative components are conducted at the same time.

Example: Quantitative observation checklists of student behavior in classrooms might be applied at the same time as qualitative in-depth interviews are being conducted with teachers.

3. Multi-level: The evaluation is conducted on various levels at the same time.

Example: Think of a multi-level evaluation of the effects of a school feeding program on school enrollment and attendance. The evaluation is conducted at the level of the school district, the school, classrooms, teachers, students, and families. At each level, both quantitative and qualitative methods of data collection are used.

Multi-level designs are particularly useful for studying the delivery of public services such as education, health, and agricultural extension, where it is necessary to study both how the program operates at each level and also the interactions between different levels.

Qualitative methods help examine complex relationships and illustrate how programs and participants are affected by the socio-cultural and political context in which the program operates. Tailor the techniques to the sector or issue at hand and use technological platforms (such as an integrated field data management system or an electronic data collection tool like Collect, our Android application for field surveys) to better track change. After all, as the old adage goes, “What one cannot measure, one cannot manage!”

The question to ask is: Are you using the right mix of methods to gather data? Is your data reliable from a monitoring and evaluation perspective?

5. Design a credible monitoring and evaluation program

Keep in mind the Data Triangle: reliability, validity, and timeliness. Any monitoring and evaluation program has to inspire trust and credibility among the stakeholders in the program ecosystem: the beneficiaries, the funders, the program officers, and the field staff. Whether the program relies on process monitoring (checking whether process milestones are on track) or beneficiary monitoring (measuring impact on the recipient), the data it gathers should not only meet funder requirements but also help the nonprofit make better decisions. After all, a monitoring and evaluation program is not a witch-hunt but an initiative to track progress and learn from past lessons.
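
As a sketch of how the Data Triangle might translate into record-level checks, the fragment below flags one survey record for validity, timeliness, and reliability problems. The thresholds and field names are invented; real rules would come from the program’s design documents.

```python
from datetime import date, timedelta

# Record-level checks for the Data Triangle. The thresholds and field names
# are invented; real rules would come from the program's design documents.
def check_record(record: dict, today: date) -> list:
    """Return data-quality flags for one survey record."""
    flags = []
    # Validity: is the value inside a plausible range?
    if not 0 <= record["monthly_income"] <= 200_000:
        flags.append("validity: income outside plausible range")
    # Timeliness: was the record submitted soon after collection?
    if today - record["collected_on"] > timedelta(days=7):
        flags.append("timeliness: submitted over 7 days after collection")
    # Reliability: did the enumerator record the cross-check value?
    if record.get("income_recheck") is None:
        flags.append("reliability: no cross-check value recorded")
    return flags

record = {"monthly_income": 12_000,
          "collected_on": date(2019, 3, 1),
          "income_recheck": None}
print(check_record(record, date(2019, 3, 20)))
# flags the record for timeliness and reliability, but not validity
```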

The question to ask is: Is your monitoring and evaluation program designed to help you track progress and learn lessons from the past? Were all your stakeholders (beneficiaries, funders, program officers, and field staff) involved in designing it?

A development program should not be run to benefit the nonprofit but to empower the subaltern. That ethos is the terra firma of the development sector, and the monitoring and evaluation process is a substantial contributor to that end.

Author

Monishankar works with Atlan to develop resources for nonprofits to better manage their monitoring and evaluation processes. He is a development researcher based in Singapore and has worked in the enviro-social consulting space in India, Oman, and Singapore. A prolific writer, he has written for Huff Post India and Green Business Singapore and blogs at changethinker.com.

Comments

  1. Thank you, sir, for such nice information. I hope it will be useful for all of us.

  2. Ageru Kebede

     Important material. It helps keep program managers on track and in context. Thank you.

  3. Sammy Musunga

     Thanks for these good insights into common challenges encountered in M&E. I agree with the wrongs mentioned; however, I would like to add one thing that is often overlooked in M&E, whether purposely or unintentionally: making sure that your M&E is gender-sensitive. Development projects bring diverse impacts based on gender differences. Monitoring and evaluating through a gender-sensitive lens helps ensure that project implementation addresses major gender issues and makes the impacts beneficial to all, irrespective of gender.
