For NGOs with offices in a wide variety of geographic locations, establishing a Monitoring and Evaluation (M&E) system that works for both the headquarters office and field offices can be a real challenge. If those offices operate in different countries, that challenge becomes even harder.
As a former program manager for Latin American health programs, I appreciate that needs and outcomes differ by region, by country, and sometimes even by community. I remember long conversations with colleagues about how a solution for one geographic area would never get off the ground in another! So, how can we create a measurement framework that serves both field offices’ and headquarters’ needs?
One way is to develop a standard process for mapping out the expected country-specific outcomes (i.e., developing a logic model) and for creating a blend of common and customized indicators for each country or region. Such a process of organizational feedback at multiple levels helps build an organization’s adaptive capacity, i.e., its ability to monitor, assess, and respond to changes in its environment.
An example of this can be seen in TCC Group’s recent work with Hand in Hand International (HiH), supported through Johnson & Johnson’s Healthy Futures evaluation capacity-building program. HiH works to fight poverty through grassroots entrepreneurship and operates out of six country offices across Africa and Asia. These offices need to understand the progress of their local efforts with women’s groups (which help foster the entrepreneurship) and to report these progress indicators to HiH’s headquarters office in London. The London office then needs to be able to summarize the progress of the work with women’s groups across all of the countries in which it operates. But in order to decide what data can and cannot be aggregated, HiH needs to understand whether its women’s groups interact in the same way in Eastern Africa as in Afghanistan. And can the same outcomes be expected for each office?
To serve these diverse needs, we piloted an evaluation framework development process (see Figure 1) with two of the HiH offices. The Eastern Africa office served as the primary pilot and worked with us to create the initial framework, while the Afghanistan office shadowed the process so it could later replicate it for its own office.
Our team first mapped out the Eastern Africa office’s desired outcomes for its region by creating a draft logic model based on a review of Eastern Africa’s strategic plan and other documents, and on discussions with HiH staff. We then held a working session on the draft logic model with the Eastern Africa, Afghanistan, and London teams. While this specific model was for Eastern Africa, the Afghanistan team played a critical role, asking questions to clarify meaning within the Eastern Africa context and comparing the outcome pathways with those in their own country. The discussions about differences in outcomes were also critical for the London M&E manager, who is responsible for understanding these differences and making decisions about when to aggregate data across offices.
Once the teams felt that a strong logic model was in place, we developed an evaluation question and evidence matrix. At TCC Group we use these matrices to begin to operationalize the logic model. Here we mapped out the evaluation questions that most need answering, the indicators and measures that will help answer those questions, and the sources of data that can be used. As part of this mapping process, we reconvened with the Eastern Africa, Afghanistan, and London teams. Together we prioritized the indicators for Eastern Africa to collect based on importance, feasibility, and use. Here’s where the conversations and questions got interesting!
What are the differences between the women’s groups in Eastern Africa and Afghanistan? Would health information be shared between women in these groups? Would HiH staff be able to obtain this information from group leaders? These conversations again led to important understandings between the teams about how each region can address its own needs while contributing to the organization-wide goal of connecting people to employment through entrepreneurship.
The pilot process concluded with the development of data collection tools that would enable collection of the newly developed indicators. This again required the teams to listen to each other’s context-specific factors and to find a balance between local and headquarters needs. HiH now has an established process for developing evaluation frameworks, which it will replicate with the remainder of its offices. Logic models will be country-specific but follow a similar format, and indicators for each country will be a blend of core indicators, which will be aggregated by the London office, and customized indicators particular to that country or region. And, most importantly, HiH now has an M&E system in place that gives it a greater understanding of what is needed to achieve its desired impact and how that varies by region.
For more information, see: http://www.handinhandinternational.org/hand-in-hand-updates-me-with-help-from-johnson-johnson/
For more information on how this applies to humanitarian aid organizations, see TCC Group’s briefing paper: http://bit.ly/2mNflUW
Photo credit on TCC homepage: Sisters of Faith | Business training session | Mumandu, Kenya from http://www.handinhandinternational.org/approach/