
Localization Part 2: Implementing a Localized Evaluation & Learning Framework

Lisa Frantzen, Associate Director, Evaluation and Learning

Isha Varma, Senior Consultant, Global Health Visions

“You have little power over what’s not yours” – Zimbabwean proverb 

As evaluators, we’re often brought in to assess the success of programs implemented in vastly different cultural contexts. So, to ensure that our evaluation is relevant, effective, and useful, how do we integrate the individuals closest to and most affected by the social issue so that they drive how an evaluation is conducted and decide how to interpret success?

In this blog, we explore some ways we have worked to do that in our two different roles – one as an external evaluator working with funder and nonprofit clients, and the other as an internal evaluator leading a nonprofit’s evaluation strategy. In Part 1 of this blog series, we discussed the idea of “localization” in the evaluation design phase. This blog picks up from that conversation and assumes that an Evaluation and Learning Framework has already been built, and that you are now ready to engage in data collection. As you begin, here are some important things to keep in mind:

Continue to integrate local experts in method selection. For our evaluation to generate insights into whether social change has occurred, it needs to be created by and with the communities being served. In Part 1 of this blog series, we talked about working with the community to co-create the theory of change and the indicators of success. This co-creation approach needs to continue into our selection of data collection methods and throughout the entire evaluation implementation phase. Local community-based experts (who may or may not self-identify as “experts”) strengthen the evaluation process because of their intimacy with the pressing needs of their community and their understanding of how data and insights can be collected and shared with quality and integrity. These guides help ensure data collection is appropriate and relevant, and they can help us think beyond our usual patterns.

When working with Princeton in Africa, TCC Group developed and refined data collection tools that could be used offline and featured open-ended questions so that program staff could work with local community members to determine what successful health services organizations looked like to them. In implementing an evaluation of the New Brunswick Healthy Housing Collaborative in New Jersey, we engaged community members from the largely immigrant neighborhoods who had received housing advocacy training. Through conversations held in Spanish, they voiced how they perceived the success of the program and where they wanted more support in the future. Tailoring the tools to work offline, to be translated, and to reflect what the ultimate end users imagine for their own future was key to collecting accurately localized data, because it engaged community members on practical terms.

Have honest discussions about evaluation constraints. Sometimes we don’t have the option to co-create and iterate on the evaluation design freely with the community. If this is the case, be honest about who or what prompted the evaluation and why. Instead of using language that can feel condescending (we’ve heard phrases such as “this will be good for your community”), ask, “Given the limitations, how can we together best design this evaluation for the community?” Listen actively and be honest about your ability to iterate on the evaluation design as you collect data and learn together.

Test the data collection tools together. It’s important to test the questions and tools as we develop them, and working with our local experts gives us realistic feedback about their validity. It also continues to ensure the relevance of the tools to the community’s learning needs. Even if one is using a “verified indicator” (an indicator previously formalized as appropriate for a given context), it’s not unusual to find that the context has changed quite a bit between the time it was verified and the time one finally conducts the data collection. Therefore, we must remember to test our tools at various points in time and as contexts change, to ensure they are collecting the data we are really interested in.

Case Study: Girls First Fund’s Approach to Integrating Community Voice

When working with youth and women in communities around the world, the Girls First Fund (GFF) used a three-step process to ensure questionnaires were as appropriate as possible. First, locally contracted Program Advisors would have in-depth conversations with grantee partners to identify learning needs. Once the most pressing learning needs were prioritized, the program teams and M&E staff would determine the most effective phrasing of questions to yield the most useful information. Our donor board had its fair share of feedback, but because the identified learning needs were presented in the voices of our grantee partners, the learning questions carried weight and sparked excitement about the impact that could grow from the learning.

 

Second, the questions would again be verified and pilot tested through conversations between local Program Advisors and local partners working with youth and women, and with the youth and women themselves. Partnering with TCC Group, we developed a clear theory of change and corresponding indicators that centered these questions, and we developed a plan to regularly hear from the communities about what was and was not working well in the programs. None of these steps would have been useful if those at “HQ,” who hold formal authority, did not defer to those who own their own culture (see the proverb above). Without that deference, those with authority only silence local actors and encourage behaviors, such as approving ineffective questions, that can ultimately promote misinformation and, in turn, provoke programming that meets the needs of those in power instead of those who are disadvantaged.

 

After the first round of results came in, the findings were checked and verified with local girls, women, and the grantee partners in workshops. These discussions were intended to surface both expected and surprising findings, and they were an opportunity to interrogate any results that just did not feel right and to gain deeper insights into the context in which the data was collected.

Listen and look for what happens in between the scripted questions. As data collectors, we’re taught to carefully prepare our questions and to separate participants into similar groups (by age, gender, social class, etc.) to increase comfort in qualitative data collection settings. As practitioners, we know this is idealistic, and that if we can’t adapt our plan, we will miss key sources of information. While we rely heavily on previously conducted research, that research is limited, and if prior work is all we are primed to see, we have little chance of making discoveries and advancing new narratives.

Case Study: Chatter in the Room, Learning from a Focus Group in Nepal

When working with Americares in a remote, high-altitude village in Nepal, I (Isha) was charged with learning how women choose to give birth and manage their health needs. Having arranged a conversation with only about ten women aged 21–40, I was surprised to see about 30 participants join: grandmothers and adolescent girls, as well as a few lingering, curious men on the outskirts. I asked my local staff if there was a respectful way to separate out the originally planned group and was advised that this might create tension, so we went ahead.

 

This conversation turned out to be one of my most enlightening and bias-dispelling focus group discussions ever. I learned that women in this community encourage one another when discussing access to clinical health care, and that younger women had learned from the grandmothers that home births are not safe: they heard about the grandmothers’ near-death experiences and watched, amid laughter, live demonstrations of giving birth while holding a rope tied to the ceiling. I also learned that young teenagers were open to asking about their own menstruation, and about excessive bleeding while on birth control, in front of their aunts, their mothers, and the few lingering men. I observed how the caste system manifests: women of a lower caste participated, but only from a bench outside, seated beyond a set of double doors.

 

This hyper-localized information was not described in my prep research, nor was it planned as part of my questionnaire, because of how I had been taught to segment populations. If these cues had not surfaced, we could have [mis]treated this community the same way we did others (for example, promoting health-seeking behaviors at a health facility without responding to the waiting room needs of different castes). This experience was a reminder to be human and to pay attention to what’s happening in the room before, in between, and after the lines of my scripted questionnaire.

Be realistic about the time it takes for effective community integration and for outcomes to be achieved. Time and time again we’ve witnessed a push to implement an evaluation on a timeline that is impractical and that sacrifices the quality of what we can learn from it. Those pushes almost exclusively come from audiences external to the context where the work takes place: timelines are set around board meeting presentations, the publication of an annual report, or the funder’s budgeting cycle. They don’t account for how long social change realistically takes, which risks excluding important perspectives and drawing false conclusions. Not sure what a realistic timeline is? That’s why having your locally based partners involved in the evaluation design is a critical step, one that sets up your evaluation for success.

Case Study: Co-creating timelines, Girls First Fund and TCC Group

In a project with Girls First Fund and TCC Group, we engaged a sample of grantees in a series of workshops. Using human-centered design methods, we worked to map out how long they have seen change take in reducing the prevalence of child marriage within their communities. We asked organizations a range of questions tied to the intended norms change, from “how long do you expect to be engaged with a woman to convince her that it’s ok to say no to sex?” to “how long will it take for a household to reach an equal division of household chores and childcare between partners?” Their answers helped us better understand and plan for data collection timelines, an externally commissioned longitudinal evaluation, and conversations with donors about the impact grantee partners expect to make. Once we had this information mapped to a timeline informed by multiple grantees across various geographies, explaining to donors that change on a given social norm will take a given amount of time became much easier. Initially, it felt that the investment of time and resources in multiple workshops might not be worth it, but ultimately it resolved many potential problems in our interpretation of results, and it gave power to our grantee partners.

As evaluators, it is our responsibility to be the bridge between the data sources and the ultimate end users of the data. This means deliberately having honest conversations with senior leadership within our organizations and with our partners, conversations that center the voices and learning needs of local communities. It means building on the expertise of the community itself and thinking of all the stakeholders as part of a collective effort to create lasting social change.

Thank you for joining us in these reflections on how we can localize or center community voice in our data collection practices. We hope you will join us in sharing what you are learning about localized evaluation practices as well!

Isha Varma has over 13 years of experience with international development nonprofits, intermediary funds, social enterprises, and foundations. Her expertise in Monitoring, Evaluation, and Learning has led her to build organizational strategies that optimize impact for both operational performance and ultimate social outcomes. She grounds her practice in rights-based participatory methods, human-centered design, and systems thinking. Isha has an MPH in the Measurement and Evaluation of Global Health Systems and Development from Tulane University’s SPHTM and a B.A. in Psychology from UCLA. She has spent time working in East and West Africa, Southeast Asia, and the Dominican Republic. 

Read Localization Series Part 1 | Read Localization Series Part 3
