
Localization Part 3: Whose Story Is It? Practices for Using a Localization Approach to Share and Interpret Evaluation Findings

Crucial to using a localization approach (i.e., integrating the voices of those who are most affected by the social or environmental challenges being addressed) is a mindset and a belief that everyone who touches a program or intervention brings skills to the table that can contribute to a successful evaluation. We are all taking in information, making judgments about the value of something, and deciding what to do next. Our role as evaluators is to bring tools, processes, and systems into the mix in a way that guides decision making. As we support organizations and collaboratives in interpreting findings, our main role is to facilitate conversations on how to use the data to best impact social change.

While ‘interpreting findings’ often feels like it comes at the end of an evaluation, good localization integrates various stakeholders and participants from the very start and continuously along the way, reflecting on questions such as: Are we asking the right questions? Are our landscape analysis sources relevant? What are the implications of this aspect of the evaluation on the community? This not only strengthens interpretation and use of findings down the road, but also builds real relationships that are important for sharing recommendations and doing something new in an organization, program, policy, or campaign.

In parts 1 and 2 of this blog series, we discussed incorporating a localization approach in the evaluation design and data collection phases. Integrating the voices of those who are most affected by the challenges being addressed must continue as we analyze, interpret, and decide upon the uses of the data we have collected through our evaluation. In this third and final installment of this blog series, we share some of the techniques we’ve used to localize or center community voice when interpreting and using evaluation findings.   

Conduct mindset checks.  

When dealing with busy schedules and timelines on a project, it’s easy to forget some of your original intentions for your approach. We use a mindset check to help ensure we are pausing and refocusing not just on implementing our planned activities, but on how we are implementing them. A few questions we ask ourselves for a mindset check are:

  • How have the people most affected been involved in interpreting findings across the evaluation?  
  • Are we sharing information in a way that is accessible and interpretable to those who aren’t trained as evaluators?  
  • Did we budget enough time for meetings? For dissemination?  
  • How are we as evaluators actively listening? What indicators are people giving us that show we are listening to them as they engage with the findings? 

Asking and answering these questions helps bring us back to the use of a localization approach and ensure that we are truly listening to those who interact with or are affected by the program or intervention.   

Get creative about how to bring people into an evaluation conversation.  

While not everyone may get excited about reading an evaluation report, there are many creative ways to bring people into an evaluation conversation. When working with the Healthier New Brunswick Collective Impact Initiative (HNB) in New Brunswick, New Jersey, we brought community members into a meeting to participate in a data party or data walk. We had wall-size charts pinned up with important findings from our evaluation, with two to three discussion questions next to each chart. We asked questions such as, “What do you notice about the level of progress being made in this health area?  Where have we made the most progress?  From your perspective, what are the reasons for that progress? Do any key programs or organizations seem to be missing? Where do the priorities need to be right now?”

This process allowed everyone to obtain an overview of progress in each of the community health areas, helped fill in contextual gaps to strengthen the evaluation, and modeled how the HNB initiative would use this data to think about where more resources and support were needed and/or where strategies may need to be de-prioritized. This community-integrated process informed the subsequent priorities for the city-wide initiative. 

Think about storytelling at multiple levels and with various audiences.   

As we go through the evaluation design and analysis processes, we ask ourselves, “Whose story are we telling? Who should hear the story? What are the most effective ways to share a particular story with different audiences?”  We recognize that there is not a one-size-fits-all approach to reporting on evaluation findings. 

In a recent evaluation of a capacity-building program for librarians from small and rural libraries, funded by the Institute of Museum and Library Services (IMLS), we recognized that to tell the full story, we needed to acknowledge multiple levels of outcomes and experience with the program. Our evaluation looked at how the program impacted the librarians who were directly participating in the program, the capacity increases of the libraries that participated in the program, and the impact of the program on the community in which the libraries were based.   

We considered outcomes at the individual, organizational, and community levels. To tell these stories, we not only tailored questions to understand the distinctions between the different levels, but we also used different reporting mechanisms that would resonate with different audiences with whom the findings could be shared. To tell the story of how librarians felt going through the program, we used a journey map which highlighted the changes in experience at different stages of the program. For library impact, we used case studies to showcase the specific types of projects and the Community of Practice in which each participating library was based. Finally, for the community impact, we used a series of change-over-time charts within our multi-year report.

Consider how to use evaluation findings to increase social impact beyond a single program or intervention.  

Often the main purpose of an evaluation is to feed findings back into the program so that it’s more effective, equitable, and meaningful for the community. We try to go a step further with our clients by asking the following question: How might sharing the findings and stories outside your inner circle of stakeholders bolster your social impact? 

With one client whom we were supporting in establishing an entirely new initiative with integrated grantmaking, capacity building, narrative change, and evaluation, we embedded this critical question into our meaning-making sessions with a community advisory board. As we walked through themes from the formative evaluation, including audio stories from the community, we asked advisors to consider what stories could help bolster our overarching goal of reducing stigma and improving access to HIV prevention and care for Latinx gay, bisexual, and trans men.

Through these facilitated conversations, we helped advisors identify a three-pronged strategy to increase visibility, empathy, and knowledge to combat stigma and connect people to care. The strategy included recommendations to:  

  1. create a visually appealing glossy report in Spanish and English, including audio and photos that uplift the stories and the community; 
  2. host a live-streamed conversation about the findings with community organization grantees, advisors, and community members; and 
  3. produce a bilingual Poesía Slam event, hosted by queer Latinx artists, for National Latinx HIV/AIDS Awareness Day that uplifts the art, stories, and findings. 

This mix of dissemination tactics for the formative evaluation led to increases in website traffic for our client, more grantee applications that referenced the formative evaluation findings, and greater visibility of people and conversations to reduce stigma. At the same time, this approach created professional and economic opportunities for community leaders and emerging artists while fostering deeper relationships essential for long-term and sustainable impact.   

In conclusion, our practices for localizing or centering community in the interpretation and use of our evaluation findings have drawn on techniques such as conducting ongoing mindset checks, getting creative about how to bring people into an evaluation conversation, telling an outcomes story at multiple levels and with various audiences, and strategizing about sharing findings in ways that increase the overall social impact.  

Thank you for joining us in our reflections on how we can localize or center community voice in our evaluation design, data collection, and interpretation and use of our evaluation findings. We hope you have enjoyed this blog series. We are always looking to continue learning and would love to hear what your experiences have been in using a localization approach in evaluation. Please reach out to us here. 

Read Localization Series Part 1 | Read Localization Series Part 2

 
