
Re-Shaping Evaluation in Today’s World

Lisa Frantzen, Associate Director, Evaluation and Learning

Deepti Sood, Associate Director, Evaluation and Learning

Charles Gasper, Senior Consultant, Evaluation and Learning

Stephanie Coker, Senior Consultant, Evaluation and Learning

We, as evaluators, have a responsibility “to help transform the systems, policies, and practices that have created today’s challenges, and help build toward a more equitable, sustainable future.” This was the overarching message at last week’s American Evaluation Association (AEA) conference. The conference focused on three significant shifts that are currently influencing the social sector and our evaluation practices:

    1. Equity, social justice, and decolonization – asking ourselves how we can bring a deeper understanding of systemic oppression to our work and looking at our own positionality and where power may need to shift. 
    2. New actors and social finance – new forms of financing social change work are bringing new actors that are, or could be, commissioning evaluation. 
    3. Digital data and methodology – as technology continues to evolve, we must be aware of new data sources, data types, and data tools and how to use them responsibly. 

Our team was fortunate to have many opportunities to share what we’re doing and learning alongside our clients. From poster presentations to leading sessions on topics like advocacy, localization, and leadership development, we came away energized by the many ideas exchanged with our peers on these three themes.

So, to continue the learning cycle and to hold ourselves accountable for how we will use these learnings, we are sharing some of our reflections from the conference.

1. Equity, social justice, and decolonization 

Lisa Frantzen:

In order to achieve social impact, local actors must be involved from the beginning. This means not waiting until the data collection phase and extracting data from them for our own uses. Those most affected by the issue being addressed need to be involved in, even leading, the needs assessment and program design phases. As outsiders, we cannot fully understand the contexts of all communities, and therefore cannot completely understand what the solutions are or how to interpret progress toward our desired outcomes. We need to question the current frame and ways of doing things. The chosen interventions and the lines of inquiry covered in the evaluation need to be community-centered. This is the only way we will get to impact.

As evaluators, this can be hard to influence, particularly if we are not brought in during the program design phase. In our work with Girls First Fund (a donor collaborative supporting community-led initiatives to end child marriage), we built on a year-long community engagement process they had undertaken in each of the countries in which they work to understand the specific barriers each community faced in supporting girls’ rights and ending child marriage, and to define what relevant interventions would look like. We then built an evaluation framework that allowed flexibility for each of the communities to define what success looked like and which outcomes were realistic in the short versus the long term. Involving local actors throughout the evaluation is something we have been thinking about a lot, and we are sharing our thoughts in this blog series.


Slow down the timeline. We need to recognize the urgency of addressing the problems while keeping our expectations realistic for achieving community- and systems-level changes. Social change takes time, yet so many of our philanthropic processes are set up to report immediate results, end funding promptly, and rush the timelines for beginning the work. To do the work in a way that creates sustainable change, local actors must be involved (see above), and it takes time to build understanding and trust.

In a project that we led with Presence Health (now AMITA Health), we mapped out a series of logic models to reflect their various approaches to health equity work across communities. Then, working with representatives from each community, we heat-mapped the logic models to more accurately reflect the interventions prioritized by each community and which outcomes and timeframes were relevant for their particular contexts.

We need to stop thinking in terms of us versus them and think about how we collectively bring together our different pieces of knowledge to create social change. As outsiders to a community, we need to consider how we bring our skills in service of a community’s agenda and re-conceptualize our definition of rigor in our evaluation methods. Achieving true social change requires us to center people’s lived experiences as indicators of change. Alongside the importance we place on rigorous evaluation methods, we need to place equal importance on effective community engagement.

2. New actors and social finance 

Deepti Sood: 

The ability to meet people where they are remains crucial. Several sessions and talks on social finance – financial sources focused on advancing social impact – mentioned how mission measurement in these spaces is highly standardized, owing to the history of how financial measurement happens. Evaluators moving into this space will need to explain how mixed-methods and customized evaluation approaches can strengthen overall impact measurement. Our team will put these skills to use this year as we work with a Southern foundation to evaluate a capital-access program and unpack what impact greater capital can have on women and entrepreneurs of color. While the foundation has analyzed financial data using quantitative methods in the past, it hasn’t yet engaged qualitative or narrative measures of change, so our work will aim to lift up both in our approach to evaluating impact.

New questions will emerge with this scaling of evaluation. Dollar figures in the trillions were mentioned as a representation of the field of social finance – a huge arena that can literally impact the world and where opportunities abound for evaluators to bring rigor, values, accountability, and learning. After the social finance sessions I attended, my brain was swimming with questions. How can evaluators move the field toward decolonizing evaluation as it moves into capital markets? Where will evaluators fall on impact measurement related to problematic actors? How will we stay true to our own personal, organizational, and field values while also making space to build trust and relationships with new actors? How will we balance creating buy-in from these new actors with accountability? I’m looking forward to further refining these questions and starting to seek out some answers.

Charles Gasper:

The concept of an evaluator as a critical friend is changing. New actors are coming into the world of social sector programming and evaluation, including groups such as data scientists and social financiers. Additionally, the predominant methods of evaluation and knowing are being questioned. While we have been “here” as evaluators before, things are a bit different now. As a young evaluator in the 1990s, I recall the conflict of quantitative versus qualitative methods – both systems of knowing, but also both camps insisting that the other’s methods were wrong. Through extremely lively negotiation, mixed-methods evaluation (using both quantitative and qualitative methods) became the standard for what was considered good evaluation. Similarly, as that young evaluator, I encountered the challenge of misuse of methods, seeing firsthand how misunderstanding and subsequent misuse of a quantitative analytic method resulted in misinformation. These challenges persist with the increased participation and contribution of new actors, who bring their own biases, their own methods, and their own needs to the table. As a result, our role as evaluators is evolving. We no longer own the space of methods. We are no longer the gatekeepers of data we once were. Instead, we are transitioning more and more into the role of advisor, ethicist, and mentor – working to facilitate the process of evaluation. We are being asked to critically review the evaluative questions, to the point of asking whether the question is appropriate. We are being asked to deeply consider who is involved.

In our work with Camp Arrowhead, we were tasked with creating a theory of change for the organization. Traditionally, we have worked with a small group, often consisting of board and staff members and, where possible, a few program participants. Instead, we used a method by which all previous program participants and their parents, vendors, current and past staff, and board members were offered the opportunity to voice their thoughts on the meaningful activities and outcomes of the Camp. Our subsequent modeling of the programming then reflected all of their thoughts. This in turn provided the client with outcomes they otherwise would not have considered, enabling them to better articulate both the program’s focus and its impact on outcomes that are meaningful to participants.

3. Digital data and methodology

Stephanie Coker:

Responsible data governance is key to equitable and culturally responsive evaluation practice. Data privacy and security are topics that evaluators come across fairly often. These topics are generally approached from a technical perspective, and conversations about data security can sometimes lack a connection to the human experience. One striking takeaway from a presentation on “Trends in African MERL Tech: Insights from a Landscape Scan” was how informed consent (to share data) aligns with African feminist thinking about the ethics of data. The presentation was also markedly reminiscent of another session, “A Trauma-Informed Approach to International Research and Evaluation,” in its focus on ‘do no harm’ in research and evaluation. It was heartbreaking to hear about the recent missteps in data governance during the U.S. military withdrawal from Afghanistan that have shaped the discourse on this topic, bringing into sharp focus how data can have real-life negative consequences. Another learning was the need for evaluators to have more training on data regulations and to work more deliberately with legal and compliance experts to safeguard data. These takeaways align with ongoing conversations in open data and open source technology, and reflect an important shift in evaluation practice around the world.

Evidence democratization for solving complex problems is getting easier. The next generation of tools for evidence aggregation has arrived and is, surprisingly, fairly accessible to non-technical audiences. While thousands of research studies are conducted every year to determine the impact of program or intervention activities on outcomes, far fewer studies are dedicated to aggregating data for actual use by those working on the front lines to create innovative approaches for tackling societal dilemmas. Systematic review is one method for aggregating data, but it is often so complex and time-consuming that few social sector practitioners ever get the opportunity to practice what would otherwise be a critical step in creating new programs. Many systematic reviews also typically feature overall program effects without identifying which particular combination of strategies led to a positive outcome. A key feature of an AEA presentation on “Advances in Systematic Evidence Review Methods and their Implications for Democratizing Evidence” was an introduction to new tools for review that would help social sector professionals ‘become smarter – faster’. The session introduced Core Components Analysis, a methodology that focuses on the specific aspects of interventions that make up a program and is more applicable for practitioners seeking to improve existing programs rather than start from scratch. Expert panels are also an often-used method for reviewing evidence and making recommendations for practitioners based on experience. A more culturally responsive application of this method would involve community elders or traditional knowledge holders as part of expert panels, acknowledging that intellectual expertise comes in many forms. These newer approaches showcase responsiveness to a general demand for actionable and inclusive practices in evaluation.

Our team is still processing and reflecting on how we can more consistently apply these concepts in our work with nonprofits, foundations, companies, and government agencies. We commit to continuing to hold the bar high in our own accountability to living our values and learning alongside the partners and communities with whom we work. We hope that you find some of our reflections useful, and we welcome your thoughts on how we can collectively address our changing times and create the just and equitable world in which we want to live.

