
2017 American Evaluation Association Annual International Conference

Wednesday, November 8

6:15-7:15pm: OL-ECB8: Evaluation Capacity Building Across Partnerships

Public media is very good at telling stories. However, with the exception of large government-funded programs, it struggles to tell its own story. Conventional wisdom holds that public media has had impact in the communities it serves, but until recently, beyond counting audience members, it has done a poor job of assessing and sharing that impact. TCC Group was engaged by a collaborative that includes Wisconsin Public Television (WPT) and Wisconsin Public Radio (WPR) to support the development of shared evaluation capacity for the organizations. In our presentation, we outline a process developed across several public media organizations for building that capacity, the organizational change resulting from that effort, the involvement of other organizations, and evidence of the impact of the increased capacity. We will share what has worked and where opportunities exist to improve a capacity-building model for public media.


Presenters
  • Erika Lehmann, President, Smith & Lehmann Consulting, Inc. (Chair)
  • Charles Gasper, Senior Consultant, TCC Group
  • Kira Story, Grants Manager, Wisconsin Public Television


6:15-7:15pm: 1839: Rowing against the current: Advocacy evaluation in difficult political contexts

2017 has been a year of political uncertainty and dynamic change at the US federal level and in many states (and around the world, for reasons beyond the US election cycle). As the political environment turned from supportive to oppositional for many advocates, public willingness to engage in advocacy also shifted, with increased advocacy action and greater willingness to participate in larger movements. Advocates cannot ignore these changes. Neither can evaluators. We propose five dynamics that evaluators should consider in order to be good learning partners in difficult political contexts: changes in transparency; expectations for success; field capacity (current and needed); advocacy strategies; and evaluation methods. The panel will be structured as a facilitated discussion among four experts in advocacy evaluation, each speaking to experiences with one or more of the five dynamics.

Presenters
  • Jewlya Lynn, CEO, Spark Policy Institute (Chair)
  • Jared Raynor, Director, Evaluation, TCC Group


Thursday, November 9

8:00-9:00am: 2819: Cultural Competence…Next Steps in Uncertain Times

As evaluators, we are often faced with answering tough questions for our clients, such as those around issues of race, inclusion, and diversity. But how often do we take the time to have reflective conversations about the role we play in the prevalence of these issues? In the current climate, evaluators may find themselves having to be both reflective and proactive when dealing with the complexities of how race, inclusion, and diversity play out in communities. Learning and incorporating Culturally Responsive Evaluation strategies can be a starting point for addressing these issues, but we may need to go beyond that. This panel presentation draws on the insights and experiences of evaluation professionals who embody cultural responsiveness in their work. They will offer strategies that evaluators can use and seek to further the conversation on what roles and responsibilities evaluators have in the broader dialogue at hand.

Presenters
  • Thana-Ashley Charles, Evaluation Consultant, TCC Group (Chair)
  • Dominica McBride, CEO, Become: Center for Community Engagement and Social Change
  • Katrina L Bledsoe, Principal Consultant, Bledsoe Consulting
  • Wanda D. Casillas, Consultant, Deloitte Consulting
  • Monique Liston, Consultant, Derute Consulting Cooperative

8:00-9:00am: 1688: Evaluating Advocacy in the Early Trump Years: What Evaluators and Funders Need to Know

Recent changes in the political climate at the national, state, and local levels have put pressure on organizations to engage in intense advocacy and community organizing, and have created new opportunities for evaluators to support these groups as they struggle to adapt. Advocates and community organizers are focusing more on collaboration, staging protests and marches, and defending (or, for some, ending) recently established programs. This session will provide information from the Center for Evaluation Innovation on how evaluators and funders can best help groups plan and assess their work when old theories of change and logic models may not apply. The discussion will be informed by a recent TCC Group study that identified capacities funders need to effectively support advocacy groups, as well as Alliance for Justice survey results on additional capacities nonprofits need to be effective in influencing public policy.

Presenters
  • Susan Hoechsetter, Senior Advisor, Foundation Advocacy and Evaluation – Alliance for Justice (Chair)
  • Jared Raynor, Director, Evaluation, TCC Group
  • Julia Coffman, Director, Center for Evaluation Innovation

2:15-3:00pm: 2132: Making the Move to a More Dynamic Grantmaking Strategy: How Does Evaluation Keep Up?

When grantmaking follows a static strategic plan with clear objectives, an internal evaluator’s job is relatively straightforward. Increasingly, though, grantmakers are developing strategy that is dynamic and open to interpretation. In these cases, internal evaluators must be flexible, think on their feet, and encourage learning.

Johnson & Johnson’s corporate social responsibility grantmaking team recently undertook a large-scale strategic refresh. As the dynamic strategy evolves, J&J’s evaluation team has focused on being an active thought partner, introducing standardization where appropriate, and framing the right evaluation questions while supporting learning opportunities.


Presenters
  • Laura Hollod, Senior Manager, Monitoring & Evaluation – Johnson & Johnson Global Community Impact (Chair)
  • Lisa Frantzen, Senior Evaluation Consultant, TCC Group


3:15-4:15pm: 1142: Advancing Advocacy and Policy Change Evaluation Practice: Leveraging the Wisdom From the Field

In our recent book, Advocacy and Policy Change Evaluation: Theory and Practice, we describe and compare six evaluation cases that speak to the diversity of advocacy and policy change evaluations, including a range of evaluation designs, conventional and unique evaluation methods, and approaches to informing advocate and funder strategy. Respondents to the 2014 Aspen/UCSF Advocacy and Policy Change (APC) Evaluation Survey identified these evaluations as having been conducted in the previous five years and as containing an interesting methodology or a significant lesson. A primary reason for developing the six cases of evaluation practice was to surface design models in a variety of advocacy and policy contexts. This panel is an opportunity to compare and contrast the six diverse advocacy and policy change initiatives and their evaluation designs, and to discuss with the evaluators of these initiatives the lessons learned that can be applied to evaluation practice. The initiatives are:
  • Initiative to Promote Equitable and Sustainable Transportation (2008–2013), supporting the adoption of policies for equitable and sustainable transportation options;
  • Let Girls Lead program (2009–present), creating a global movement of leaders and organizations advocating for adolescent girls’ rights;
  • GROW Campaign (2012–present), a multi-national campaign to tackle food injustice and build a better food system that sustainably feeds a growing population;
  • International Lands Conservation Program (1999–present), conserving old-growth forests and extending wilderness areas;
  • Tribal Tobacco Education and Policy (TTEP) Initiative (2008–2013), which provided resources and assistance to five tribal communities to pass or expand formal and informal smoke-free policies while increasing community awareness of secondhand smoke; and
  • Project Health Colorado (2011–2013), a public will building campaign that engaged individuals and organizations in a statewide discussion about health care and how it can be improved.


Presenters
  • Annette Lenore Gardner, Assistant Professor, University of California, San Francisco (Chair)
  • Claire Diana Brindis, DrPH, Director, University of California San Francisco
  • Edward William Wilson, Ph.D, Owner, Wilson Consulting
  • Sheryl A Scott, Public Health Consultant, Scott Consulting Partners
  • Carlisle Levine, President and CEO, BLE Solutions, LLC
  • Jared Raynor, Director, Evaluation, TCC Group
  • Jewlya Lynn, CEO, Spark Policy Institute


4:30-5:15pm: 1416: Program Design and Program Theory in Inter-Agency and Intra-Agency Collaboration

Many intervention programs require collaborations among independent agencies with different missions (inter-agency collaboration) or among different levels of agencies with similar missions (intra-agency collaboration). Principles of organizing collaborations among agencies, and factors affecting them, have been discussed extensively in the literature. However, the inherent challenges of implementing those principles have received much less scholarly attention. Working together toward shared goals is difficult at times, especially among agencies with different priorities, and resistance and tension are not uncommon. Problems often originate from whether the agencies can reach a consensual program theory and from the role that theory plays in the design and guidance of planning and implementation activities. Without addressing these issues, collaborative opportunities and activities tend to be ineffective. This panel will focus on these design problems, with appropriate illustrations, and discuss how evaluators can contribute to addressing them.

Presenters
  • Huey T. Chen, Professor, Mercer University (Chair)
  • Charles Gasper, Senior Consultant, TCC Group
  • Aimee Nicole White, Owner, Principal Evaluator, Custom Evaluation Services
  • Jonathan Morell, Director of Evaluation, Syntec

5:15-6:00pm: TIGBM1: Advocacy and Policy Change TIG Business Meeting

Presenter
  • Jared Raynor, Director, Evaluation, TCC Group (Chair)


5:15-6:00pm: TIGBM43: Program Theory and Theory Driven TIG Business Meeting

Presenter
  • Charles Gasper, Senior Consultant, TCC Group (Chair)


Friday, November 10

8:00-9:30am: 2723: Creating Evaluation-Driven Learning Environments: Application of Rapid Cycle Learning within National Leadership Development Programs

Striving toward health equity and a national Culture of Health, four Robert Wood Johnson Foundation (RWJF) programs are working to develop strong leaders across multiple disciplines who collaborate on innovative solutions to persistent health challenges. RWJF and its partners seek to implement these programs in such a way that they continually improve and respond to the needs of the leaders and their communities. Rapid Cycle Learning (RCL) is helping them do just that. Rather than settling for evaluations that provide feedback only as participants leave a program, these programs integrate iterative learning loops to understand which parts of the program are working, where they can improve, and how they should adapt for scaling and changing environments. In this panel, you’ll hear perspectives from RWJF, program leaders, and the evaluation consultant on how RCL has been incorporated into the evaluation of each of these four national leadership development programs.

Presenters
  • Lisa Frantzen, Senior Evaluation Consultant, TCC Group (Chair)
  • Denise E Herrera, Robert Wood Johnson Foundation
  • Gaurav Dave, MD, DrPH, MPH, Assistant Professor – UNC-Chapel Hill
  • Nora Marino, Research Coordinator – University of Minnesota
  • Lydia Isaac, Executive Director, Health Policy Research Scholars – George Washington University


11:00-11:45am: 2443: Strategy or Evaluation: I Hate These Blurred Lines!

As evaluative thinking is increasingly used in the nonprofit and philanthropic sectors, the line between evaluation and strategy is becoming increasingly blurred. As several have noted before us, perhaps most recently Julia Coffman of the Center for Evaluation Innovation in her article “Oh for the love of sticky notes! The changing role of evaluators who work with foundations,” evaluators are being asked to play more and more roles. Some of these roles blend with what would traditionally be thought of as strategy. This roundtable will engage participants in a small-group exercise that walks through a few scenarios we have experienced that blurred the line between strategy and evaluation, helping participants decide whether a bright line can be drawn between evaluation and strategy and understand what role, if any, evaluators have in strategy development.

Presenters
  • Deepti Sood, Evaluation Consultant, TCC Group (Session Facilitator)
  • Lisa Frantzen, Senior Evaluation Consultant, TCC Group

3:30-4:15pm: NPF3: Evaluation in Nonprofits and Foundations: Generating New Insights, Fostering Culture of Learning, and Organizational Change

The John S. and James L. Knight Foundation is dedicated to supporting informed and engaged communities through investments in four program areas: journalism, communities, arts, and technology. In 2016, the Foundation funded four regional or local newsrooms to transition from print to digital-first. Knight engaged TCC Group to conduct a developmental evaluation of the initiative. This session shares the process of building a developmental evaluation, the shifts that occurred in response to informational and support needs, and reflections on how it could be done differently. Both the evaluator’s and the funder’s perspectives will be shared, along with observations on how the evaluation affected the initiative rather than merely reflecting its efforts.

Presenters
  • Karen Jackson, Principal Evaluator (Chair)
  • Janine Quintero, Assistant Vice President of Evaluation and Research, Hathaway-Sycamores Child and Family Services
  • Charles Gasper, Senior Consultant, TCC Group
  • Luz Gomez, Director of Research, Knight Foundation


5:30-6:15pm: 2139: What Are Policy & Decision-Makers Really Thinking?

Many policy- and advocacy-focused evaluators struggle with accessing decision-makers for data collection. Decision-makers are often too busy or unwilling to engage in interviews about their issues. This leaves a black hole for advocacy evaluators who are trying to understand what policymakers’ views are, how those views are influenced, and how evaluands have affected them. In this session, we will share our experiences in collecting data from decision-makers and feature at least two guests from that realm: one focused on policy and one focused on media. While the presenters will provide some facilitated discussion, there will be plenty of time for Q&A, allowing those involved in policy and advocacy evaluation to better understand how to target their own data collection materials and talk to their clients about what is likely to make an impact.

Presenters
  • Deepti Sood, Evaluation Consultant, TCC Group (Chair)
  • Katherine Locke, Associate Director, TCC Group


Saturday, November 11

8:00-9:00am: PTTDE1: Engaging Alternative Theories to Support Program Theory Development

Many community-based normative change programs have proven effective in improving knowledge, attitudes, agency, and service use. The social change effects thought to be core to a program’s impact, though, are rarely defined or rigorously evaluated. Case studies of social-change-for-health interventions from Senegal and Niger highlight the process of experience-based program change theory development and show how theory has helped focus further exploration of evidence gaps in the mechanisms influencing social change. Participatory discussions allow incremental program improvements, maximizing realist evaluation focused on evidence-based theory building and learning. Staff are invigorated by ‘their’ theory and are excited to link it to existing data and evidence. While realist evaluation typically addresses large-scale programs with large service databases, it is also very useful for smaller-scale, community-based efforts designed to be scaled: it can build the nascent evidence base for community-driven health programming that influences normative shifts, and its continued application in expanding program contexts can check theory validity.

Presenters
  • Charles Gasper, Senior Consultant, TCC Group (Chair)
  • Susan Igras, Senior Technical Advisor – Institute for Reproductive Health, Georgetown University
  • Lauren Michelle Berny, Evaluation Associate, Centerstone Research Institute
  • Lauren Wildschut, Lecturer, Stellenbosch University

Host

American Evaluation Association
