There’s a longstanding tension about the role of evaluation in foundations.
While there’s been a great deal of discussion about the need to document the results of foundation grantmaking, there is actually not a great deal of evidence that evaluation is being used in meaningful ways.
For example, in a recent report by the Center for Evaluation Innovation, a majority of foundation evaluation leaders “reported that senior management engagement with evaluation was poor or fair in supporting adequate investment in the evaluation capacity of grantees (67%) and in modeling the use of information resulting from evaluation work in decision making (57%).” The same research identified producing meaningful insights for the foundation as the biggest challenge with evaluation, with producing useful information for the field or for grantees close behind.
Foundations typically contract with external evaluators to actually carry out the evaluation work. Over the past two years, a group of evaluation consultants and foundation staff who lead evaluations have been meeting to identify ways to develop a more diverse cadre of evaluators to work with foundations. The Funders and Evaluators Affinity Network (FEAN) has noted that there is a general lack of understanding among evaluators about how to work with foundations.
So we have foundation leaders who are not using data, evaluations that are not useful, and evaluation consultants who do not understand how to work with foundations. Why is it so hard to get high-quality, useful evaluations in philanthropy? There are several underlying issues.
There may be a lack of clarity about what is being evaluated. In most evaluations, the evaluator is evaluating a program, seeking either to improve it by providing formative feedback on its implementation or to render a summative assessment of its outcomes: Was the program successful?
When approaching evaluation from a foundation lens, however, an evaluation may need to address multiple questions focused on different aspects of the grantmaking. Questions that might be addressed include:
- the overall impact of the foundation,
- the alignment of the grantmaking to the strategy,
- the impact of a group of grants on a field of work,
- the impact of a set of related grants on a particular geography, or
- the impact of different types of grants (e.g., general operating support, single- or multi-year grants, capacity building grants) on grantees’ effectiveness.
Addressing any one of these questions is a significant challenge, but it becomes nearly impossible to produce a useful evaluation if the question of what is being evaluated isn’t clarified up front.
Evaluation is a tool. Evaluation is one of the tools that foundations can deploy to help in achieving their strategic goals. It should provide data that informs strategy management by testing assumptions and monitoring the context.
Too often, evaluation gets treated as something that stands apart from the work and offers judgment. When used in this way, evaluation is inevitably going to be seen as late and irrelevant. The Equitable Evaluation Initiative (EEI) provides an example of evaluation used as a tool to support equity. Rather than being used only to assess whether there has been disparate impact or whether the “right” people were reached by an intervention, EEI has developed a framework for how the evaluation process itself can be conceptualized as a tool to deploy in working towards equity.
Foundations are organizations – and they have organizational politics. Sometimes the strategy and grantmaking that actually get implemented are the result of a compromise between groups within the organization with competing priorities. Sometimes they reflect a donor’s or board member’s passions or beliefs about what will make a difference. Choices about which activities to fund are made based on the funds available, not necessarily on what will be most impactful. A foundation board that is the final decision maker on every grant has more influence over the staff’s day-to-day work than most organizations’ boards. And board members may be far from the work and hold unrealistic expectations about what can be accomplished. Proposals and strategy documents are poetry, but the evaluator has to work in the prose of what is actually happening.
Foundations have multiple staff who play roles in evaluation. Programs are typically overseen by a program officer or program director, who may or may not be responsible for evaluating the program. The evaluation may be managed instead by an evaluation director or officer. The grants management office has responsibility for setting up reporting systems that gather the needed data – but reporting systems may be set up before the evaluation design is finalized. Aligning these internal players is another facet of the evaluation challenge.
They also have unique cultures. Each foundation has its own culture that influences everything – including how evaluation is approached. Grantmakers for Effective Organizations (GEO) (2015) identified three organizational culture models that influence foundations, often arising from a foundation’s origins: banks, universities, and businesses. Each type of culture has its strengths – an understanding of risk assessment, intellectual rigor and use of data, and a focus on results, respectively – but each also has drawbacks. Bureaucratic processes, compartmentalization, and a focus on financial rather than community outcomes are holdovers from these cultures that can impede foundation effectiveness.
So, how can foundation evaluation be improved?
Invest in getting more and more diverse evaluators familiar with how foundations work.
Because evaluation consultants are often for-profit firms or individual consultants, foundations have not invested in building their capacity. Yet they are a critical part of the sector’s infrastructure. Supporting mentoring and training for these consultants is in the sector’s interest.
Reframe evaluation as a management tool.
Evaluation should be built into the program strategy – not standing apart as an independent observer, but serving as a function whose role is to bring facts and data to bear on implementation.
Have internal conversations about your culture and how evaluation fits into it.
What kind of data does your foundation value? How does the board think about evaluation? How do internal power dynamics influence programming and evaluation? Patton’s Theory of Philanthropy may be a useful framework for these conversations.
Spend time up-front on the difficult conversations about what it is you are evaluating – and why.
Are you trying to develop a model program that will be replicable and scalable? Or are you trying to support change in a particular community? Are you interested in how your grantmaking is building or influencing a field of work? If the answer to all of these is “yes,” you may need to dig deeper into your strategy and theory of change before you are ready to effectively use evaluation.
Reference: Center for Evaluation Innovation. (2020, January). Benchmarking Foundation Evaluation Practices 2020.