
Learning and Evaluation: A Conversation

This issue of GMNsight is co-produced by GEO (Grantmakers for Effective Organizations).

Michelle Greanias is executive director of Grants Managers Network. Kathleen Enright is the founding president and CEO of Grantmakers for Effective Organizations. Both organizations are dedicated to promoting those grantmaking and management practices that lead to more effective grantees. To kick off this issue of GMNsight focused on learning and evaluation, Michelle and Kathleen sat down to discuss some critical questions about philanthropy’s use of and approach to evaluation.

GEO’s 2014 national survey of staffed foundations reported that more than 75 percent of grantmakers evaluate their work. Why is a focus on learning and evaluation so important?

KE: Most of us got into this field for one reason: to make the world a better place. Learning and evaluation are critically important because they help us understand the progress we are making in the communities we serve. Without an intentional approach to learning, we’re just stumbling along without the insights we need to make better and better decisions over time.

Learning presents us and our grantees with an opportunity to reflect on our work and improve what we’re doing. There’s an important shift under way: away from evaluation as merely a compliance tool, and toward evaluation that supports our own evolution and growth.

And this clearly isn’t a fringe idea: With so many inexpensive and readily available evaluation tools, it’s no wonder more and more grantmakers are pursuing learning as a critical component of their work.

MG: I couldn’t agree more. I am excited to see the field evolve from the “stack of reports on a shelf,” one of the top 10 flaws in grantmaking practices that launched the Project Streamline initiative, toward real learning and evaluation.

I think a critical factor in this shift is the increasing availability and accessibility of data. Technology has brought efficiencies to the grantmaking process, allowing grants managers to move beyond data management into a whole new level of value-added work that combines their technological, analytical, and communications skills.

We’re seeing a definite shift toward grants management becoming the hub of information and learning, turning data and information into actionable knowledge for their organizations. This is breaking down silos in funding organizations. While this poses some temporary challenges in defining roles and responsibilities as learning and evaluation move beyond program staff, I think the benefits of engaging entire teams in learning will far outweigh the momentary confusion.

It’s great that almost all grantmakers are evaluating their work, but is all evaluation created equal? Are there places where we are falling short or missing information, and why?

KE: That’s the thing: when we treat evaluation only as a way to assess our own work, or only as a way to learn about the outcomes of the work we funded, we miss the context of what our grantees and partners are doing. It’s like looking at a Monet painting close up: we can see our own individual brushstrokes, but we miss the beauty of the full scene.

MG: Communication and openness will be critical to the future of philanthropy. I believe that grantmakers who continue to develop grantmaking strategies, make grant decisions, and assess their impact in a bubble of self-generated information will find themselves left behind by this field-wide push to maximize the impact of philanthropic investments.

KE: That’s exactly right: We’re learning, but too often we’re doing it in a vacuum. Fewer than half of the grantmakers we surveyed in 2014 shared their learning and evaluation findings with other grantmakers, their grantees, or other community partners and stakeholders. And in many cases, we hear of grantmakers who design onerous evaluations for their grantees without considering what the grantees need to learn themselves and whether they have the capacity to pursue evaluation of that depth.

We’ve known for years how critically important evaluation is for us grantmakers, but it’s as if we pursue the work selfishly at times. We found that 87 percent of grantmakers use evaluation to report to their board and 65 percent use it to plan and revise strategies. That’s wonderful, but this is information that our grantees and the communities we serve can also be using to grow and improve. Or, frankly, they may have a different take on what the evaluation results mean, one that we will never hear if we keep our evaluation work internally focused.

I also think that what we have to keep front and center in these conversations is that we are really talking about the grantee’s results, not the grantmaker’s. A grantmaker can evaluate how it makes its grants, but the outcomes of those grants belong to the grantees.

As grantmakers, we can offer the types of tools that nonprofits need to evaluate and improve their work, but far too often we fall short. So few of us offer evaluation capacity-building support that it’s no wonder many of our grantees struggle to evaluate their work. Until all participants in the evaluation chain embrace and support a learning focus, I think our ability to use evaluation results to increase impact will be limited.

We collect a lot of data and information from our grantees. How do we know if we are taking the right approach to learning and evaluation?

MG: I feel true momentum building in the field. On the individual level, more grantmakers are taking a step back and questioning how they’ve approached evaluation in the past. We see grantmakers pushing themselves to think carefully about what really matters in assessing impact and what is practical for a grantee to provide. It even extends to practical decisions, simple changes that can vastly improve evaluation, like aligning report deadlines with a grantee’s program calendar so that we get better information about the results of our investment rather than a report on what happened 12 months after a payment was made.

KE: There’s no one-size-fits-all approach to evaluation. Grantmakers in different areas, working on different issues and with different grantees, will find that certain practices are more successful and others less so.

What we have found is that there are certain evaluation practices that can really kick-start and amplify the effectiveness of any evaluation approach. There’s none greater than ensuring your learning approach engages and involves all of the players in your work, from your grantees to your community partners.

By bringing in real learning partners, whether they are grantees, fellow grantmakers, community members and leaders, or government and private-sector entities, we have the opportunity to design better learning approaches, learn more, and ensure that everyone involved in the work is improving. We can work with grantees to design evaluation processes that allow everyone to learn. We can tap our community partners to help us collect a wider swath of data and results. We can call on experts in the field to help us dig into and analyze the work. And by sharing our results broadly, we give everyone the opportunity to work better and smarter.

MG: We are definitely seeing more interest in exploring collaborative grantmaking and collective impact among grantmakers. The Bridge Project, Simplify, and other efforts to create the building blocks and systems for data sharing in the social sector will make it easier for us to tap into the wider world of shared learning that Kathleen is describing. I wonder and (if I’m honest) hope that this will lead to a whole different way of approaching our work in the future. Imagine what we could accomplish if we redirected the resources currently devoted to capturing data and information in applications and reports, much of it already available, toward understanding and acting on what is learned. What could we achieve then to make the world a better place?