To be most effective, grantmakers need to think about the practices that matter most to nonprofits and how their behavior impacts the nonprofits they serve. Grantmakers for Effective Organizations (GEO) has been working hard to listen to and learn from nonprofits around the country to better understand what grantmakers can do to help them achieve better results. Throughout our strategic planning process in 2017, GEO met with dozens of nonprofits and grantmakers and convened a Nonprofit Advisory Council made up of 14 nonprofit leaders from around the country. Through these conversations, GEO learned in greater detail what it takes to be effective on the ground and in communities and was able to prioritize work that supports nonprofits.
While these nonprofits provided input on lots of topics, many of the nonprofit leaders had the same key question:
What do you even do with all that data you collect?
This post answers that question as informed by GEO’s field survey results, discusses why it’s important to consider all the uses of data collected through both evaluations and grant reporting, and provides a few guidelines for grantmakers who want to change their behavior to better support nonprofit effectiveness.
Background on GEO’s Field Survey
Since 2003, GEO has conducted a periodic survey of all staffed grantmakers in the United States. This field survey focuses on the practices that lead to nonprofit success, like strengthening relationships, providing flexible and reliable funding, capacity building, collaboration, and learning and evaluation. In 2017, GEO heard from about 650 grantmaking organizations, including independent, community and corporate foundations of all sizes and from all regions of the country.
What do we mean by Learning and Evaluation?
For the purposes of this survey, GEO defined evaluation as “the systematic process of asking questions, collecting information, and using the information to answer those questions.” The shape of this evaluation may look different for each grantmaker – some may prefer to partner with a third-party evaluator, while others may conduct their evaluations in-house. Some grantmakers may think of their evaluation practices as being separate from the reporting process, while some may use the information in their grant reports to inform their broader evaluation and learning strategy. For others, grant reports provide the primary source of data from grantees and are used both for individual grantee assessment and for evaluation at a portfolio or program level.
In order to be effective, it is important for all organizations to learn, reflect and improve. However, grantmakers should remember the key purpose of evaluation and learning. As we state in our publication Four Essentials for Evaluation, it is about “advancing knowledge and understanding among grantmakers, their grantees and their partners about what’s working, what’s not and how to improve their performance over time.” We shouldn’t be in the business of evaluation for its own sake; instead, we should use evaluation and learning to help us answer key questions. And as grantmakers and nonprofits, we are out to answer some big questions. As grantmakers and nonprofits partner in this work, it is critical to remember that evaluation is only successful if nonprofits are able to learn from it as well.
Through the data available in the field, both from GEO and other infrastructure groups, one common theme emerges: funders are missing key opportunities to further support their grantees through their use of evaluation and reporting data. In order to be more effective, we need to make some changes.
What did we find?
Some of the results in GEO’s 2017 field survey are encouraging. Nearly 80% of grantmakers are evaluating their work, and while we think it is important for all grantmakers to evaluate the work they fund, this clearly shows that evaluation and learning are norms in the field. Corresponding research from the Nonprofit Finance Fund in 2015 shows that almost 90% of nonprofits say they are asked to collect data and capture the effectiveness of their programming.
But let’s revisit the question from the nonprofits: what are we doing with all this evaluation data? The results here are disconcerting. According to GEO’s 2017 field survey, less than half (45%) of grantmakers are sharing their evaluation findings with grantees or stakeholders. The same percentage (45%) are sharing their results with other grantmakers. The most common uses for evaluation findings were to serve internal purposes, like reporting to the board (90%) and planning or revising strategies (61%). These responses have remained steady over time.
This echoes research from other infrastructure groups as well. As PEAK Grantmaking has discussed, “funders are missing the connections, lessons, and relationships that grant reporting could and should be making.” A 2017 PEAK survey found that grant reports are mostly used for purposes that primarily serve the foundation, like “accountability, documentation, and individual PO learning – not field building.”
This stark divide between how grantmakers use evaluation and reporting data internally and externally suggests that we are using our resources and positions of power to serve our own purposes, while ignoring the learning needs of our nonprofit partners and disregarding the insights they might bring to enrich our learning. In order to increase impact in our communities, we must do better.
What can we do?
To change this trend in how evaluation data is used, grantmakers need to change the way we think about learning and evaluation. In GEO’s 2012 publication Four Essentials for Evaluation, there are four guidelines that can help grantmakers share and collaborate with nonprofits, other grantmakers and community partners to ensure evaluations produce meaningful results:
1. Learn with (and from) your grantees
- Use nonprofit cohorts, learning communities and other strategies to make sure we have our finger on the pulse of what nonprofits are learning.
2. Reach out to other grantmakers
- Bring other funders to the table so we can ask questions and share what we’re learning about what works and what doesn’t.
3. Get aligned
- Work with other grantmakers, nonprofits and other partners to make sure we’re trying to answer some of the same questions and to streamline evaluation and reporting for nonprofits.
4. Learn in public
- Don’t wait until a final, glossy, post-project report to share what we’re learning. Share our lessons along the way.
In addition, grantmakers in all roles can ask questions about their organizations’ evaluation practices:
- Consider your organization. How do you currently use evaluation data?
- Recognize some of the alternatives. How else could you use your evaluation findings? What opportunities are you missing?
- Think about the big picture. Who is benefiting from the evaluation data you collect? Are the results kept “in-house,” or are you sharing them with your community and other stakeholders?
By taking a more critical look at how we use the information we have at hand, grantmakers can play a leading role in helping nonprofits achieve their missions and helping our communities thrive.