PEAK Grantmaking

How Surdna Centers Trust and Learns With Grantee Partners

When foundations seek to improve and assess the impact of their work (and not all foundations do), questions abound. Some look inward, asking: Does our portfolio of grants support our strategy and intended organizational outcomes? Are we moving the needle? Is our work still relevant in a constantly changing world? Others grapple with the approach: Should we focus on learning or evaluation?
All of these questions are important to the ongoing work of any foundation. But taken alone, they often miss an essential element: the grantees.

Over the years, the Surdna Foundation has developed a culture of learning in service of grantees that has been guided by three fundamental questions: What can we do, beyond giving grants, to help partners solve societal problems? How can we learn with and in service of grantees to accelerate racial justice? How can we remove barriers and inefficiencies to our processes? We believe these questions will lead us to better results in the long run. For us, the journey is as important as the destination.

The path to learning together

Learning and evaluation aren’t mutually exclusive. Both support accountability and continuous improvement. But the words themselves are loaded.

For some, “evaluation” connotes a sense of objectivity with clear, data-driven results. To others, it implies that foundations are judging grantees for hitting predetermined metrics, as though grant outcomes were sales quotas. “Learning,” on the other hand, can feel more approachable and open to ideas that transcend metrics—or it may seem like a soft science, lacking in rigor. Most foundations choose their approach based on philosophy, organizational culture, and, to a lesser extent, the nature of the issues they are working to address. For example, a quantitative evaluation process might be better suited to a funder trying to determine the efficacy of a drug trial than to one working on racial justice. But most funders can benefit from hard and soft data alike.

Finding our comfort zone in terms of measuring our progress and impact has been a long and winding path, and not without some potholes along the way. Though we all agreed that accountability and improvement are important, there were competing tensions about how best to approach measurement: different opinions about using (presumably objective) data, fear of how the data would be used (“Is the data being used to judge my work?”), concerns about how the data might affect future funding decisions or program strategies, and even questions of who should have the power to make these decisions.

There have been a number of well-intentioned but ultimately failed approaches to measuring success over the years. So what were we missing? In all of these efforts, we overlooked three things: centering grantees and giving them voice and power, clearly articulating why measurement mattered to us, and building trust and buy-in between program staff and management.

One constant in our work: We know how to make grants, and we’re pretty good at it. We are at our best when we focus on identifying practitioners making change in their communities, creating networks of support for them, allowing them the freedom to do their work with a minimum of bureaucracy, and providing resources beyond money whenever possible. And, of course, we are committed to distributing funding effectively and efficiently, and supporting our grantees to do their work to the best of their ability.

Following the outset of the pandemic, Surdna and many other funders leaned further into trust-based philanthropy, a sort of philanthropic Hippocratic Oath—“first, do no harm”—that has been gaining traction in recent years. This long-overdue shift led many foundations to do away with nonessential grant reports and other paperwork, offer more general-operating and multiyear support, and streamline their grantmaking processes.

One interpretation of trust-based philanthropy is the idea that we should make the grant and get out of the way. But despite the sector’s self-flagellation for bad practices, there are a lot of smart, hard-working, caring people in the foundation world, including many who spent years working in the very fields they now support. They can—and do—provide a lot of resources to grantees beyond the grant, including sharing their knowledge. Two questions remain: What do our grantees need to grow and learn so they can continually improve their effectiveness? And how can foundations support their learning journey?

Sharing knowledge, impact, and lessons

Working at the intersection of trust-based philanthropy and shared learning for the past few years, we’ve come to a few conclusions about what works well and what doesn’t.

First, creating vehicles for listening, learning, and power-sharing leads to trusting relationships with our grantee partners, and helps to create the conditions for collaborative learning. Here are some examples that we have found to be particularly effective:

  • Town halls. While our programs have always connected grantees with each other through conferences, convenings, and calls, we have formalized that practice by hosting virtual town halls throughout the year. During these meetings, program staff and grantees share information about their work, fields of interest, challenges, and how Surdna can be most helpful, providing real-time feedback and deepening trust. These forums also play a vital role in connecting grantees for knowledge sharing and in holding ourselves accountable to grantee partners.
  • Learning cohorts. Currently a pilot program of our Inclusive Economies Program, the learning cohort convenes a group of seven to ten grantee partners to foster relationships and build projects together in ways that elevate insights to the entire portfolio of grantees and the field. Participants will receive a stipend as well as a grantmaking budget to recommend funding for projects that the foundation could not do on its own. We will evolve the program as we learn more.
  • Anonymous feedback. Our largest effort for obtaining honest feedback is our participation in the Center for Effective Philanthropy’s Grantee Perception Report. Of course, feedback is only as good as our willingness to make changes based on the findings and to share the results with transparency—good and bad. For example, in our recent Grantee Perception Report, our grantees noted that we could communicate more clearly, consistently, and responsively. As a result, we have taken several steps to address those concerns, such as providing better descriptions of our programs and strategies through our website, and communicating proactively with grantees through a new newsletter.
  • Participatory grantmaking. Efforts to transfer power to those closest to the problems Surdna aims to remedy have produced new ideas, solutions, and insights from which everyone can benefit. Two examples are the Visionary Freedom Fund and the Amplify Fund. The former was launched by the Andrus Family Fund (a part of Surdna) to strengthen the conditions necessary for youths who identify as Black, Indigenous, or people of color to thrive. The latter is a national, place-based, pooled fund that allows communities to have a more powerful voice in the community development decisions that directly affect them. Both funds ensure that movement partners are at the decision-making table.

Our second insight is that data is essential to learning, but it must benefit grantees as well as funders. We need to give grantee partners and other stakeholders a seat at the table when deciding what data to collect, and how it will be used to understand progress toward our collective goals.

  • Metrics and indicator data are for learning, not judging. Together with grantees, we determined what outcomes we are aiming for, what we need to know about the progress of grant activities, which field-level indicators matter, and, above all, how that information can be used to help grantees, not judge them. To that end, we invited grantees to engage with us to co-develop our program outcomes and strategies, compensating them for their time, and then spent the better part of a year determining the data that mattered the most for tracking the wins, challenges, and lessons along the way.
  • Sharing is caring. The metrics and field-level indicators we collect (both quantitative and qualitative) don’t go into a black hole, the way so many grant reports traditionally do. Instead, our staff compiles and analyzes the information, looks for insights and trends, and then shares what we’ve learned with grantee partners. These learnings can inform our feedback loops, such as the town halls and learning cohorts mentioned above, and areas we need to emphasize in our programs.
  • Data informs support beyond the grant. Alongside efforts to streamline the grant application and reporting process, we are looking at ways to use this information to provide support beyond the grant. For example, we recently changed our grantee financial review process, relying on publicly available documents for the bulk of grant applicants rather than customized budgets. With tools developed by Surdna and BDO FMA, we automated our financial review process, saving a lot of time for Surdna staff and for grantees, most of whom no longer need to provide any financial information. Instead, we spend our time on the organizations that need our help. Rather than disqualifying organizations that may be facing financial headwinds, we offer 1:1 financial planning consulting through BDO FMA, or a financial consultant chosen by the grantee, at no cost to them. An additional benefit of the tools we developed is that they provide us with data we can use to measure the financial health trends of our grant portfolios, helping us understand whether Surdna’s grantees are becoming more or less financially stable over time.

Based on the early success of this program, we have launched the Resilient Organizations Initiative, which offers a suite of capacity-building tools that grantees have asked for, including fundraising and technology planning, with expansion into other areas in the future.

The most important element is trust!

While we could attempt to devise a learning system based on irrefutable data, the result would likely be costly: information designed to serve narrow purposes, stripped of context, and a set of top-down, extractive relationships.

Will we reach a point where our learning provides incontrovertible proof that our grants will achieve the lofty outcomes we set out in our foundation’s mission and program strategies? Not any time soon. Both our grantees and Surdna will make mistakes, fail to anticipate changing landscapes, and need to reconsider which data matters from time to time. But we will be better informed than we were before, and it is our intention to help our grantee partners be better informed as well. By building trusting relationships with our partners in the nonprofit sector and philanthropy, we will make steady progress, learning along the way what works and what doesn’t. And that will benefit us all.


Surdna’s Inclusive Approach to Learning

In phase one of a new pilot program, the Surdna Foundation’s Inclusive Economies and Learning Grants Operations teams joined with a dedicated group of grantee partners to cocreate a set of metrics and indicators that measure progress toward collective goals.

Measuring Together: A Learning Approach for Inclusive Economies reports on key takeaways from the first year:

  • Measurement for learning, not proving impact. The pilot focused on establishing a system for learning, rather than evaluating whether grantee partners reached their targets.
  • Timing matters. This pilot coincided with the pandemic and an influx of temporary federal dollars, illustrating that while the snapshot of learning around these metrics is important, we need to look at a long arc of data before we can draw conclusions about field-level impact.
  • Economic narrative change is an emerging theme. Several organizations reported on narrative change work, highlighting issues such as preemption, fair wages, and working conditions.

Read the report at surdna.org/measuring-together