Outcomes are great for telling you where you’ve been, but what about where you’re going?

It seems impossible today to attend a webinar or conference on grantmaking without hearing about the growing importance of Outcomes or Impact Measurement. In the for-profit world, however, Predictive Analytics and Big Data seem to be what everyone’s talking about. Why is that?

Predictive analytics helps Target know when its customers are pregnant, and Amazon uses it to stock exactly the items customers will order, just before they order them. Of course, the great promise of these two technology trends can be seen in the development of the self-driving car. These vehicles will use information from billions of data points, combined with machine learning modules, to create a vehicle that’s efficient, safe, and capable of taking passengers to a destination down the block or across the country (or that’s the hope, anyway).

What many grantmakers are starting to ask now is “can predictive analytics play a role in my grantmaking, and, if so, what’s that role?”

Analytics gives us the ability to look for trends in data. Predictive analytics uses those trends to build models that can predict future events. Of course, because it’s the future, we have to live with the chance of being wrong. Naturally, the more certain we are that a prediction is accurate, the more valuable that predictive model becomes. Simply put, a model with 96% certainty is a ‘better’ model than one with 85% certainty.
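To make that comparison concrete, here is a minimal sketch of how a model’s accuracy might be measured against what actually happened. The grant outcomes and model predictions below are entirely invented for illustration:

```python
# Invented example: comparing two hypothetical models by accuracy.
actual       = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]  # 1 = grant succeeded
model_a_pred = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]  # one miss
model_b_pred = [1, 0, 0, 1, 1, 1, 0, 0, 1, 0]  # three misses

def accuracy(predicted, observed):
    """Fraction of predictions that matched what actually happened."""
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

print(f"Model A accuracy: {accuracy(model_a_pred, actual):.0%}")  # 90%
print(f"Model B accuracy: {accuracy(model_b_pred, actual):.0%}")  # 70%
```

In practice, accuracy would be measured on grants the model never saw during training, but the principle is the same: the model that is right more often is the more valuable one.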

From there, the next question that arises is: “how can we create the best models possible so we have more confidence that our predictions are correct?” Luckily, there are steps we can take to tackle this question.

First, and probably most important, is to ensure we have a common, well-defined objective. While this might seem obvious, it’s often not that simple in practice. Questions like “what do we want to predict?”, “how do we measure it?”, “what do we do with the results?” and “where can we go wrong?” plague even the most seasoned data scientists. Combined with the need to build and understand consensus, this can be a challenging task.

Once we have a clearly defined objective, the next major task involves the data itself. At this stage, we need to address questions like “are we collecting the data we need to speak to our objectives?” and “are we confident that the data we are collecting is correct (or as correct as reasonably possible)?” In practice, this step is often interwoven with the objectives step. Sometimes we need to adjust our objectives, because the data may not support answering them with confidence.
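Verifying the data can be as simple as running basic sanity checks before any modeling begins. The sketch below is hypothetical; the field names and validation rules are invented for illustration:

```python
# Hypothetical data-quality check on grant records before modeling.
# Field names and rules are invented for this example.
records = [
    {"grant_id": "G-001", "amount": 50000, "duration_months": 12},
    {"grant_id": "G-002", "amount": None,  "duration_months": 18},
    {"grant_id": "G-003", "amount": -500,  "duration_months": 6},
]

def validate(record):
    """Return a list of problems found in one grant record."""
    problems = []
    if record["amount"] is None:
        problems.append("missing amount")
    elif record["amount"] <= 0:
        problems.append("amount out of range")
    if not 1 <= record["duration_months"] <= 60:
        problems.append("implausible duration")
    return problems

for r in records:
    issues = validate(r)
    if issues:
        print(r["grant_id"], "->", ", ".join(issues))
```

Checks like these surface gaps early, which is often what forces the conversation back to the objectives: if a key field is missing or unreliable, the objective may need to change.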

After the objective has been established and the data verified, the next step is to analyze the data and create the predictive model(s). The key at this stage is to understand the models and their application. The good news here is that software such as SmartSimple Janus has been specifically designed to provide a powerful platform for analytical modeling.
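To give a feel for what “creating a model” means, here is a deliberately naive sketch: a model that predicts success probability from historical success rates by program area. The data is invented, and real modeling tools (including the platform mentioned above) are far more sophisticated; this only shows the shape of the step:

```python
# A minimal, hypothetical sketch of fitting a predictive model to past
# grant data. The history and the frequency-based "model" are illustrative.
from collections import defaultdict

past_grants = [
    ("education", 1), ("education", 1), ("education", 0),
    ("health", 1), ("health", 0), ("health", 0), ("health", 0),
]

def fit(history):
    """Learn the historical success rate per program area."""
    totals = defaultdict(lambda: [0, 0])  # area -> [successes, count]
    for area, succeeded in history:
        totals[area][0] += succeeded
        totals[area][1] += 1
    return {area: s / n for area, (s, n) in totals.items()}

model = fit(past_grants)
# Predict: estimated probability a new application in each area succeeds.
print(round(model["education"], 2))  # 0.67
print(model["health"])               # 0.25
```

Even a toy model like this makes the “understand the model and its application” point clear: the prediction is only as good as the history it was fit on, and you need to know what the number actually represents before acting on it.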

The payoff from predictive analytics is what should get grantmakers excited. The list includes awarding grants to the applicants with the best potential for success, doing more with less, justifying actions, protecting against fraud, and avoiding waste. These are only some of the advantages that a well-designed analytics program can deliver.

So you’re ready to get started, what’s next? Download our Intro Guide to Analytics for Grantmakers. Or if you’re ready to chat with our Advanced Analytics expert, email us at sales@smartsimple.com or call 1-866-239-0991.


Tim Daciuk

Tim Daciuk is Director, Advanced Analytics, with SmartSimple. Tim helps clients understand the value of predictive analytics and how it aligns with their grants and funding strategies. He provides everything from a business understanding to in-depth technical work. Additionally, Tim is an accomplished speaker and has spoken at conferences around the world. Tim has 30 years of experience in statistics, data mining, and predictive analytics. Of late, Tim has specialized in the use of data and text mining and how these technologies can be applied in different industries.

  • David Goodman, PhD

    Tim, this is great advice and I love the idea of building the capacity of organizations to use predictive models to assess their work and inform their decision-making. My only request is that we as an industry / sector do more to ensure that the “models” that they are using to assess and make predictions are accurate and meaningful in the first place. This involves a lot more work to ensure that the objectives, outcomes, and indicators are appropriate and meaningful based on scholarly research or credible practice, as well as incorporate the context in which the work is implemented. This allows us to minimize the likelihood that our model is incorrect or misspecified and that some other factor (outside of our model) could be driving our results. I suspect that you would agree that we need more of this formative work to make sure that our summative work is meaningful and precise. Only then can we truly be certain of our impact and have confidence in sharing our findings to expand that impact through other organizations and in other settings.