You probably know our friend, Alexa. She doesn’t have much in terms of personality, or voice inflection for that matter, but she’s a wealth of knowledge. If you aren’t familiar with Alexa, perhaps you know one of her friends, like Siri, Cortana, or Google Assistant.
Over the last decade, our smartphones, computers, and in-home devices have transformed into tools we can not only feed data into but also request data from. We can get a weather report, order a pizza, or check the score of the game simply by asking. This technology is now commonplace in our homes, so it stands to reason that the next step is putting it to work in our grantmaking.
It’s not a far reach — the technology is here. We simply need to teach it the new tasks we want it to complete. To prove it, our CTO decided to train his Echo Dot to have Alexa pull status reports from our grants management database. He did it over a weekend.
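We don’t need the details of that weekend build to see the shape of it: a custom Alexa skill is typically a small function (often an AWS Lambda handler) that receives a JSON request and returns a spoken response. Here is a minimal sketch in Python; the grant names, statuses, and slot name are hypothetical placeholders, not our actual system.

```python
# Minimal sketch of an Alexa custom-skill handler. The grant data and
# the "grant" slot are invented placeholders standing in for a real
# grants management database.

# Stand-in for a grants management database lookup.
GRANTS = {
    "riverside food bank": "under review",
    "city arts program": "awarded",
}

def handle_request(event):
    """Answer a status request with a plain-text speech response."""
    intent = event["request"]["intent"]
    grant_name = intent["slots"]["grant"]["value"].lower()
    status = GRANTS.get(grant_name, "not found")
    speech = f"The status of {grant_name} is {status}."
    # Response shape follows the Alexa Skills Kit JSON format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The skill itself is configured in the Alexa developer console, which maps spoken phrases to the intent and fills in the slot before this function ever runs.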
But how did he do it? Why does it work?
Natural language processing
Natural language processing powers the voice you hear when you ask Alexa to play you a song or Siri to take you home, and even the words you see when you start chatting online with a chatbot. It’s the technology that listens to your request, understands what that combination of words or sounds means, and generates an answer.
This is no small task. Think about how difficult and ambiguous most language is: words with multiple meanings, wordplay, run-on sentences, and more. When language is spoken, add various accents and pronunciations on top, and you realize just how complex it is to deduce a set of rules for processing it.
Most natural language processing systems are fed enormous numbers of samples, and over time they develop pattern recognition: they learn the most likely ways of phrasing the same question. But these systems are only as good as the training they get. If you have an Echo and the Alexa app, you are actually part of that training. Open the app and look at your history: for each command you gave, the app asks you to vote on whether the device heard you correctly. As you click “yes” or “no,” you are inputting data points that go back to Amazon and are used to further train the algorithms.
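That feedback loop is easier to picture with a toy illustration. This is emphatically not Amazon’s actual algorithm; it is a bare-bones sketch in which requests are matched to intents by word overlap with labeled sample utterances, and a “no, you misheard me” vote becomes a new training sample.

```python
# Toy intent recognizer: match a request to the intent whose sample
# utterances share the most words with it. All sample phrases and
# intent names are invented for illustration.

SAMPLES = {
    "weather": ["what is the weather", "will it rain today"],
    "status": ["check my application status", "is my grant approved"],
}

def classify(utterance):
    """Return the intent whose samples best overlap the utterance's words."""
    words = set(utterance.lower().split())
    def best_overlap(intent):
        return max(len(words & set(s.split())) for s in SAMPLES[intent])
    return max(SAMPLES, key=best_overlap)

def record_feedback(utterance, correct_intent):
    """Fold a user's correction back in as a new training sample."""
    SAMPLES[correct_intent].append(utterance.lower())
```

Each recorded correction makes the next classification of a similar phrase more likely to land on the right intent, which is the same dynamic, at vastly larger scale, behind those “did Alexa hear you correctly?” votes.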
Using natural language processing within the grants space has great possibilities. As our CTO showed, you could use it to quickly check the status of an application, or to pull virtually any data stored in your workspace.
Or instead of asking for help yourself, you could enable your applicants to. Chatbots are AI programs that initiate or respond to chats on websites. Your constituents could get 24-hour answers to basic questions, and as an administrator, you would be freed up to focus on the activities that generate the most impact.
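At its simplest, an applicant-facing chatbot is a lookup from topics to canned answers, with a graceful fallback. Real chatbot platforms layer NLP models on top, but a minimal rules-based sketch looks like this; the questions and answers are hypothetical examples, not any particular product.

```python
# Sketch of a rules-based FAQ chatbot for grant applicants. The topics
# and answer text are invented placeholders.

FAQ = {
    "deadline": "Applications are due by the date listed on the program page.",
    "status": "You can check your application status from your dashboard.",
    "eligibility": "Eligibility requirements are listed in the grant guidelines.",
}

FALLBACK = "I'm not sure. A program officer will follow up during business hours."

def reply(message):
    """Answer with the first known topic found in the message, else fall back."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return FALLBACK
```

The fallback is the important design choice: anything the bot can’t answer gets routed to a human rather than answered badly, so applicants still get a response at 2 a.m. without the organization staking its credibility on a guess.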
Deep learning
If natural language processing is how a device understands your request, deep learning is part of how it finds your answer. Deep learning lets systems analyze big data in ways statisticians might not think of, or might not have the time to pursue. With deep learning added, your device can go beyond pulling a data point from a defined field and instead look across all your data and come up with its own answer.
When AI first began, the focus was on machine learning: humans told the machine the facts it should know. For example, if the task was to have an AI play chess, humans would not only give it the basic rules of the game but also show it examples of common strategies, winning moves, and so on. With deep learning, that hand-holding isn’t necessary: the system is given only the basic rules. It teaches itself, comes up with its own strategies, and, as we have already seen, becomes the more skilled player.
How does this apply to grants management? What if your system could look at all the data received from this year’s applicants, compare it with the applications and subsequent progress reports from previous years, and predict which new submissions hold the greatest potential? What if it could help you invest in the right people, projects, or service areas?
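At its core, “predict which submissions hold the greatest potential” is a scoring model trained on past outcomes. A production system would use a real machine learning library and far richer data; this is only a toy sketch of the idea, a tiny logistic regression trained by gradient descent on two made-up features (requested amount in tens of thousands, and years of prior grant history) with invented outcome labels.

```python
import math

# Toy "learn from past applications" model. Features, labels, and
# training data are all fabricated for illustration.

PAST = [  # ([amount_in_10k, years_of_prior_grants], succeeded?)
    ([1.0, 5.0], 1),
    ([2.0, 3.0], 1),
    ([8.0, 0.0], 0),
    ([6.0, 1.0], 0),
]

def train(data, steps=2000, lr=0.1):
    """Fit logistic-regression weights by simple gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Predicted probability that a new application succeeds."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```

A deep learning system would replace these two hand-picked features with whatever patterns it finds in the raw application and progress-report data itself, but the workflow, train on past outcomes and score new submissions, is the same.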
Bringing the possibilities to life
At this point, the question is not so much whether it is possible, but whether it is helpful. Would having any of these features improve your day-to-day work, your constituent experience, or your ability to enact change?