It’s a given that I’m a fan of data. Precise indicators that give the ability to establish baseline counts and segments. However, in our world we’re often left to make assumptions about whether the numerals have consistent meaning from constituent to constituent. Nothing can replace the rich information we’re able to gather when speaking directly with a constituent. And surveys are a perfect method to engage and inquire, in a very specific way.
The quote above is from a New York Times article by Tim Lahan. He describes how too often we treat big data (such as email clicks, social media likes, etc.) as insightful, when true insight can only be gleaned by placing the clicking behavior in the context of the user experience.
So instead of assuming that our constituents feel more connected to us as a result of mailing out a big report or sending a video, why not ask? The information could be used to start forming content preference models and we could spend our communications budget delivering content that resonates with the recipients.
Click on the quote above to read Mr. Lahan’s article.
I was reading yet another article about leveraging unstructured text data, like the contact reports, notes and comments in our database. For most of us in this profession, this is our equivalent of internally generated big data.
Boris Evelson, the author of this article, suggested that most organizations use only 35% of their available structured data to inform decision making, and a mere 25% of their unstructured data.
I have to wonder about that. How can I even start to get closer to those numbers?
His article is a succinct how-to for getting started with text analytics. One of the first steps is to define the use cases for extracting intelligence. What in the world is a use case? It's simply a scenario describing where the data lives and what the information you're looking for will look like in that location. For every question you have, outline where the answer might be found and what it might look like there. Ideally, you'd also identify what it wouldn't look like: similar data that isn't relevant to the question at hand.
For example, if you worked in a college or university setting and wanted to extract information from contact reports about constituents who participated in fraternities or sororities, that would be one possible use case. The output of the search would be the constituent ID number and the name of the fraternity or sorority. Other possibilities include constituents who have told you they own a second home or vacation home, or constituents who have grandchildren. If your organization tends to bury relevant lifestyle or affinity details in big text blobs, almost any use case is fair game when mining your text data.
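The fraternity/sorority use case above can be sketched in a few lines of Python. Everything here is illustrative: the sample contact reports, the field names, and the assumption that Greek-organization names are two or more capitalized Greek-letter words are all mine, not part of any specific system.

```python
import re

# Hypothetical sample of contact-report rows: (constituent_id, note text).
reports = [
    (1001, "Jane mentioned she was president of Kappa Delta at State."),
    (1002, "Discussed the annual fund; no affinity details."),
    (1003, "Bob still keeps in touch with his Sigma Chi brothers."),
]

# The use case defines what the target data looks like: organization names
# built from two or more words drawn from the Greek alphabet.
GREEK = ("Alpha|Beta|Gamma|Delta|Epsilon|Zeta|Eta|Theta|Iota|Kappa|Lambda|"
         "Mu|Nu|Xi|Omicron|Pi|Rho|Sigma|Tau|Upsilon|Phi|Chi|Psi|Omega")
pattern = re.compile(rf"\b(?:{GREEK})(?: (?:{GREEK}))+\b")

def extract_greek_orgs(rows):
    """Return (constituent_id, organization) pairs found in free text."""
    hits = []
    for cid, note in rows:
        for match in pattern.finditer(note):
            hits.append((cid, match.group(0)))
    return hits

print(extract_greek_orgs(reports))
# [(1001, 'Kappa Delta'), (1003, 'Sigma Chi')]
```

Note how the pattern also encodes what a match *wouldn't* look like: a single Greek word on its own (say, "Pi" in an unrelated sentence) is excluded because the pattern requires at least two in a row.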
There are all kinds of use cases that get very complicated, but why not start simple, right? To get familiar with the concepts and terminology, take a look at this article by clicking on the image above.
I’m talking about free text data. We all have it, and it isn’t searchable, so all the meaningful intelligence lives in hiding. For the past three years I’ve been scouring free text on my own and transferring key words into coded fields we can all use. Finding, interpreting and transforming this data is slow and time-consuming, so I’m also searching for a service provider who can automate the process within an acceptable margin of error.
I recently read this white paper that illustrated the process of analyzing big data, then figuring out how to speed up the process of transforming portions of it into fielded data that our systems can use. Here’s a diagram that describes the iterative effort.
What can nonprofits do to limit the manual effort required to get to the insights? First, figure out which frequently occurring patterns have consistently uniform meanings. Then it makes sense to build automated, coded procedures to do the searching, selecting and transforming. Here’s an illustration of that process.
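The search-select-transform step described above can be sketched as a rule table: each frequently occurring text pattern maps to a code value our structured fields can hold. The rules, code values, and sample notes below are hypothetical placeholders, not anyone's production vocabulary.

```python
import re

# Hypothetical rules: (pattern with a consistently uniform meaning, code
# value to write into a structured field when the pattern is found).
RULES = [
    (re.compile(r"\b(second|vacation) home\b", re.I), "HAS_2ND_HOME"),
    (re.compile(r"\bgrandchild(ren)?\b", re.I), "HAS_GRANDKIDS"),
]

def code_notes(rows):
    """Transform free-text notes into (constituent_id, code) records."""
    coded = []
    for cid, note in rows:
        for pattern, code in RULES:
            if pattern.search(note):
                coded.append((cid, code))
    return coded

notes = [
    (2001, "They spend summers at their vacation home on the lake."),
    (2002, "Asked about the gala; mentioned two grandchildren."),
]
print(code_notes(notes))
# [(2001, 'HAS_2ND_HOME'), (2002, 'HAS_GRANDKIDS')]
```

The point of the rule table is the iteration the white paper describes: as review turns up false positives or missed phrasings, you tighten or add patterns rather than re-reading every note by hand.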
Click on either of the images to read the full white paper. It is full of ideas for injecting automation into the work of manipulating and preparing data for analysis.
Introducing Geomancer – a free and easy way to append publicly available demographic information to your Excel spreadsheet by matching on your City, State or Zipcode columns.
Seriously, in less than 1 minute I went from this (very generic test file)
to this enhanced file – I chose the 3 appended columns from the options available based on City & State in my original sheet.
Just to be sure, I tried it again with zipcode matching. Here’s the before–
and the after!
Seriously, 1 minute. Easy and free. What’s not to love?
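Under the hood, this kind of append is essentially a table join on the shared column. I don't know Geomancer's internals, so here is only a minimal sketch of the general idea in pandas, with made-up constituent rows and made-up demographic values keyed by zipcode.

```python
import pandas as pd

# Our original sheet (hypothetical rows, like the generic test file above).
constituents = pd.DataFrame({
    "constituent_id": [1, 2, 3],
    "zipcode": ["02115", "10013", "02115"],
})

# A public demographic table keyed by zipcode (made-up values).
demographics = pd.DataFrame({
    "zipcode": ["02115", "10013"],
    "median_income": [55000, 91000],
    "population": [28000, 17000],
})

# A left merge appends the chosen demographic columns to every
# constituent row; unmatched zipcodes would simply come back empty.
enhanced = constituents.merge(demographics, on="zipcode", how="left")
print(enhanced)
```

The left join mirrors what the tool does: every row of the original sheet survives, with the appended columns filled in wherever the zipcode matched.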
OK, what is analytics, anyway? It is the study of information that yields meaningful, valuable insight into constituent activity and behavior. Analytics can be extremely useful for identifying segments of risk (donors who are not likely to give again) and segments of opportunity (donors who are most likely to become major gift donors).
But what does big data have to do with it? We already get wealth screening.
Big data goes far beyond wealth screening. This recent blog post offers a good example of how art museums capture big data from their visitors. Big data is composed of the electronic fingerprints we leave everywhere we go: credit card purchases, cell phone geo-tracking, and online interactions. Big data tells us when, where, how much and what occurred.
Organizations can collect big data by installing sensors (how many people enter the campus book store and in what places do they linger?), acquiring data from existing monitoring devices (bedside vital sign measurement equipment in hospitals), or producing an app for constituents to download on their smart phones, for personalized interaction (and data collection, of course).
This is where marketing is already going. Those of us in nonprofit development services must maintain an awareness of information trends that are driving advances in society and business. To see the full infographic (from NEC) click on either of the images above.
FC has published their annual industry blueprint about the social economy, including big ideas that matter for 2015.
The author, Lucy Bernholz, reviews events of the past year and the way they played out through the dynamics of digital society. She lists a set of civil society trends and the ways these play into matters affecting philanthropy. Topics like big data, data privacy and mobile accessibility are clearly part of the landscape of our lives and, consequently, part of the landscape of our profession.
To give us some clues for catching up with the trends, Ms. Bernholz gives us some important buzzwords that we’ve probably heard and will continue to hear more about. Here are a few:
- Artivists – the intersection of promoting ideas for change with art as a powerful communications medium
- Internet of Things (IoT) – microchips and sensors in our appliances, wristwatches and even garments are becoming more prevalent and offering access to unprecedented quantities of data.
- Pivot – Silicon Valley speak for changing the trajectory of a plan when measurements suggest the original plan isn’t achieving expected results.
Click on the image above to access this fabulous report!
Pursuant recently published a white paper titled ‘Four trends that will change the nonprofit landscape by 2020.’ When I saw the title, I definitely had to download it. Click here to read the full article by Curt Swindoll, Executive VP, Strategy at Pursuant. Here are the highlights:
Mr. Swindoll includes a discussion in each section outlining things your organization can start doing now to keep abreast of the predicted trends.