Tag Archives: data

Measuring Affinity

With a variety of internal and external data readily available, prospect researchers and analysts can provide greater insight toward their organization’s goals. But for data to help grow the prospect pipeline and inform decisions, you must be able to turn it into meaningful measurements.

I learned this first-hand in my initial attempt to create an affinity score to assist with prospect identification. To prepare for the task, I asked myself a series of questions. Some of those questions included:

What is affinity?

How do I define affinity for my organization?

Who would I consider to have a high affinity for my organization?

What data is available to me that supports the statement, “this constituent has a high affinity”?

Do all significant donors or volunteers express their affinity in the same way?

Which data points shared between “high affinity constituents” and new prospects are meaningful, and which are merely coincidental?

Which data points have greater ‘weight’ than others?

Which data points should be capped at a maximum contribution to the total score?

 

All of these and more were critical in my attempt to create a score.

Please note the use of the word ‘attempt’ above. I stress this because there is quite a bit of trial and error in the path to a final product. This is a project where one must continuously validate, adapt and iterate until the results successfully inform the decisions of your team.

Do not be afraid to try this on your own. There are services that can help with the process, but Excel is a great tool to begin the data manipulation required to calculate your score. Whether you use Excel or a specialized application to develop a score, you still need to question and understand what affinity means for your organization.
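To make that concrete, here is a minimal sketch of a weighted, capped score in Excel. The columns, weights and caps are hypothetical placeholders rather than a recommended model; you would tune them through the validate, adapt and iterate cycle described above. Assume column B holds event attendances, column C volunteer years, and column D consecutive years of giving:

  Event component (hypothetical weight 2, cap 10):      =MIN(B2*2, 10)
  Volunteer component (hypothetical weight 3, cap 15):  =MIN(C2*3, 15)
  Giving-loyalty component (weight 1, cap 10):          =MIN(D2*1, 10)
  Total affinity score:                                 =MIN(B2*2,10)+MIN(C2*3,15)+MIN(D2*1,10)

The MIN() wrapper is what enforces the “maximum capacity” idea from the question list above: no single behavior can dominate the total.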

So I ask you, what is affinity for your organization?

1 Comment

Filed under Uncategorized

How to calculate date intervals in Excel

This incredibly useful function isn’t listed in Excel’s built-in formula menu.  It’s one of those “off-the-menu” items you have to know about.

 

DATEDIF computes the interval between two dates in Excel.  You could use it to compute any of the following:

  • donor’s overall giving horizon (Date of Last Gift minus Date of First Gift)
  • number of months since a donor’s last gift
  • number of years between degree date (graduation) and first gift date

 

So … you’ve got the general idea, right?  This formula is crazy fabulous!

 

Here is the syntax (from Pearson Software Consulting):

[Image: DATEDIF syntax summary from Pearson Software Consulting]
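As a quick sketch of the three examples in the list above (assuming Date of First Gift in A2, Date of Last Gift in B2, and degree date in C2; adjust the cell references to your own extract):

  Giving horizon in whole years:        =DATEDIF(A2, B2, "Y")
  Months since the last gift:           =DATEDIF(B2, TODAY(), "M")
  Years from graduation to first gift:  =DATEDIF(C2, A2, "Y")

The third argument is the unit ("Y", "M" or "D" for complete years, months or days; Pearson’s table covers the rest). Note that DATEDIF returns a #NUM! error if the start date is later than the end date.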

 

Just click on the image to read the full information available from Pearson about this formula, and start using it next time you’re trying to filter, prioritize, or just plain report out some basic information for a group of constituents.


Leave a comment

Filed under Uncategorized

Building a culture of data fluency

For the eleventy-hundredth time I read an article stating that organizations that base their strategic decision-making on data out-perform organizations that don’t.

No longer surprising, right?

So what’s the big deal?  Why isn’t every organization taking full advantage of the data we so carefully input, record and store?

That question gets discussed a lot these days too.  It seems it’s pretty difficult to get people to change their behavior (e.g., adopt a data-driven mindset) if no one is comfortable understanding data in the first place.

[Image: Forbes headline]

The headline above is from an article in Forbes dated October 2014 by H. O. Maycotte.  Mr. Maycotte explains that complex analysis from a business point of view involves a LOT of data.  He says most people just don’t know where to start analyzing and frankly don’t have the right tools to help them accomplish the work.

 

The central issue is getting people comfortable with understanding the data related to the programs they support.

 

Recently, TechTarget published a case study highlighting an online-lending organization that is taking its employees through a week-long data boot camp to build data fluency throughout the organization.

Their goal is to be one of the companies that out-perform their peers by taking advantage of data, and they’re equipping their employees with some essential skills:

  • asking for the data they need
  • summarizing their completed analysis
  • presenting their findings.

They’ve adopted new management policies to require hard facts to support all decisions.  So if you’re trying to get your department or program to move forward, you’ve got to be able to present your case.

Click on the TechTarget logo above to read the case study and find out more.  And for more discussion on the topic, click here.


Leave a comment

Filed under Uncategorized

Bookmark this – Metropolitan geography database

If you’ve ever attempted to conduct analyses of your constituency across a geographic area, then you’ve no doubt gained an appreciation for metro city coding.  These numeric identifiers group clusters of city names together into a single designation suitable for counting, sorting or reporting.

Imagine life without city codes.  Let’s say we wanted to count constituents in Houston.  Our work wouldn’t be complete unless we also counted constituents from every single named town in and around Houston, like Kingwood, Spring, The Woodlands, Humble, Sugar Land, Bellaire, Pasadena, Webster, Katy, Pearland, etc.

Talk about craziness!  I can sense we’re in agreement.
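With a metro code on each record, that whole cluster of towns collapses into a single criterion. As a hedged sketch (the column layout is hypothetical, and you should substitute whatever metro/CBSA code your own database stores; 26420 is, if memory serves, the federal CBSA code for the Houston metro area), the count in Excel becomes one formula:

  =COUNTIF(C2:C50000, 26420)

Compare that to a COUNTIF against dozens of individual city-name spellings, and you see the whole appeal.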

 

Anyway, just the other day, in my quest to translate a few city codes to city names (off-the-cuff), I stumbled upon this great online resource!

[Image: tbed logo]

This easy-to-use query tool allows you to drill into a city code to see which townships compose the metro area and it features a simple search bar.

 

tbed is the newest addition to my bookmark folder!  Click on the logo above to give it a try!


Leave a comment

Filed under Uncategorized

Bookmark this – Earnings chart for 820 occupations

The Bureau of Labor Statistics compiled average wages by occupation type for 820 different jobs in America.  The chart below highlights the top 20:

 

Nice work!  Be sure to click on the image above to get to the original source listing all 820 occupations.


Leave a comment

Filed under Uncategorized

Bookmark this – US metro economic indicators

If you work in a nonprofit organization, national economic indicators are part of the big picture.  They reveal economic disparities and highlight the areas of greatest need for families struggling to maintain their stability and independence.  These indicators influence current and future demand for education, healthcare, housing, transportation and social services.  Thankfully, the Urban Institute compiles an incredible amount of insightful and useful data, dashboard style.  Their dashboard includes data compiled for 366 of the country’s most densely populated cities.  Give it a try!

 

 

Drill-down information for Houston:

[Image: Urban Institute dashboard drill-down for Houston]

 

This dashboard is amazing!


 

Leave a comment

Filed under Uncategorized

Case study – impact report from Colorado State University

The 2015 CASE awards were just announced.  Each year I eagerly look forward to sifting through the winners to see the creative ideas our peers are rolling out in their fundraising programs.

 

Right off the bat, this winner in the Stewardship category captured my attention because, of course, it is an impact report!  Colorado State University’s veterinary school developed this wonderful report for their donors to communicate program measurements in a useful and easy-to-digest manner.

[Image: Colorado State University CVMBS impact report]

 

I love the mix of philanthropy, patient load and education all on one page.  I can easily understand why the CASE awards committee recognized CVMBS with a gold award.

To view the full impact report, click on the image above.


 

Leave a comment

Filed under Uncategorized

The value of information – Lifetime Value

I read a Harvard Business Review article recently that cited a glaring omission in our traditional accounting systems. Our accounting practices focus on revenue, inventory, and assets, but assign no value, not even a mention, to information.  For businesses that heavily leverage information to succeed, that can be a big problem, and here’s why:  when no value is assigned to information, the costs associated with managing, stewarding and upgrading information become a low priority, and there’s no way to incorporate the financial gains information generates into an ROI calculation.

 

So what to do?

 

Well, here’s an idea: we could start citing constituent value in our activity reports, status reports and project reports.  And that starts with being able to assign constituent value.  One way to do that is by computing lifetime value.  KISSmetrics offers a great how-to for understanding and calculating lifetime value (click on the image below to view the full discussion).

[Image: KISSmetrics lifetime value infographic]

 

Lifetime value, combined with other segmentation variables such as membership in more than one constituency class, giving capacity, propensity scores or recent interaction, can help build exactly the kind of quantifiable constituent information our reports have been missing.
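As a rough sketch, here is one common simplified lifetime-value formula (not necessarily the exact KISSmetrics method, and the cell layout is hypothetical): put a donor’s average gift in B2, gifts per year in C2, and expected years of giving in D2, then multiply.

  Estimated lifetime value:  =B2*C2*D2

So a donor who averages $250 per gift, gives twice a year, and is expected to keep giving for ten years carries an estimated lifetime value of $5,000.  Refinements such as retention rates or discounting future gifts can come later.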

 

If you have an example you’d like to share, please let me know!


 

Leave a comment

Filed under Uncategorized

How much data do you use?

I was reading yet another article about leveraging unstructured text data, like the contact reports, notes and comments in our database.  For most of us in this profession, this is our equivalent of internally generated big data.

[Image via sfdata.startupweekend.org]

 

Boris Evelson, the author of this article, suggested that most organizations only utilize 35% of their available structured data to inform decision making and a mere 25% of unstructured data.

 

I have to wonder about that.  How can I even start to get closer to those numbers?

His article is a succinct how-to for getting started with text analytics.  And one of the first steps is to define the use cases for extracting intelligence.  What in the world is a use case?  It’s just a scenario describing where the data lives and the context of the information you’re looking to find.  For every question you have, outline where the answer might be and what it might look like in that location.  In the best case, you’d also identify what it wouldn’t look like, as in similar data that isn’t relevant to the question at hand.

 

For example, if you worked in a college or university setting and wanted to extract information from contact reports about constituents who participated in fraternities or sororities, that would be a possible use case.  The output of the search would be the constituent ID number and the name of the fraternity or sorority.  Another possibility could be constituents who have told you they own a second home or vacation home, or constituents who have grandchildren.  If your organization has a tendency to bury relevant lifestyle or affinity details in a big text blob, anything goes when mining your text data.
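Here is a minimal sketch of that first use case in Excel, assuming contact report text sits in column A and the constituent ID in column B (a hypothetical layout, and the keyword list is only a starting point):

  =IF(SUMPRODUCT(--ISNUMBER(SEARCH({"fraternity","sorority","greek life"}, A2)))>0, B2, "")

The formula returns the constituent ID whenever any of the keywords appears anywhere in the contact report, which gives you a first-pass list of records worth a human read.  SEARCH is case-insensitive, so “Fraternity” and “fraternity” both match.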

 

There are all kinds of use cases that get very complicated, but why not start simple, right?  To get familiar with the concepts and terminology, take a look at this article by clicking on the image above.


 

Leave a comment

Filed under Uncategorized

Working with big data

I’m talking about free-text data.  We all have it, and because it isn’t readily searchable, all the meaningful intelligence lives in hiding.  For the past 3 years I’ve been scouring free text on my own and transferring keywords into coded fields we can all use.  Finding, interpreting and transforming data is slow and time-consuming, so I’m also searching for a service provider who can automate the process within an acceptable margin of error.

 

I recently read a white paper that illustrated the process of analyzing big data and figuring out how to speed up transforming portions of it into fielded data our systems can use.  Here’s a diagram that describes the iterative effort.

[Image: diagram of the iterative big data transformation process]

 

 

What can nonprofits do to limit the manual involvement required to get to the insights?  First, figure out which frequently occurring patterns have consistently uniform meanings.  Then it makes sense to build automated, coded procedures to conduct the searching, selecting and transforming.  Here’s an illustration of that process.

 

[Image: diagram of the automated search, select and transform steps]

 

Click on either of the images to read the full white paper.  It is full of ideas for injecting automation into the work of manipulating and preparing data for analysis.
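On a small scale, even a formula can stand in for that automated transform step.  Here is a hedged sketch under assumed ranges: keep a list of uniform-meaning keywords in Keywords!A2:A20, put contact report text in column A of your working sheet, and let Excel pull the first matching keyword into a coded column:

  =IFERROR(INDEX(Keywords!$A$2:$A$20, MATCH(TRUE, ISNUMBER(SEARCH(Keywords!$A$2:$A$20, A2)), 0)), "")

In older versions of Excel this needs to be entered as an array formula (Ctrl+Shift+Enter); current versions handle it natively.  It only captures the first keyword found, so treat it as a starting point for the kind of automation the white paper describes, not a replacement for it.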

 


Leave a comment

Filed under Uncategorized