Benchmark with purpose

Have you ever benchmarked against your university fundraising peers? Did you find it easy or hard? If you found it easy, you may have done it wrong.

Alright, maybe your Advancement shop benchmarked only for general information; that’s one thing. If you benchmarked for insights to act on – to inform decisions about staffing or performance expectations or institutional funding for Advancement – that’s something else. The comparisons had better be valid.

Getting apples-to-apples, as the cliché has it, is surprisingly hard work. You should be clear about what you want out of it before you commit.

Our department reports to the university on the return on investment made in Advancement. It’s a handsome return, exceeding most things that go by the name “investment.” It would be strange if it weren’t. But a positive return, even a handsome one, could be produced by a department that is underperforming, and performance issues should be addressed before the university considers additional investment. ROI alone lacks context – benchmarking provides context in the form of confidence in our performance in relation to our peers.
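
To make the calculation concrete, here is a minimal Python sketch with made-up figures. It assumes ROI is expressed as funds raised per dollar of Advancement expenditure; the numbers and the formula are illustrative only, since what counts on each side of the ratio varies by institution (more on that below).

```python
# Illustrative only: hypothetical figures and a simplified formula.
# Real benchmarking depends on agreeing what counts as "funds raised"
# and as "expenditure", which is the alignment work described in this post.

funds_raised = 42_000_000            # hypothetical countable funds for the year
advancement_expenditure = 9_500_000  # hypothetical total Advancement spending

# Two common ways of expressing the return:
raised_per_dollar_spent = funds_raised / advancement_expenditure
cost_per_dollar_raised = advancement_expenditure / funds_raised

print(f"Raised per dollar spent: ${raised_per_dollar_spent:.2f}")
print(f"Cost per dollar raised:  ${cost_per_dollar_raised:.2f}")
```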

It was a journey. It took four years to get to the point where we felt assured of the comparability of the numbers. Here are some things I learned along the way.

First, having the right comparator group is essential. The credibility of the exercise hangs on it.

Second, work with an external facilitator. Universities used to have to initiate their own partnerships, but today a number of consulting firms and organizations are doing excellent work in this area. Benchmarking is valid only when the partners provide data that is prepared roughly the same way. It takes years of effort to align on definitions; do-it-yourself initiatives can’t be sustained long enough to yield value.

Third, don’t spread limited time and resources over multiple benchmarking efforts. Better to pick one group and stick with that group. (Unless you’ve got a lot of capacity.) The work of assembling the data falls to my team; when a new invitation to benchmark comes in, we look at it, but most of the time we decline to participate.

Fourth, nominate one person to own it, even if several people are involved. A director of finance will provide expenditure data, human resources will provide FTE counts by function, development reporting will provide fundraising totals – but one person, possibly an analyst with strong knowledge of the business, should be responsible for keeping an eye on annual deadlines and monitoring the quality of the submitted data.

One clear owner will also be better able to engage with his or her counterparts among the benchmark partners to ensure consistency in data definitions and processes. These conversations are more efficient when each partner sends only one or two knowledgeable people to the table.

And finally, the point that is most important and worth extra effort: the goal is data that is comparable across institutions. The ROI calculation is very sensitive to how we count, on both the fundraising side and the expenditure side. Discrepancies among peer schools may be footnoted, but leadership is not reading footnotes. Multiple asterisks on everything degrade the value of the exercise.

Alert leaders to sources of variability that could affect the integrity of their decisions – and work with your peers and the facilitating firm or organization to reduce that variability.

Measuring engagement can answer crucial questions, with a little more work

Measuring alumni and constituent engagement is no longer a new thing. Many Advancement shops do it. Not all of them have settled on a solid key performance indicator, or set of KPIs.

We are still evolving on this front. After measuring consistently for five or six years, it’s time to consolidate what we’ve learned and align the tool with a new operating model for engagement.

A lot of work got us this far. We laboured over the specific components of engagement (giving, event attendance, volunteering, accepting visits, and other things) and how to weight them. We created a score for each individual, and developed some aggregate reporting.
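
For illustration, here is a minimal Python sketch of what a weighted, per-individual score can look like. The component names and weights below are placeholders, not the ones we actually settled on.

```python
# Sketch of a weighted engagement score. Components and weights are
# hypothetical placeholders, not our production model.

WEIGHTS = {
    "gave_this_year": 10,
    "attended_event": 5,
    "volunteered": 8,
    "accepted_visit": 7,
}

def engagement_score(activities: dict) -> int:
    """Sum the weights of the activities a constituent actually did."""
    return sum(weight for key, weight in WEIGHTS.items() if activities.get(key))

alum = {"gave_this_year": True, "attended_event": True, "volunteered": False}
print(engagement_score(alum))  # 15
```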

The work was good, but now we need to understand the significance of our metrics and how they can spur action. More work lies ahead.

What questions to ask of our metrics? A few thoughts:

How deep? How successful are we from year to year in engaging the maximum number of people who were available to be engaged? What is the ratio of engaged to engageable? By engageable I mean all constituents who are contactable and genuinely available to us this year. The exact definition is somewhat arbitrary. If a person who graduated 15 years ago has never had any meaningful interaction with us, they are probably not “available”; including them will dilute the KPI with people beyond the reach of our communications and programs. I suggest a ratio rather than a percentage of engageable: if someone not considered engageable does come to us, we can count them in the numerator without needing to add them to the denominator as well.
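
Here is a small Python sketch of that ratio, using made-up IDs and a hypothetical rule for who counts as engageable. The point is that a surprise engager can land in the numerator without forcing a change to the denominator.

```python
# Sketch of the engaged-to-engageable ratio. The IDs and the
# "engageable" rule are hypothetical.

engageable_ids = {101, 102, 103, 104, 105, 106}  # constituents judged reachable this year
engaged_ids = {101, 103, 106, 250}               # everyone who actually engaged;
                                                 # 250 was not considered engageable

ratio = len(engaged_ids) / len(engageable_ids)
print(f"Engaged to engageable: {len(engaged_ids)}:{len(engageable_ids)} ({ratio:.2f})")
# The surprise engager (250) counts in the numerator without
# redefining the engageable pool in the denominator.
```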

How good? How successful are we in engaging who we want to engage? To what extent did we involve loyal donors, engaged alumni, major gift prospects, people with bequest intentions, influential community members, and other preferred, high-value constituencies? This measure of quality can be used to evaluate events, especially when eschewing large social shindigs in favour of smaller, higher-octane gatherings. Quality, not quantity – even in the all-digital era.
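
As a rough illustration, here is a sketch of how that quality question might be expressed for a single event, assuming we can flag high-value constituencies in the database. The segment labels and the attendee records are hypothetical.

```python
# Sketch: share of an event's attendees who belong to high-value segments.
# Segment labels and sample data are hypothetical placeholders.

HIGH_VALUE = {"loyal_donor", "major_gift_prospect", "bequest_intention"}

attendees = [
    {"id": 1, "segments": {"loyal_donor"}},
    {"id": 2, "segments": set()},
    {"id": 3, "segments": {"major_gift_prospect", "loyal_donor"}},
    {"id": 4, "segments": set()},
]

high_value_count = sum(1 for a in attendees if a["segments"] & HIGH_VALUE)
print(f"High-value share of attendees: {high_value_count / len(attendees):.0%}")  # 50%
```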

How effective? How successful are we at moving people, in meaningful numbers, from one stage of engagement to the next? We need to know what engagement looks like at each stage in order to properly locate individuals.
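
A minimal sketch of what counting those movements could look like, assuming each constituent is assigned a stage at the same point each year. The stage names and sample data are placeholders.

```python
# Sketch: year-over-year movement between engagement stages.
# Stage names and sample assignments are hypothetical.

from collections import Counter

last_year = {"A1": "aware", "A2": "aware", "A3": "involved", "A4": "involved"}
this_year = {"A1": "involved", "A2": "aware", "A3": "committed", "A4": "involved"}

transitions = Counter(
    (last_year[cid], this_year[cid]) for cid in last_year if cid in this_year
)
for (stage_from, stage_to), count in transitions.items():
    print(f"{stage_from} -> {stage_to}: {count}")
```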

Getting to these answers requires us to move on from “what’s in and what’s out.” We need to define “engageable,” decide who’s in our favoured constituency, and figure out how to quantify our engagement pipeline goals.

Data is an expensive tool. We should teach people how to use it.

When you buy a tool you have to learn how to use it, or you’ve wasted your money. Our team understood this when we implemented a new CRM system: If frontline staff used it and used it well, the investment would deliver on the promise of facilitating advancement of the mission.

Data is also a tool. Managers and decision-makers will succeed if they know how to use data. The question is, what have we done to get the most out of that investment?

During our CRM implementation, we had more than 50 working sessions with frontline staff – focus groups and training sessions that involved nearly everyone in configuring the software and applying it in their work. So many hours!

CRM was big, but our investment in data, spread over years, is much bigger. Like other advancement shops we have staff employed in the collection, creation, and management of data, staff who design and maintain the infrastructure for securely storing, assembling, and preparing the data, and staff who use the data to develop reports and business insights.

That investment far exceeds the cost of any CRM, yet has it been matched with an equivalent degree of training in its end-use by managers and decision-makers? For us and many other organizations, the answer is no.

Operations can get very good at translating between the data and the business, but staff across Advancement must be able to speak the language. Author and advisor Bernard Marr says, “… organizations that fail to boost the data literacy of their employees will be left behind because they are not able to fully use the vital business resource of data to their business advantage.” (1)

Organizations large and small, in every sector, are coming to this realization. A 2019 Gartner survey found 80 percent of organizations now plan to start developing staff competency in data literacy. (2)

Data literacy simply means the ability to understand data in the context of one’s business knowledge. It includes knowing where the data comes from, how it’s defined, and what methods were used to analyze it, along with having a view to applying it to achieve an outcome.

You don’t need to be a mechanic to drive a car. You don’t need to be an analyst to make decisions with data. The next big leap forward in data-informed decision making might lie in helping more and more staff across the organization learn how to drive.

  1. “Why Is Data Literacy Important For Any Business?” by Bernard Marr (see also “What Are The Biggest Barriers To Data Literacy?”)
  2. “Design a Data and Analytics Strategy,” Gartner Inc., 2019

Managers and decision-makers in Advancement must learn to speak data

Without the right culture, a great analytics team with all the computing power in the world is a brain without a soul. Large corporations are discovering that technology is not the obstacle to becoming data-driven. People, process, and culture are the obstacle.

Our shops are no different. Our business intelligence analysts are talented, our data is of high quality, and we’ve got the software and infrastructure. But analytics maturity is a whole-organization effort.

Analytics practitioners require three things of managers and decision-makers: Context, challenge, and action.

Context: Analysts learn about business context through the discovery process that precedes analysis, but decisions are owned by the people who run programs. Those decision owners need to approach findings with understanding, not blind faith.

Challenge: Donor behaviour is complex and not amenable to easy and definitive answers. Interpretations can and should be challenged. Analytics teams are not service desks – insert a question and out pops an answer. The process is a conversation, not a transaction. A conversation can only occur between different perspectives that nevertheless carry equal weight.

Action: Analysis, to be effective, is not merely informational. It leads to action. An analyst can’t act; only the manager or decision-maker can.

Advancement can learn from corporations’ failure to become data-driven

Organizations large and small have invested heavily in data management systems, BI software, infrastructure, highly skilled data scientists, and tools to gather the data itself. Large corporations have spent like crazy on big data and artificial intelligence, and plan to spend more. Yet they are failing to become data-driven.

A majority of technology and business executives in a 2019 survey reported that they have yet to create a data-informed culture and an organization that competes on analytics. The respondents include players like American Express and General Electric. (1)

What these corporations are discovering is that technology is not the obstacle. People, process, and culture are the obstacle.

Higher education advancement shops are not large corporations, but most of us can relate. We are not data-first, evidence-based organizations.

We and other universities have done all the right things. We’ve improved the depth and quality of our data, we track and measure not only our own activity but engagement activities of our constituency, we have brought on talented BI analysts, and we have better tools for data staging, reporting, and analysis.

Like a large corporation, we’ve beefed up operations. But analytics maturity is more than technical capability. It’s time to consider the whole organization.

I define analytics maturity as consistently making strategic decisions that are informed by data. We succeed at working with individual teams on ad hoc, tactical decision-making, and that’s real progress toward maturity. Let’s keep going!

1. “Companies Are Failing in Their Efforts to Become Data-Driven,” by Randy Bean and Thomas H. Davenport, Harvard Business Review, 5 Feb 2019

Data analytics maturity: Don’t leave people and culture out of the equation

Do a search on “analytics maturity model” and you’ll get a lot of results that focus on pure technical capability. The typical progression moves from understanding the past, to making probabilistic statements about the future, to determining how to actually influence outcomes. This is one valid measure of maturity, but as a model for Advancement it is incomplete.

The progression runs like this:

  1. Descriptive analytics (reporting and business intelligence that answers “What happened? What is happening?”), advancing through …
  2. Predictive analytics (forecasting or predictive modeling that answers “What will happen? Who is most likely to do ‘x’ in future?”), and reaching a pinnacle at …
  3. Prescriptive analytics (answering the questions “Why did ‘x’ happen?” and “How can we make ‘y’ happen?”).

As a result of investments in people, skills, and tools, our shop is proficient in descriptive and predictive analytics. Aside from potential one-off projects, though, we do not aspire to tackle the peaks of prescriptive analytics.

Prescriptive analytics seems better suited to mechanistic systems that produce massive amounts of data than to social systems made of complex human behaviour that produce relatively small amounts of data.

We might go there someday, but getting crazy sophisticated with analytics is not today’s goal. The goal is to consistently use data to inform decision-making. The tool could be machine learning, or it could be basic bar charts.

Some maturity models are better than others. Find one that addresses the people and process dimension, that enables you to assess the maturity of your culture of decision-making across the whole organization.