Have you ever benchmarked against your university fundraising peers? Did you find it easy or hard? If you found it easy, you may have done it wrong.
Alright, maybe your Advancement shop benchmarked only for general information; that’s one thing. If you benchmarked for insights to act on – to inform decisions about staffing or performance expectations or institutional funding for Advancement – that’s something else. The comparisons had better be valid.
Getting apples-to-apples, as the cliché has it, is surprisingly hard work. You should be clear about what you want out of it before you commit.
Our department reports to the university on the return on investment made in Advancement. It’s a handsome return, exceeding most things that go by the name “investment.” It would be strange if it weren’t. But a positive return, even a handsome one, could be produced by a department that is underperforming, and performance issues should be addressed before the university considers additional investment. ROI alone lacks context – benchmarking provides context in the form of confidence in our performance in relation to our peers.
It was a journey. It took four years to get to the point where we felt assured of the comparability of the numbers. Here are some things I learned along the way.
First, having the right comparator group is essential. The credibility of the exercise hangs on it.
Second, work with an external facilitator. Universities used to have to initiate their own partnerships, but today a number of consulting firms and organizations are doing excellent work in this area. Benchmarking is valid only when the partners provide data that is prepared roughly the same way. It takes years of effort to align on definitions; do-it-yourself initiatives can’t be sustained long enough to yield value.
Third, don’t spread limited time and resources over multiple benchmarking efforts. Better to pick one group and stick with that group. (Unless you’ve got a lot of capacity.) The work of assembling the data falls to my team; when a new invitation to benchmark comes in, we look at it, but most of the time we decline to participate.
Fourth, nominate one person to own it, even if several people are involved. A director of finance will provide expenditure data, human resources will provide FTE counts by function, development reporting will provide fundraising totals – but one person, possibly an analyst with strong knowledge of the business, should be responsible for keeping an eye on annual deadlines and monitoring the quality of the submitted data.
One clear owner will also be better able to engage with his or her counterparts among the benchmark partners to ensure consistency in data definitions and processes. These conversations are more efficient when each partner sends only one or two knowledgeable people to the table.
And finally: This is important, and worth extra effort. The goal is having data that is comparable across institutions. The ROI calculation is very sensitive to how we count, both on the fundraising side and the expenditure side. Discrepancies among peer schools may be footnoted, but leadership is not reading footnotes. Multiple asterisks on everything degrade the value of the exercise.
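To make the sensitivity concrete, here is a minimal sketch with entirely hypothetical numbers. It shows how the same year of fundraising activity yields very different ROI figures depending on whether a school counts only cash received or also counts new multi-year pledge commitments – exactly the kind of definitional gap that has to be closed before peer comparisons mean anything.

```python
def roi(funds_raised: float, expenditure: float) -> float:
    """Dollars raised per dollar spent on Advancement."""
    return funds_raised / expenditure

# Hypothetical year at one school: $2M in cash received, plus $3M in
# new multi-year pledges, against $1M of total Advancement expenditure.
cash_received = 2_000_000
new_pledges = 3_000_000
expenditure = 1_000_000

# Cash basis: count only gifts actually received this year.
roi_cash_basis = roi(cash_received, expenditure)                      # 2.0

# Commitment basis: also count the face value of new pledges.
roi_commitment_basis = roi(cash_received + new_pledges, expenditure)  # 5.0
```

Same shop, same year, same donors – but one counting convention reports a 2:1 return and the other a 5:1 return. If peer schools mix conventions, the comparison is meaningless.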
Alert leaders to sources of variability that could affect the integrity of their decisions – and work with your peers and the facilitating firm or organization to close those gaps.