Tim Ogden of Philanthropy Action pinged me this morning about a post he wrote called, “The Worst (and Best) Way to Pick a Charity This Year.” It’s a terrific post about the reasons that the popular, yet totally artificial, use of overhead rates as a proxy for organizational effectiveness is, well, ineffective. He lists the key reasons as:
- It tells you nothing about the impact the charity has on people it’s trying to help.
- The rules for determining overhead costs are vague, and every charity interprets them differently.
- Accounting experts estimate that 75% of charities calculate their overhead ratio incorrectly.
- It discourages charities from investing in tools and expertise that would make them more effective.
I couldn’t agree more with this assessment. It has always seemed ridiculous to me that every organization should have to try to conform to a standard level of overhead whether it suits their particular efforts or organization or not. The overhead for an opera company with its own space, for instance, is astronomical in comparison to a chamber orchestra that shares space with another group. That’s simply how those organizations operate.
My experience is that one result of this kind of arbitrary expectation is that organizations simply dissemble (a nice way of saying that they lie) about which costs are associated with programs and which are overhead. Dan Pallotta writes and speaks quite passionately about the futility of behaving this way as a sector.
There are a few issues that Tim raises that are worth more discussion. The first is that his organization’s partners include GiveWell, Great Nonprofits, Guidestar, and PhilanthroMedia. But one in particular caught my eye: Charity Navigator, which has for years been among the most vocal organizations promoting the overhead ratio as a useful measure.
On his blog, Ken Berger, the CEO of Charity Navigator, wrote, “We do not agree with everything that is stated in this press release (we think overhead does have a place in rating charities, yet agree it should not be primary or overly emphasized) but we do concur with the fundamental truth that the most critical dimension in evaluating a nonprofit has to do with achieving meaningful results (we call them outcomes).” He goes on to state that he feels as though the organization hasn’t changed positions, but simply that others weren’t hearing what they were saying.
The reason people are surprised is that the efforts of watchdog groups like Ken’s have been punitive in their use of the overhead ratio to judge organizations. This may have been inevitable: it is the data that exist. Once the tax forms, the 990s, were digitized, the data on them became the standard for judging organizations. It’s like using voting as a proxy for the strength of democracy: we use it because it is easy to count, not necessarily because it tells us anything of value. The financial data were right there, so groups like Ken’s used them, and loudly proclaimed which nonprofits were good and which were bad based on their overhead. That was an easy story for the mainstream media to pick up and run with.
But what they did with it was largely more harmful than good: emphasizing across-the-board standards of acceptable overhead rates, and using those standards as proxies for effectiveness, left no room for organizations to define effectiveness for themselves.
The second sticky issue is that there isn’t an answer to this dilemma. How to go about defining and measuring effectiveness is really the $64,000 question, isn’t it? Some of the partners listed above, like GiveWell and Great Nonprofits, are dedicated to doing just that: the former through extensive research and comparison of organizational performance, the latter by crowdsourcing reviews of the best nonprofits. I’m not convinced that either method is great, particularly since neither is developed by the nonprofit organizations themselves. This is why I’ve been such a longtime advocate of participatory evaluation. (For those who don’t know, I founded Innovation Network, a national nonprofit that teaches nonprofits and foundations how to evaluate their programs.)
The answer is that there is no simple answer. However, I am awfully glad to see folks like Tim pushing back against the punitive pathway we have been on of rewarding groups that are spending, or pretending to spend, everything on programs at the expense of their organizations. Even if nothing immediately replaces it, busting the overhead myth is awfully important for nonprofits. I hope it spreads, particularly to those organizations that have taken great pride in announcing to donors that 99.9% of their donations go directly to programs, rather than trying to educate donors as to why it is impossible to run organizations that way.
So, let’s keep talking about how to assess effectiveness, but not at organizations: with them, as full partners in learning how to articulate appropriate measures of both process and outcomes, and then how to actually go about measuring those things.