Reprise: Charity Navigator Makes Tweaks, Misses Opportunity

Imagine you’re an established major league baseball player, with a generous contract. You’re mired in a long batting slump, and the fans have grown restless. The New York Times, no less, publishes a story announcing that you’ve changed your approach and are about to break out. The next day you’re at bat, and the opposing pitcher lobs a big, slow, hanging curve right over the middle of the plate. You swing….

… and you miss the ball by six feet.

In this story, “you” are Charity Navigator (CN), the prominent evaluator of nonprofits. CN has had fundamental problems with its methodology, as Public Interest Management Group (PIMG) detailed in a white paper we published this past winter. I also wrote about the topic in a recent blog entry. The issues were serious enough that we recommended charitable donors and nonprofits use alternate means of evaluation until CN corrects them. It was therefore encouraging to see the Times article announcing forthcoming CN changes this month. Could CN be righting its ship?

Unfortunately, the answer is a resounding “no.” After a lengthy review process, CN has made minor tweaks, and failed to adopt practical changes that could have instilled integrity into a system that has struggled to meet its potential.

The problems, in a nutshell, were (and still are) that CN’s Financial Health Rating has not actually gauged financial health. Worse, this rating method has undermined efforts within the nonprofit sector to counter the Overhead Myth, the idea that spending on such things as administration and operational infrastructure is inherently wasteful (a fallacy that can lead donors to restrict gifts in counterproductive ways). Further, CN has no rating system for effectiveness in achieving mission-related results. Consequently, donors haven’t received the information they need most, and CN ratings have falsely conveyed a seal of approval (or disapproval) of rated nonprofits.

All of the core flaws are still present, despite updates to the methodology.

Without going too far into the weeds, here’s what CN has changed:

  • One of the seven components of the Financial Health Rating has been swapped out: the liabilities-to-assets ratio has replaced program revenue growth.

  • Data for several component metrics will be averaged over three years, rather than just the most recent year.

  • It’s now possible to score a perfect “100” on the Financial Health Rating without having zero administrative expenses—instead, most nonprofits can spend up to 15% on administration.

  • Adjustments have been added for two special cases in nonprofit accounting: indirect cost allocation and joint cost allocation.

More importantly, here’s what has remained the same:

  • CN’s four “financial efficiency” metrics remain in the Financial Health Rating formula, and less spending on the core functions of administration and fundraising is still considered “better.”

  • Program expense growth, described as a measure of “financial capacity,” is still part of the formula.

  • CN continues to weight each of the seven alleged financial health metrics equally in its rating formula.

As PIMG reported in our white paper, none of CN’s four “financial efficiency” metrics is a valid indicator of an organization’s financial health, and their use is regressive and detrimental to the nonprofit sector. (Yet CN paradoxically signed a letter urging an end to the Overhead Myth in 2013!) Program expense growth is similarly tangential to financial health, and can actually mask a toxic effect termed the Nonprofit Starvation Cycle. The equal weighting of all seven metrics is arbitrary and unjustified; it gives huge influence to the “efficiency” metrics, which are tangential at best and harmful at worst.
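
To see why the weighting matters, here’s a minimal sketch in Python of an equally weighted seven-metric composite. The component names and scores are hypothetical stand-ins invented for illustration; they are not CN’s actual metrics, data, or scoring details.

    # A toy, equally weighted seven-metric composite. Component names and
    # scores are hypothetical, not CN's actual metrics or data.
    components = {
        # Stand-ins for metrics that track genuine financial health:
        "liabilities_to_assets_score": 40,   # weak balance sheet
        "balance_sheet_strength_score": 35,  # thin reserves
        # Stand-ins for the tangential "efficiency" and growth metrics:
        "program_expense_pct_score": 95,
        "admin_expense_pct_score": 95,
        "fundraising_expense_pct_score": 95,
        "fundraising_efficiency_score": 95,
        "program_expense_growth_score": 95,
    }

    # Equal weighting: every metric contributes 1/7 of the composite.
    composite = sum(components.values()) / len(components)
    print(f"Composite: {composite:.1f} / 100")  # -> Composite: 78.6 / 100

Even with failing marks on the two stand-ins for genuine financial health, this hypothetical organization scores nearly 79 out of 100: the noise carries the rating.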

The liabilities-to-assets ratio does provide insight into balance sheet strength, and its addition is an incremental improvement. But consider that 5 of the 7 component metrics tell us nothing about the actual financial health of an organization, and with equal weighting that noise constitutes 5/7, or roughly 71%, of the Financial Health Rating. Shockingly, the formula still lacks any measure of profitability, a widely accepted (and easily measured) indicator of financial health.

The net effect of CN’s changes is thus minimal. The Financial Health Rating is a muddled hodgepodge of financial data that tells us little of relevance. And CN still lacks a method for assessing comparative organizational effectiveness, which is really what it’s all about. The bottom line: CN’s charity ratings, as a whole, remain invalid.

(A host of others agree; see the discussion at the bottom of this NPQ article.)

How did this happen?

I can’t speak for Charity Navigator, but here’s a hypothesis:  

I once heard an intriguing idea expressed by a trainer in a group of nonprofit managers. “We become our clients,” she said. This could be why some social service agencies seem to be in crisis management mode much of the time, why more than a few social justice organizations have bitter feuds with like-minded rivals, why conservation advocates frequently duplicate services, and so on.

CN’s clients are nonprofits, which typically make decisions by committee. Committees strive for consensus, have lots and lots of meetings, can make simple tasks convoluted through all this process, and often end up with watered-down, lowest-common-denominator decisions. CN made these incremental, ineffectual—yet oddly complicated—tweaks through a committee of “experts.”
 
After a big whiff, we’re about where we started. Again I pose a question that goes way beyond CN, which is just one player in a much larger drama:
 
What is it about nonprofit culture that makes our sector so accepting of “standards” that have little basis in evidence or rational thinking?

Instead of pondering that, I’m going back to something much more tangible—rethinking my imaginary baseball career.