The Nonprofit Research Collaborative released its first report today, a survey of 2,349 charities, 163 foundations, and 386 grant makers on fundraising so far in 2010 compared to 2009. The NonProfitTrends blog has a good summary of the findings, but the punch line seems to be this: like many other parts of the economy, fundraising in the nonprofit sector seems to have bottomed out and is starting its presumably long climb back up. We are a long way from the levels of even four or five years ago, but the trends generally point in the right direction. That’s great news, of course, but it’s not clear just how much rethinking – community engagement, governance, business models – the sector has done in the past few years, and I wonder if a steady diet of good (if incremental) news will diminish the opportunity and motivation to probe deeply and ask hard questions about improving the capacity and effectiveness of the social sector.
Earlier in November, a coalition of outfits including the Communications Network, the Philanthropy Awareness Initiative, and the Williams Group published a really interesting report questioning the value of the annual reports produced by philanthropic foundations: Talking to Ourselves? A Critical Look at Annual Reports in Foundation Communications. The good: the study was motivated by a terrific question (are annual reports worth the effort?) and they nailed some important insights. The bad: a little too much straw man in the study’s actual research questions, even though their conclusions all seem pretty sound. What do I mean? One of the key questions they asked: how many people actually read the annual reports published by foundations? The answer, predictably, is not many. They get points for not assessing readership among the entire American public, and instead measuring it among an “engaged public.” But this narrower “engaged public” audience was still made up of tens of millions of people, and it hardly seems fair or useful to judge recall or brand awareness of any particular foundation against an audience of that size. The issue seems to me like a classic communications challenge: identify precisely who your audience is and what you hope to accomplish, develop a strategy for accomplishing those specific outcomes with those specific audiences, measure effectiveness, and adjust. It’s tough to see why very many foundations, if any, would see their strategically targeted audience as including 12% of the American population.
Another example: the report assembles all of the annual report objectives noted by their survey respondents and then criticizes foundations for trying to do too much. I suspect that many foundations really do try to accomplish too many objectives with their annual report, but I might have approached this question a little differently.
That said, it requires some guts to take on a deeply established practice; these folks did a lot of work in asking the question, and they’ve done a good job of starting a valuable conversation. One particularly interesting finding is the internal value of the annual report process that many foundations identified. Creating an annual report resulted in “significant internal benefits for their foundation, prompting a regular chance for reflection, creating a communications discipline, and generating new content that can be repurposed in other vehicles.”
Research question quibbles aside, the report does ask a tough, important question – is it worthwhile? – about a widespread, inertia-bound foundation practice, and their conclusions are spot-on: foundation communications tools should be targeted and strategic rather than broadcast, foundations should measure the impact of those tools, most people don’t find annual reports valuable, and skipping the annual report process can save a foundation substantial cash, time, and environmental impacts. And the report generated some good – and ongoing – online chatter, including PhilanTopic‘s comments and a post on the Public Policy Communicators of New York blog highlighting the upcoming “no-holds-barred online conversation” about the report.
They clearly mean for their report to be the beginning of a conversation. Hats off to them for asking the question, doing the research, and inviting the engagement. You can learn more and plug in to the conversation at the WhyAnnualReports.org website.
The Harvard Business Review blog posted a couple of weeks ago on some findings from a massive ten-year study of management practices, and identified three in particular that seem most associated with successful mid-sized companies: a) ruthless monitoring and continuous improvement across their entire process, b) establishing challenging performance targets for their employees, and c) energetically incentivizing and rewarding high performers and weeding out underperformers. It’s interesting but not surprising that – as far as we can tell – not one of these practices is pervasive among nonprofits. It’s easy to come up with excuses for why these practices don’t easily apply to nonprofit organizations. It’s much harder, and I’ll wager a lot more useful, to figure out how they do apply.
Among the hottest of the hot topics in the philanthropy world these days is nonprofit evaluation, and all eyes are on Charity Navigator in particular as they move from a narrow focus on financial measures to a much broader assessment of organizational effectiveness. This is good news for the sector, no doubt. We’ve long believed that a narrow focus on metrics like overhead ratios and program expenditures (core elements of Charity Navigator’s methodology) is not only misguided but really damaging, encouraging nonprofits to focus on how their expenditures look to external eyes rather than on how much they actually get done with those dollars, and creating huge disincentives for nonprofits to make critical capacity investments. Even worse, those metrics probably don’t tell us anything consistent about an organization’s actual impact or effectiveness.
As Tactical Philanthropy noted in a nice write-up earlier this week, “Charity Navigator is the 800lb gorilla in the charity rating space,” so watching them lay out a path for major changes in their rating system over the next couple of years really is a big deal. Charity Navigator, ironically, probably gets more credit than anyone else for making overhead ratios so important, but with new leadership and increasingly sophisticated thinking in the philanthropic world about evaluating nonprofit performance, they will play a large role in shifting the paradigm.
As they reported at the SOCAP10 conference last month, their new system will consider overhead ratios, working capital (cash on hand), and a liabilities-to-assets ratio but will also focus heavily on third party reviews of organizational effectiveness. Accountability and transparency will also figure in the rating system.
There’s plenty of opportunity for missteps, but watching an industry giant like Charity Navigator lead this critical cultural change in the philanthropy community is a welcome sight.
Do any nonprofit degree programs teach this? They should.
How to Manage Your Board:
- Guiding your board toward the questions you want them to ask and away from those you don’t want them to touch.
- How to make sure your board feels like it was their idea.
- Preventing your board from making really dumb decisions.
- Enabling your board to push and challenge you.
Groundwire has a nice 2011 conference schedule roundup . . . worth checking out as you start to figure out your budget and plans.
The Monitor Institute has a new data visualization tool designed to help funders see relationships between their funding and grantmaking by other foundations, but potentially useful for community planning applications. It’s a very cool tool – it offers clean, clear visualizations of multiple data layers, it’s easy to navigate, and they obviously put a lot of thought into the user interface. This could be a useful tool for mapping all of the organizations that work on a specific issue within a particular community, for instance, or for mapping the relationships between different issues in a community or regional planning process.
At first blush, there are two important elements I’d love to see in the next iteration. One is a Gapminder-type capacity to show change over time. I like that you can change the date range, so you can see the relative longer-term investments across sectors, etc., but you can’t see trends unless you manually change the date from one year to the next – clunky and not easy to do. The trends matter . . . a large investment this year in a subsector or by a particular foundation may mask a downward trend, which would be just as important to notice if you are trying to understand the relationships, seams, and opportunities. The other would be some capacity to show multiple dimensions at once, again like Gapminder does. The tool basically shows a single set of flat relationships on each screen. In your mind’s eye you can build a less-flat model of how each of the flat pieces relate to one another, but the tool doesn’t really help you do that.
The distribution model is also unclear to me. If their vision is a closed, proprietary system run through the Monitor Institute, well, that might be useful for whichever funders want to play ball, but it’s a lot less useful to the rest of us. If their vision is a stand-alone, self-contained tool, I can picture a lot of very useful ways organizations could use it to map their landscapes more effectively and think more clearly about strategy.
And as the Philanthropy 2173 blog reminds us, with every data visualization tool you have to ask about the data themselves. Garbage-in-garbage-out is still the rule no matter how cool the visualization tool is.
Cross-posted on the PlaceMatters blog.
The notion that a campaign or program strategy ought to be based on a coherent idea about why those activities will produce the desired results seems pretty straightforward. In fancyfunderspeak, it’s often called a theory of change or a logic model. But there’s a debate simmering between the theory of change proponents, like Paul Brest of the Hewlett Foundation, and the skeptics, like Albert Ruesga at the White Courtesy Telephone blog.
Brest’s basic argument (as he articulates in “The Power of Theories of Change”) is that it’s tough to expect donors to donate if the grantee can’t explain why their work is likely to lead to the desired social change. The skeptics’ rebuttal seems to be that this constitutes funder overreach and that requiring an explicit theory of change creates a great deal of anxiety and extra work without any real added benefit.
Ruesga, for instance, argues that a clear theory of change is implicit in most grant proposals (“Debunking Theories of Change” Part I and Part II). His worry is that requiring an explicit theory of change is likely to either confuse grantwriters or send them down rabbit holes of formal logic and dissertation-worthy socio-political models (and yes, in case you were wondering, he includes Carl Hempel citations and linked syllogisms to make his case). If that’s really what a funder is seeking, I’m going to side with the skeptics. But plenty of grant proposals, in my experience, really aren’t built on a coherent rationale for why these activities might produce those results. Waving off a requirement for an explicit theory of change would be like dropping the expectation for a grant proposal to clearly describe the goals of a project or to outline how the grantee will evaluate success. If you are one of those funders who doesn’t care about evaluating grant outcomes, more power to you, but you ought not drop the requirement for reporting outcomes simply because you think most grant proposals already cover this. Sure, some grant proposals will include clear goals, a cogent theory of change, and an evaluation mechanism, but many don’t.
I think I get the skeptical instinct here. Funders can – and sometimes do – go way overboard. But asking prospective grantees to explain why they believe their work will succeed seems pretty reasonable to me.
Now, there is a much deeper – and I think much more interesting – debate lying barely beneath the surface here about how involved funders themselves should be in crafting the strategic vision around a movement or social change effort. Who is the steward of the strategy, and what does that mean for the relationship between funders and nonprofits, is a landscape-shaping discussion. The question has a sibling, or at least a cousin, that gets at the extent to which funders should focus on organization- and capacity-building versus actual social change campaigns.
And these two questions both beg other, even more difficult and more important questions, like why so many nonprofits doing such important work allow themselves to become so dependent on such challenging funding sources. A foundation can ask for whatever the heck it wants, after all; if you don’t like their requirements, you can pitch elsewhere. If you find the expectations of the philanthropic community to be unreasonable, then you might be better off developing a different business model altogether.
As a donor, a funder, or as a potential campaign partner, for that matter, I care a lot that you’ve got a coherent rationale for why this particular suite of strategies is likely to produce the change we are targeting, or, if we are feeling more risk-tolerant and experimental, why we think these untested approaches are worth the effort. If we really are getting stuck on the idea that grant proposals should clearly explain why the organization believes its proposed efforts will produce the desired outcomes, well, that’s not encouraging.
And a hat tip to Tactical Philanthropy for the links.