Groundwire has a nice 2011 conference schedule roundup . . . worth checking out as you start to figure out your budget and plans.
Bright Ideas
Are Unrelated Business Ventures a Good Strategy?
The revenue diversification mantra runs far and wide in the nonprofit world, and the logic is usually intuitive: diverse revenue streams help organizations protect themselves from the dramatic revenue fluctuations that can be typical of a single source. An organization overly dependent on foundation dollars, for example, particularly if it is heavily reliant on one or a few large foundation grants, may really suffer when those foundations change priorities or lose a great deal of endowment value.
But the intuition is based largely on anecdote and extrapolation, and we’ve now got our first academic study interrogating those intuitions, specifically looking at “unrelated business income” ventures and their impact on organizational effectiveness and financial health. The Nonprofit Quarterly reported last week on the new study, which found that nonprofits running income-generating businesses tend to be less efficient in their mission work. The conclusions highlight what we think is a legitimate, predictable risk of “mission distraction” when groups maintain social enterprise side businesses or other unrelated business income ventures.
It’s not clear quite what the study results mean, though. It uses program expenditures as a proxy for organizational effectiveness, for one thing, which isn’t really a measure of effectiveness at all. It’s much easier to measure inputs (program expenditures), of course, but what we should actually care about are the outputs (program outcomes). Whether intentional or not, the use of that proxy is predicated on the same logic that drives the “minimize overhead expenses” mantra, which we energetically resist. Program spending doesn’t tell us much about actual impact, whether we look at overall program spending or at program versus overhead spending. If you want to understand organizational effectiveness or impact, you have to look at the impact (or impact per dollar spent), not at spending alone.
Using program expenditures as a proxy also misses effectiveness over the long term. Are organizations with more diverse revenue streams more resilient over the long haul to fluctuations in individual revenue streams even if they seem less financially sound in any particular year? Does the experience of running what amounts to a private sector side business improve the business-running skills on the mission-oriented nonprofit side of the organization? Although the study found that organizations with business ventures unrelated to the mission were more likely to be experiencing financial distress (using a liabilities-to-assets ratio), the study doesn’t really tell us anything about whether those ventures are causing the distress, are caused by it, or are unrelated to the distress altogether.
And as the study’s author (Dr. Rebecca Tekula) speculates in the Nonprofit Quarterly blog comments, “nonprofits are seeing just as much difficulty with their income-generating activities as all corporations have during the last two years.” This highlights what I think is the most important question worth asking: under what circumstances do nonprofits successfully execute business ventures? I like the inquiry, and appreciate the study as a first stab at a complicated question, but I think the real value here is (hopefully) catalyzing the inquiry as opposed to any clear conclusions about when and how entrepreneurial ventures by nonprofits make sense.
Philanthropic Equity
The Fast Company blog had a nice piece a few days ago on what the Nonprofit Finance Fund Capital Partners call “philanthropic equity.” The basic premise of this approach involves using equity capital to fund nonprofit capacity building. Among the interesting angles: a nonprofit accounting system that tracks growth capital separately from general operating revenue. It doesn’t sound so radical, but for most nonprofits, I’ll wager, it would dramatically shift the way they think about budgeting, accounting, and financial planning.
Nonprofit folks have heard for years about the importance of diversifying revenue streams, but outside of shifting from foundation dollars to donations and throwing in some events, I don’t have the sense that the nonprofit sector is pulling this off in any substantial way. Fast Company blogger Alice Korngold doesn’t talk much about what some of the creative approaches funded by the firm might look like. I think one of the challenges for this sort of effort to improve long-term financial viability for nonprofits is a lack of visibility for good examples of alternative revenue models. It’s not that the examples don’t exist, but they aren’t really part of the mainstream nonprofit conversation.
I suspect that the bigger challenges are cultural, though. Nonprofit folks often have a tough time conceptualizing revenue models that involve monetizing anything. It’s not a lack of imagination, but the norms of the sector itself – “We’re nonprofits . . . we don’t do that sort of thing” – can make it pretty hard to see some of the options. And it’s not just the idea of monetizing that can be hard to get your head around: accepting private investment capital and making large investments in long-term capacity (which looks a lot like what funders often call “excessive overhead”) are moves the culture of nonprofits tends to resist rather than welcome.
One example of overcoming this resistance: A women’s leadership organization focused heavily on training recognized – thanks to a new board member with substantial corporate experience – that the market for their workshops was much larger than they had realized and willing to pay much more than they were charging. They substantially increased their rates, generating a great deal more revenue while still fulfilling their mission of teaching leadership skills to women. To solve the access problem – making sure that women of all means could participate – they created new scholarship mechanisms, which turned out to be easy to fund because of the increased revenue stream. This was a pretty radical shift, though, and I suspect that the conversations in the board meetings and among staff were challenging ones.
We’ve got a lot of work to do if we are serious about helping the nonprofit world think a little differently about how to approach the work of doing good.
The Monitor Institute’s Cool New Data Tool
The Monitor Institute has a new data visualization tool designed to help funders see relationships between their funding and grantmaking by other foundations, but it’s potentially useful for community planning applications as well. It’s a very cool tool – it offers clean, clear visualizations of multiple data layers, it’s easy to navigate, and they obviously put a lot of thought into the user interface. This could be a useful tool for mapping all of the organizations that work on a specific issue within a particular community, for instance, or for mapping the relationships between different issues in a community or regional planning process.
At first blush, there are two important elements I’d love to see in the next iteration. One is a Gapminder-type capacity to show change over time. I like that you can change the date range, so you can see the relative longer-term investments across sectors, etc., but you can’t see trends unless you manually change the date from one year to the next – clunky and not easy to do. The trends matter . . . a large investment this year in a subsector or by a particular foundation may mask a downward trend, which would be just as important to notice if you are trying to understand the relationships, seams, and opportunities. The other would be some capacity to show multiple dimensions at once, again like Gapminder does. The tool basically shows a single set of flat relationships on each screen. In your mind’s eye you can build a less-flat model of how each of the flat pieces relate to one another, but the tool doesn’t really help you do that.
The distribution model is also unclear to me. If their vision is a closed, proprietary system run through the Monitor Institute, well, that might be useful for whichever funders want to play ball, but it’s a lot less useful to the rest of us. If their vision is a stand-alone, self-contained tool, I can picture a lot of very useful ways organizations could use it to map their landscapes more effectively and think more clearly through strategy.
And as the Philanthropy 2173 blog reminds us, with every data visualization tool you have to ask about the data themselves. Garbage-in-garbage-out is still the rule no matter how cool the visualization tool is.
Cross-posted on the PlaceMatters blog.
Why Will This Work?
The notion that a campaign or program strategy ought to be based on a coherent idea about why those activities will produce the desired results seems pretty straightforward. In fancyfunderspeak, it’s often called a theory of change or a logic model. But there’s a debate simmering between the theory of change proponents, like Paul Brest of the Hewlett Foundation, and the skeptics, like Albert Ruesga at the White Courtesy Telephone blog.
Brest’s basic argument (as he articulates in “The Power of Theories of Change”) is that it’s tough to expect donors to donate if the grantee can’t explain why their work is likely to lead to the desired social change. The skeptics’ rebuttal seems to be that this constitutes funder overreach and that requiring an explicit theory of change creates a great deal of anxiety and extra work without any real added benefit.
Ruesga, for instance, argues that a clear theory of change is implicit in most grant proposals (“Debunking Theories of Change” Part I and Part II). His worry is that requiring an explicit theory of change is likely to either confuse grantwriters or send them down rabbit holes of formal logic and dissertation-worthy socio-political models (and yes, in case you were wondering, he includes Carl Hempel citations and linked syllogisms to make his case). If that’s really what a funder is seeking, I’m likely to side with the skeptics. But plenty of grant proposals, in my experience, really aren’t built on a coherent rationale for why these activities might produce those results. Waving off a requirement for an explicit theory of change would be like dropping the expectation for a grant proposal to clearly describe the goals of a project or to outline how the grantee will evaluate success. If you are one of those funders who doesn’t care about evaluating grant outcomes, more power to you, but you ought not drop the requirement for reporting outcomes simply because you think most grant proposals already cover this. Sure, some grant proposals will include clear goals, a cogent theory of change, and an evaluation mechanism, but many don’t.
I think I get the skeptical instinct here. Funders can – and sometimes do – go way overboard. But asking prospective grantees to explain why they believe their work will succeed seems pretty reasonable to me.
Now, there is a much deeper – and I think much more interesting – debate lying barely beneath the surface here about how involved funders themselves should be in crafting the strategic vision around a movement or social change effort. The question of who stewards the strategy, and what that means for the relationship between funders and nonprofits, is a landscape-shaping discussion. The question has a sibling, or at least a cousin, that gets at the extent to which funders should focus on organization- and capacity-building versus actual social change campaigns.
And these two questions both raise other, even more difficult and more important questions, like why so many nonprofits doing such important work allow themselves to become so dependent on such challenging funding sources. A foundation can ask for whatever the heck it wants, after all; if you don’t like its requirements, then you can pitch elsewhere. If you find the expectations of the philanthropic community to be unreasonable, then you might be better off developing a different business model altogether.
As a donor, a funder, or as a potential campaign partner, for that matter, I care a lot that you’ve got a coherent rationale for why this particular suite of strategies is likely to produce the change we are targeting, or, if we are feeling more risk-tolerant and experimental, why we think these untested approaches are worth the effort. If we really are getting stuck on the idea that grant proposals should clearly explain why the organization believes its proposed efforts will produce the desired outcomes, well, that’s not encouraging.
And a hat tip to Tactical Philanthropy for the links.
Measure What Matters
It goes without saying that data helps people make decisions. When deciding how much to pay for a house you want to know the sales price of comparable homes in the neighborhood. When looking for a place to go out for dinner you might look at the average star rating on Yelp.com. This data gathering is tied to a personal goal: paying the right price for your new home and finding a yummy dinner.
But many nonprofits gather and analyze vast amounts of web analytics data that doesn’t help make decisions and isn’t tied to organizational goals. Reports are prepared that talk primarily about pageviews from one month to the next. Detailed reports may display pageviews for specific pages instead of the full site. You may see the number of visits that come via Google and perhaps the search terms used.
Year-to-year goals for websites, when they exist at all, may call for increasing overall pageviews by 10% or some other amount.
What’s missing? Data that ties to outcomes so that actionable decisions can be made about content, design, search optimization, advertising and so on.
Often, what I don’t see is leaders who know how to ask the right questions about web analytics and related online strategies. It is taken on faith that rising pageviews mean success. These are, as Avinash Kaushik calls them, “faith-based initiatives,” and they are an unfortunate basis for resource decisions. What’s missing are analytics tied to measurable outcomes driven by organizational goals.
A smarter approach is to focus on Key Performance Indicators (KPIs), preferably those tied to program and organizational goals. This doesn’t need to mean an extra layer of data, analysis and review (aka more work for everyone). But it does mean agreement on asking the right questions and being willing to base tactical decisions on what the data is telling you. In other words, if your goal is to increase readership in California and your blog posts on California issues aren’t getting more pageviews, then consider adjusting content.
Put another way, increasing web traffic has nothing to do (in and of itself) with your website meeting organizational goals. If 500,000 people a month view your site but nobody shares content, makes donations, comments on your posts, signs up for your email lists, buys your t-shirts, writes blog posts relating to your content or otherwise acts, are you any better off than you were when 50,000 people a month viewed the site and nobody did anything?
If you are better off, then how do you know?
It’s that “how do you know” that counts, right? What are the actions that people are taking as a result of viewing and engaging with your content that matter and how do you measure those?
Perhaps a goal is to increase your reach in California, or the West Coast in general, because you are working with key members of Congress on an issue that affects trade (or the environment, or whatever) in these states. You can home in on metrics for pageviews (preferably of a certain type of content) from that region. Are they rising during a key timespan? Better yet, can you identify why these pageviews are rising, and whether the rise is tied to visits from AdWords campaigns, content in regional blogs you’ve contacted, or another action intended to drive more traffic?
In this case a KPI is still as simple as pageviews, but it is tied to reach – specifically, your organizational reach in an area key to a program goal.
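To make this concrete, here’s a minimal sketch of computing that kind of regional-reach KPI from exported analytics rows. The field names (`region`, `topic`, `pageviews`) and the sample data are invented for illustration; a real analytics export will look different, so treat this as a sketch of the idea rather than a recipe.

```python
def regional_pageviews(rows, region, topic=None):
    """Sum pageviews for a target region, optionally limited to one content topic."""
    total = 0
    for row in rows:
        if row["region"] != region:
            continue
        if topic is not None and row["topic"] != topic:
            continue
        total += row["pageviews"]
    return total

# Hypothetical exported rows -- field names and values are made up.
rows = [
    {"region": "CA", "topic": "california-issues", "pageviews": 420},
    {"region": "CA", "topic": "national", "pageviews": 130},
    {"region": "OR", "topic": "california-issues", "pageviews": 55},
]

print(regional_pageviews(rows, "CA"))                       # 550
print(regional_pageviews(rows, "CA", "california-issues"))  # 420
```

Run monthly against the same export, the second number is the KPI: pageviews of California content from California, tracked over the timespan that matters to your program goal.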
Another key performance indicator helpful to nonprofit organizations falls into the category of “conversion.” Are your site visitors doing something after seeing your content that A) you can measure and B) helps meet at least one of your goals? It is one thing to say that a blog post is getting 500 pageviews when similar posts used to get 300 pageviews. But it is rare that this tells you much about how (or even whether) you are moving the needle toward changing policy (or raising money, or building an engaged constituency).
To measure a conversion metric think about what you want people to do after visiting the page. Should they go to a next page with more detailed or related content (a lightweight conversion but an indicator of interest)? Should they subscribe to an email list? Should they share photos or a link on Facebook? Should they comment on the post? Should they make a donation or purchase an item?
If visiting a page is point A, then what is point B? Most analytics tools will help you attach values to that point B and/or let you see user navigation paths.
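As a rough illustration of the point-A-to-point-B idea, a conversion rate can be computed from a simple event log like the one below. The event names and visitor IDs here are made up, and in practice your analytics tool’s goal tracking will do this for you; this just shows what the arithmetic is.

```python
def conversion_rate(events, from_action="view", to_action="donate"):
    """Fraction of visitors who performed to_action among those who performed from_action."""
    viewers = {e["visitor"] for e in events if e["action"] == from_action}
    converters = {e["visitor"] for e in events if e["action"] == to_action}
    if not viewers:
        return 0.0
    return len(viewers & converters) / len(viewers)

# Hypothetical event log: three visitors viewed the page, one donated.
events = [
    {"visitor": "a", "action": "view"},
    {"visitor": "b", "action": "view"},
    {"visitor": "b", "action": "donate"},
    {"visitor": "c", "action": "view"},
]

print(conversion_rate(events))  # 0.333... -- one of three viewers converted
```

Swap in whatever point B you care about – “subscribe,” “share,” “comment” – and the same calculation applies.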
Don’t be disappointed if conversion rates are low. Really low. Two percent can be considered a solid conversion rate. The reality is that people are looking for information, not wanting to be “converted” to what you need. It’s like door-to-door canvassing: tough, but results count, and you can learn from results and try to improve your technique. Test tweaks to design, page layout, headlines, related offers or content.
Storytime
Our job as executive directors, or as nonprofit community leaders of any kind, boils down to our skill as storytellers. Our job, simply put, is to tell compelling stories.
A compelling story captures the imagination.  This is particularly obvious with donors. We have to offer a story about why our organization’s work is so important. Our storytelling has to make donors feel that they are part of that story, that they matter, and that their contributions will truly make a difference.
That it’s a story doesn’t mean it can’t be true; on the contrary, it needs to be both true and authentic or it probably won’t be very compelling.
And this is true with just about everyone else we interact with as well. Our volunteers are almost the same as our donors: they need to feel that they are valued and that their contributions matter to something really important that they care about. Our board and our staff all need to feel they are part of something vital. Every time we talk with a reporter, our storytelling skills are put to the test, just as they are when we build collaborations with partners and when we plead our case before a judge.
Measuring Impact
Measuring the results of a nonprofit’s campaign can be really important to understanding and maximizing your impact, as Ted wrote a bit about yesterday (“Getting to Stories With Metrics”). It can also be a tricky business. Aside from needing to overcome what is often pervasive resistance to the idea, and allocating the resources to do a good job, you also have to figure out what to assess and how to do it.
The easy way to pretend to measure the impact of a training program, for example, would be to report on the number of people who participate (yes, I’m picking on Ted a little for using this as an example in his post yesterday). Although that number might be useful for understanding the scope and reach of your program, alone it doesn’t tell you anything about how good the trainings are or how much impact the trainees themselves will subsequently have on your issues.
One national nonprofit we know goes to the trouble of assessing people before they begin the training and again three months later. The surveys are subtle and sophisticated – they don’t rely on simple self-reporting but probe in more oblique ways – and the result is a numeric rating for each individual indicating their level of activity and leadership. Their program evaluation, then, is based on the extent to which their trainees move up the scale as a result of the program. Perfect? Nope.  Simple? Not really, since it takes some real effort to conduct each assessment, and tracking folks down three months after the training can be time-consuming. Worthwhile? It sure seems to be, because it allows them to track the real impact of their program and to better finesse both the program itself and their marketing.
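The core of that evaluation is a before-and-after comparison. Here’s a minimal sketch of the arithmetic, assuming each trainee gets a numeric activity-and-leadership rating before the training and again at the three-month follow-up; the scale and the numbers below are invented, not from the organization described above.

```python
def average_gain(scores):
    """Mean change in rating from pre-training to the three-month follow-up.

    scores is a list of (pre, post) rating pairs, one per trainee.
    """
    gains = [post - pre for pre, post in scores]
    return sum(gains) / len(gains)

# Hypothetical (pre, post) ratings for four trainees on an invented scale.
scores = [(3, 5), (6, 7), (4, 4), (2, 6)]

print(average_gain(scores))  # 1.75
```

The program evaluation then tracks that average gain over successive cohorts – movement up the scale, not headcount, is the metric.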