Listening (to the right stuff)

Over the past few weeks, I’ve had the opportunity to work with Upwell, one of the most important and inspiring organizations around these days.

Upwell works for the ocean. They do that with “Big Listening” (more on that later) to track the global conversation about oceans. They do it with minimum viable campaigns – lean tests of what works (and doesn’t) to change and direct the ocean conversations.

And they do it by sharing everything they learn with advocates, organizations, media, scientists and everyone else that cares about oceans (which should be each of you because, you know, the world’s surface is over 70% water and that gives oceans a big leg up on the global power chart).

Clyde listens

Tracking the global conversation about oceans (or climate change or voting rights or organic agriculture or anything you can imagine) has never been more important. The information each of us gets (or can easily find) is no longer controlled by a community (or national) newspaper or TV station and its editorial board.

Should grantmakers act more like venture capitalists?

Should philanthropic foundation board members and staff act more like the venture capitalists who fund internet startups?

That’s the question our good friend Jon Stahl posed a few weeks ago. Jon’s focus was on the high level of involvement that venture capitalists often have with the companies they invest in. Lead investors typically have a seat on the board and often participate actively in the company, at least at the strategic level. Jon points out that foundation program officers, with portfolios that often run in the dozens, simply don’t have the bandwidth to engage much with their grantees.

I think it’s a great point; maybe there are ways we could refine the philanthropy model to offer grantees more support from their funders.

But the venture capital investment model has some other qualities that may or may not fit our social sector goals very well. For one thing, the VC model is designed to foster blowout success at the expense of everything else. In financial terms, a 2x ($2 returned for every $1 invested) or even 5x return isn’t very interesting; the VC model is designed to produce 10x and 100x or even larger returns.

In fact, VCs have a lot of incentive to actually kill companies in their portfolio that don’t knock it out of the park. You probably won’t get funded in the first place unless you’ve got a great idea, a great team, and a great market, but if you don’t show aggressive growth in users or revenue pretty quickly, and then sustain that growth, the odds are decent that your VC will be part of shutting you down. A typical venture fund might see half or more of its companies fail outright, thirty percent perform modestly enough that the fund gets its investment back or perhaps makes a small return, and only twenty percent do really well. (The actual numbers are tough to come by, and there’s a lot of disagreement about exactly what they are, but we know that the huge hits are pretty rare and that lots of venture capital funds actually lose money.)
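
To make the arithmetic concrete, here’s a back-of-the-envelope sketch, with invented numbers rather than real fund data, of a portfolio with roughly that distribution of outcomes:

```python
# Hypothetical 10-company fund -- illustrative numbers only, not actual VC data.
outcomes = (
    [0.0] * 5 +    # half the companies fail outright: that capital is lost
    [1.0] * 3 +    # a few return roughly the capital invested
    [10.0] * 2     # a couple deliver the outsized 10x exits
)

fund_multiple = sum(outcomes) / len(outcomes)
print(f"Fund-level return: {fund_multiple:.1f}x")  # -> 2.3x
```

If those two big winners had merely returned their capital, the same fund would come back at 0.5x and lose half its money, which is exactly why VCs push so hard for the blowouts.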

The model might make sense on issues where our most desperate need is for a few blowout successes (and where we are comfortable killing off the groups that don’t achieve this level of success). For example, it might be perfectly reasonable for the Gates Foundation to fund malaria eradication programs using a VC-style approach, hoping that one of their high-risk-high-reward investments comes up with the solution we’ve all been waiting for.

But on lots of social sector issues, activists and funders are happy – and reasonably so – with moderate, sustained success. If a VC-style approach on malaria eradication comes at the cost of stable, sustained funding for effective malaria prevention efforts, it’s probably a much less appealing strategy. In fact, those “moderate” successes only look modest by comparison to absurdly high Google-style returns.

And on many issues there probably just isn’t a knockout punch waiting to be uncovered through high-risk, entrepreneurial-style investment by philanthropic donors. Preventing extinction and recovering endangered species is just hard work, politically and ecologically; there almost certainly isn’t a fantastically successful strategy just waiting to be discovered. We ought to have more sophisticated ways of measuring outcomes, and more effective ways of rewarding nonprofits that craft and implement successful strategies, but success across lots of fields won’t look like the 1,000x return that early Facebook investors walked away with. There may be some radical advocacy innovations waiting to be uncovered, but odds are good that most of our success will come through philanthropic investments with returns that look more like the equivalent of 2x, 5x, and 10x outcomes in the investment world. And even though these numbers look small compared to the superhits, they are still huge successes: anytime a foundation invests $50,000 in a nonprofit and gets $100,000 or $250,000 worth of social change value out of the deal, we all ought to celebrate.

The VC model also shifts enormous control over the company itself to the investors. It’s one thing for a social sector funder to have detailed expectations about how their grant will be spent, and perhaps to use the size of their grants to influence organizational decisions about staffing and strategy (which itself is enough to make many nonprofits very uncomfortable). It’s something altogether different when the funders actually control the organization itself.

Finally, the idea that funders might play a more active role in managing the organizations they fund carries as many risks as it does benefits. The best program officers offer real expertise about the issues they fund, can draw on wide experience working with their grantees, and can offer a higher-level strategic vantage precisely because they aren’t in the trenches on a day-to-day basis. But even the best are still at a distance from the day-to-day work, they often don’t have much experience on the other side of the funding equation, and they can be very prone to a favorable-results bias.

In fact, while investors and entrepreneurs may not (and often don’t) share the same long-term vision, they measure results in a very consistent way: how much money the company is earning and how much it is worth. Philanthropic funders and the nonprofits they support may tend to have better alignment on long-term vision, but they rarely share a consistent and unambiguous approach to measuring outcomes. And this problem is only amplified by the strange power dynamics that characterize most grantmaker-grantee relationships. Deeper involvement by program officers in the nonprofits they fund comes with some real challenges.

I’m guessing the appeal of the VC model for Jon is mostly about the opportunity for nonprofit folks to learn from the experience and vantage of the funders they work with (not to mention the potential for funders to provide other kinds of resources to their grantees), and given how weak nonprofits usually are at mentoring and professional development, this makes a lot of sense. The trick, as is usually the case when drawing from outside models, is making sure we understand what those external models are designed to do, and then adjusting the ways we mimic and poach from them accordingly.

There are other models worth exploring, as well. Angel investors often contribute much smaller amounts but expect much lower returns, which means that a moderate success can still be a success, and the angel investment model includes a lot of room for investor involvement and support. Crowdsourced funding models, with Kickstarter as a marquee example, might offer some insights. In many ways these models look a lot like traditional membership-oriented fundraising in the nonprofit world, but as federal law expands accessibility to true crowdsourced investment we can expect to see rapid evolution in the mechanics and structure.

I agree with Jon’s basic point that we should look at the venture capital model for ideas about improving philanthropic funding. I do think, however, that the VC model in particular has some significant limitations in a social sector context. The nonprofit world, at times, goes overboard when it pulls from other sectors, missing the nuance and context and overdeveloping some particular element that seems important. But we can learn a lot, too, by paying attention to other sectors, and we’ve got a lot to gain by poaching, adapting, and testing whatever we think might help.

Doubling Attendance in One Year: A Success Story

Santa Cruz Museum of Art & History attendance numbers.

I’m an unabashed Nina Simon fan, and I love this post on her Museum 2.0 blog about their growth in visitor numbers, how they pulled off the impressive growth she describes, and their plans for next year. This is the type of candid, under the hood, here’s-what-we-did-what-worked-and-what-didn’t writing that I think we need much more of in the nonprofit world.

The “five great ways to do something” lists (guilty), the “a great example of doing it wrong” posts (guilty), the big picture trends stories (guilty) … all of these can be useful, but often I find the posts that lay it all out there – good and bad, lessons learned, what they’re going to try next – to be the most helpful. There isn’t anything else like it: real social sector folks describing concretely and candidly what they actually did and what they learned.

We blogged about another of Nina’s terrific ‘lessons learned’ posts back in May in case you missed it (“Year One as a Museum Director … Survived!”).

Data Informed, Not Data Driven

This Adam Mosseri talk about how Facebook uses data to make decisions is a little dated but his observations are still extremely useful. His key insight: clear metrics and strong data-driven feedback loops can be powerful, but they have their limits as well. Facebook often uses solid empirical data to make decisions about their website design, their products, and the workflows that users experience on Facebook. They can test two versions of a website design, for example, and if design option A produces higher engagement than design option B it’s an easy choice.

But Mosseri also explains how an excessive fidelity to data-driven decisions can privilege incremental and uninspired changes at the expense of innovation and ambitious thinking. Facebook is sometimes aiming not only for high levels of engagement but for more fundamental changes in the way people interact with it and with each other. Facebook’s Timeline, for instance, inspired anger and fierce resistance among many Facebook users and sharp derision from the press, and a conventional data-driven decision process would have killed it before it got very far. Yet Timeline is now a central and deeply valued part of the Facebook experience.

Most nonprofits don’t seem to rely much on data in their decision-making about their websites, email newsletters, programs, and fundraising efforts, and when they do, those efforts often aren’t carefully crafted and executed (some do this well, of course, but for every one that does there are many, many more that don’t). The remedy isn’t to swap all the intuitive and qualitative decision-making for analytic feedback loops, but to find a good balance. “Data informed, not data driven,” as Mosseri says.

Measurement and support for community, not you

Want to stir the pot amongst social media campaigners and their managers? Start a conversation with them (preferably when you’re all in the same room) about how they measure social media efforts. “Tell me,” for instance, “how you show the ROI of this work.”

Measuring social media

Perhaps you’ll enter a coherent dialog on social media ROI across the organization (though we doubt it). If so, chances are good that the metrics discussed will be things like the number of fans/followers/likes, the number of comments, “people listening,” retweets and shares. You may get more programmatic correlations, such as the amount of money raised through Facebook or the number of people who clicked a Pinterest link, came to the website and subscribed to the email list.

These numbers, however, say little about the value our work is adding to the life of the person at the other end of that like.

Most metrics are about us

The thing is, our social media metrics (heck, even our email and web metrics) are almost entirely about us, the organization. We assess our value and power by the number of fans, followers, and subscribers, as well as letters signed and cash in the door. And this informs our resource planning, staffing, and program evaluation.

This is not terrible (at least not the cash in the door part).  These are informative data points if used in context.

But these are one-way relationship measurements. It’s as though we’re just in the business of selling shirts and all that matters is getting more customers in the door so we can sell more shirts.

But if I’m interested in sticking around as a company, I really want to know what people think of my shirts. How did the shirt fit? Did it shrink in the wash? Did it fall apart? Do you love it? Will you buy another one? Has it done the job?

How do nonprofits measure the value they are bringing to people’s lives? How do we get beyond discussions of tactics for getting more likes, retweets and impressions and move to learning about what is impacting people and creating the power to change communities? 

We must be able to clearly state how what we do relates to people’s lives. We need to understand precisely how our work matters to people before we can measure how we have helped people change their lives.

If you provide a direct service such as meals to the elderly, job training, or a bed for the homeless then you can measure the amount of such service provided. When it comes to social media, look for measures that tie your use of social media as closely as possible to that service. How many people knew about or took advantage of help based on social media? Look at social media metrics but also at client data. How did your organization’s use of social media affect use of services by your clients or audience?

Organizations that provide primarily advocacy services have a trickier time measuring benefit to their audience, and as a result they rely mostly on indirect metrics. They infer from likes and shares that the audience is valuing their social media (or not). These organizations, too, should directly and regularly query followers and supporters about the impact of social media on their actions and views.

Culture of community support

But advocacy organizations could also more directly seek guidance from social media about what advocates need, want, and could use to help them be more effective. Social media (along with email and the web) is an opportunity to have a direct conversation with the people who matter most to your work (and, no, we’re not talking about legislators or even your staff): your members, donors and activists.

Here are some guiding principles for helping your community:

  • Be deliberate about asking them what they need to be better advocates;
  • Provide what they ask for and test it, measure results and share feedback;
  • Be transparent with your community: share your intent and learning openly;
  • Identify and track people as they become more engaged (as well as the actions that they take to get there); and
  • Create a culture that encourages sharing advocacy stories, not just rants or odes of support. Instead of “great job” or “this guy stinks” we should strive to hear “this is what I did and here’s what happened.” (And if/when we get those stories, thank the people that share them.)

Focusing on how the community can become better advocates and supporters of one another will build and spread power, create longer lasting change, and take advantage of the interactive nature of current communications channels.

The Pitfalls of A/B Testing and Benchmarking

Improvement begins with measurement, but the ruler can also limit your audacity to try wildly new approaches (photo by Flickr user Thomas Favre-Bulle).

Google is famous for, among other things, crafting a deep, rich culture of A/B testing, the process of comparing the performance of two versions of a web site (or some other output) that differ in a single respect.

The benefit: changes to a web site or some other user interface are governed by real-world user behavior. If you can determine that your email newsletter signup button performs better with the label “Don’t Miss Out” instead of “Subscribe,” well, that’s an easy design change to make.
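
Of course, “performs better” only means something if the difference is bigger than random noise. Here’s a minimal sketch of how you might check that with a standard two-proportion z-test, using invented conversion numbers (the button labels are just the example above, not results from a real test):

```python
# A two-proportion z-test sketch with invented numbers -- not data from a real test.
from math import sqrt
from statistics import NormalDist

def ab_p_value(conv_a: int, visitors_a: int, conv_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# "Subscribe" converts 3.0% of 4,000 visitors; "Don't Miss Out" converts 4.1%.
p = ab_p_value(conv_a=120, visitors_a=4000, conv_b=164, visitors_b=4000)
print(f"p-value: {p:.3f}")  # well under 0.05, so the lift is unlikely to be chance
```

A large p-value would simply mean you haven’t run enough visitors through the test yet to tell the two labels apart.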

The practice of benchmarking – using industry standards or averages as a point of comparison for your own performance – has some strong similarities to A/B testing. It’s an analytic tool that helps frame and drive performance-based testing and iteration. The comparison of your organization’s performance to industry benchmarks (e.g., email open rates, average donation value on a fundraising drive) provides the basis for a feedback loop.

The two practices – A/B testing and benchmarking – share a hazard, however. Because a culture of A/B testing is driven by real-time empirical results, and because it generally depends on comparisons between two options that are identical in every respect but one (the discrete element that you are testing), it privileges modest, incremental changes at the expense of audacious leaps.

To use a now-classic business comparison: while Google lives and breathes A/B testing, and constantly refines its way to small performance improvements, the Steve Jobs-era Apple eschewed consumer testing, assuming (with considerable success) that the consumer doesn’t know what it wants and actually requires an audacious company like Apple to redefine product categories altogether.

Similarly, if your point of reference is a collection of industry standards, you are more likely to aim for and be satisfied with performance that meets those standards. The industry benchmarks, like the incremental change model that undergirds A/B testing, may actually constrain your creativity and ambitiousness, impeding your ability to think audaciously about accomplishing something fundamentally different than the other players in your ecosystem, or accomplishing your goals in a profoundly different way.

The implication isn’t that you should steer clear of A/B testing or benchmarking. Both are powerful tools that can help nonprofits focus, refine, and learn more quickly. But you should be aware of the hazards, and make sure that, even as you improve your iterative cycles, you are also protecting your ability to think big and think different about the work your organization does.

And if you want to dive in, there are a ton of great resources on the web, including a series of posts on A/B testing by the 37Signals guys (Part 1, Part 2, and Part 3), the “Ultimate Guide to A/B Testing” on SmashingMagazine, an A/B testing primer on A List Apart, Beth Kanter’s explanation of benchmarking, and the 2012 Nonprofit Social Network Report.

Our First Book Launch: The Nimble Nonprofit Hits the Streets (and Barnes & Noble)

The Nimble Nonprofit is now available at Barnes & Noble ($4.99)!

Yesterday Trey and I launched our first book, The Nimble Nonprofit: An Unconventional Guide to Sustaining and Growing Your Nonprofit, with a ton of help from our Bright+3 colleague Ted Fickes.

We’re only a day into it, but it’s been great fun so far: a ton of awesome reviews on Amazon, a bunch of great Twitter traffic, and even an unsolicited and really favorable full-on book review (thanks Bonnie Cranmer!).

In addition, I now have a “Jacob Smith” author page on Amazon. I wasn’t expecting much when I logged in to set it up, but I must not have paid author pages much attention previously because it turns out they’re actually set up pretty well. In addition to what you’d expect (profile, photo, etc.), they also allow you to bring in a Twitter feed and an RSS feed, which is a nice touch.

And great news if you are a Nook fan: The Nimble Nonprofit is now available at Barnes & Noble!

The book is in review at Apple, and as soon as it launches there we’ll announce it.

We’re thrilled to send our little book out into the world, and we welcome your comments, critiques, and thoughts … send them our way:

  • email: authors@nimblenonprofit.com
  • Twitter: #nimblenpo
  • web: http://brightplus3.com/

The First Bright+3 Book Launch: The Nimble Nonprofit

I am thrilled to announce the launch of The Nimble Nonprofit: An Unconventional Guide to Sustaining and Growing Your Nonprofit.

The nonprofit world truly is in a state of flux. Much of what used to work doesn’t anymore. The need to invest in growing ass-kicking staff and to develop sustained organizational capacity has never been greater, yet the difficulties of doing so are growing as quickly as the need. In The Nimble Nonprofit we cover a wide range of what we believe are critical challenges facing the nonprofit sector:

  • cultivating a high-impact innovative organizational culture;
  • building and sustaining a great team;
  • staying focused and productive;
  • optimizing your board of directors;
  • creating lasting relationships with foundations, donors, and members;
  • remaining agile and open; and
  • growing and sustaining a nimble, impactful organization.

We mean for The Nimble Nonprofit to be a guide – an unconventional, irreverent, and pragmatic guide – to succeeding in a nonprofit leadership role, and to tackling this incredibly challenging nonprofit environment. We aimed for a conversational, practical, candid, and quick read instead of a deep dive. If you want to immerse yourself in building a great membership program, or recruiting board members, or writing by-laws, there are plenty of books that cover the terrain (and some of them are quite good).

But if you want the no-nonsense, convention-challenging, clutter-cutting guide to the info you really, really need to know about sustaining and growing a nonprofit, well, we hope you’ll check out The Nimble Nonprofit.

This is our first book, and the publishing industry is in a state of disarray, so – following the spirit in which we wrote the book – we are taking an unconventional path. We decided to publish strictly as an e-book, and we decided to self-publish (with a bunch of help from Ted here at Bright+3). We are offering the book through the big three e-bookstores (Amazon, Apple, and Barnes & Noble, and we might add a few more to the mix), and we’ve priced the book at $4.99, which is much less expensive than the vast array of other nonprofit books.

As of right now, the book is available on Amazon (and it’ll hit the other two stores shortly). If you’d like to score a copy of The Nimble Nonprofit and enjoy reading it on your Kindle, iPad, or another tablet, jump on Amazon and grab it (did I mention it’s only $4.99?).

And, because our main goal is contributing to the conversations around these critical questions, we are also making a .pdf version of the book available for free.

We suspect that most readers will agree with some of what we argue and disagree with other parts, and because we challenge much of the conventional wisdom about building strong nonprofits, we’re pretty sure that some folks will disagree with a lot of what we write. And we look forward to the conversations. Please send us your thoughts, critiques, comments, and ideas:

  • email: authors@nimblenonprofit.com
  • Twitter: #nimblenpo
  • web: http://brightplus3.com/

Tell us where you think we’re wrong and where we’ve hit the nail on the head, and please share with us other examples of nonprofits doing a great job of tackling these challenges and where they are just getting it wrong.

Happy reading -

Jacob

(P.S. The Nimble Nonprofit is available right now on Amazon.)

Building email lists one opt-in address at a time

How organizations build email lists is no small issue. Many groups are investing significant resources in staff, consulting, advertising, events and vendor contracts (in particular partnering with Care2, Change.org and similar communities) to increase their list size.

Comparing open rates of opt-in and opt-out subscribers. Source: JeanneJennings.com via ClickZ.

An issue that often comes up as these programs take off is whether to use “opt-in” or “opt-out.” What does this mean? Opt-in means that a new subscriber must make a proactive decision to join an email list by clicking a checkbox, filling out a form or, in the case of double opt-in, replying to a confirmation email – essentially telling you twice that they want to be on your email list. Opt-out happens when someone is added to an email list without clear prior acknowledgment and must actively opt out if they want off the list.

We were intrigued the other day to come across a post on ClickZ looking at the results of over 300 million emails sent to sets of subscribers added to lists via opt-in and opt-out methods. Email professionals will generally discourage organizations from using opt-out methods (though, as discussed below, typical subscription practices aren’t far from opt-out), and many organizations use opt-out frequently through email appends. And opt-out is pretty much assumed in political campaign marketing, where lists are bought, sold, traded and given away all the time.

Opt-out is cheaper than opt-in. You may put money up front to rent or buy an email list, or to run an email append against your mailing list or other house file, but dollar for dollar it costs less than opt-in. Generally, the more subscribers need to do to indicate their interest in subscribing, the less likely they are to subscribe.
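
One way to weigh that trade-off is to look at cost per engaged subscriber rather than cost per address. Here’s a quick sketch of the arithmetic with made-up acquisition costs and open rates (not the ClickZ figures):

```python
# Back-of-the-envelope comparison with made-up figures (not the ClickZ data):
# what does each *engaged* subscriber cost once open rates are factored in?

def cost_per_opener(cost_per_address: float, open_rate: float) -> float:
    """Acquisition cost divided by the share of addresses that actually open."""
    return cost_per_address / open_rate

opt_in = cost_per_opener(cost_per_address=2.00, open_rate=0.20)    # $10.00 per opener
opt_out = cost_per_opener(cost_per_address=0.50, open_rate=0.04)   # $12.50 per opener

print(f"Opt-in:  ${opt_in:.2f} per opener")
print(f"Opt-out: ${opt_out:.2f} per opener")
```

With these particular invented numbers the “cheaper” opt-out list actually costs more per person who opens; the point isn’t the specific figures, but that acquisition cost only means something alongside engagement.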

Facebook is giving people what they want (engaging content, that is)

Giving people what they want. This is how I would sum up news from EdgeRank Checker that Facebook user engagement is at least 70% lower with posts to Facebook through 3rd party applications (like Hootsuite). We’re not going to get into details on the methodology of the report. Check the original post for that. Allyson Kapin does a great job running through the report and its implications over on Frogloop.

Infographic looking at lower Facebook engagement on posts from third party applications.

The idea that posts to Facebook from third party applications get less visibility on Facebook is not new. But this study is the most conclusive look yet at real data. What we want to look at here is the moral of this story for organizations.

What you, as a Facebook user, see on Facebook is not simply a chronological stream of everything posted by every one of your friends and the pages you have liked.

When you visit Facebook you are seeing what Facebook shows you. It’s their network, after all. And what they are showing you is what they think you are most likely to be interested in reading as judged through past likes, comments, wall posts and tags.
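
The ranking mechanics Facebook described publicly around this time, usually referred to as EdgeRank, boil down to scoring each story by roughly affinity × content-type weight × time decay. Here’s a toy sketch of that kind of scoring with invented weights; Facebook’s actual algorithm is far more complicated:

```python
from dataclasses import dataclass
import time

@dataclass
class Story:
    author_affinity: float   # how often you interact with this friend or page (0-1)
    type_weight: float       # e.g. photos weighted above links, links above plain statuses
    posted_at: float         # Unix timestamp of the post

def edgerank_style_score(story: Story, now: float, half_life_hours: float = 6.0) -> float:
    """Toy score: affinity x content-type weight x exponential time decay."""
    age_hours = (now - story.posted_at) / 3600
    decay = 0.5 ** (age_hours / half_life_hours)
    return story.author_affinity * story.type_weight * decay

now = time.time()
stories = [
    Story(author_affinity=0.9, type_weight=1.5, posted_at=now - 8 * 3600),  # close friend's photo, 8h old
    Story(author_affinity=0.2, type_weight=1.0, posted_at=now - 1 * 3600),  # rarely-read page's link, 1h old
]
feed = sorted(stories, key=lambda s: edgerank_style_score(s, now), reverse=True)
```

With these made-up inputs, a close friend’s eight-hour-old photo still outranks a fresher link from a page you rarely interact with, which is the whole point of the filter.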

Most people have figured out that there is at least some difference between “Top News” and “Most Recent” Facebook streams (though if you have figured out why you see what you do on the Facebook mobile app let us know – that one seems inexplicable at times).

Facebook wants you to see Top News because it believes you will be more likely to be interested and stay on the page. Facebook wants you to be happy.

This, friends, is the world presented to you through an algorithmic filter. Facebook figured out that people are happier seeing stuff they like.