Content is a forest. Don’t just count the trees.

One piece of content strategy is knowing why, when, how (and if) you need to post your content in multiple places. That can seem like a lot of extra work. A version of this topic popped up last week in a Slack community for nonprofit/NGO folks I’m involved with. Someone posted this question:

Does anyone have any useful insight for blogging? I’m specifically looking at posting to our native website and cross-posting to other sites – Medium, Linkedin, WordPress, etc. Obviously each has their own strengths but it feels like overkill to post to all.

I love this. Where to put content, when, why, and what it should look like comes up in every comms and digital program – regardless of whether there’s a clear content strategy. Every organization sorts this out. And the good ones ask the question over and over again. Talking through it presents a great opportunity to dig into content strategy, staffing, planning, editorial style, marketing and more.

Isn’t putting our content in more than one place just extra work?

Pushing content into multiple channels is probably already happening. Your blog posts, articles, reports, and action alerts are all finding their way into social media posts.

A blog post you write today probably also has an accompanying Facebook post with a headline, text and photo optimized for Facebook engagement. It has a 200 or so character tweet and photo. Maybe it also has a one minute video – an interview with a staff member about the story that can go on YouTube or Instagram.

Doing this much is almost taken for granted. You want to raise awareness of the post and drive clicks to your site, so you create little versions of the story that entice people to click through to find out more.

Measure the forest, not just the trees.

Most of us focus on one featured piece of content – usually a blog post or other page on a website. We’re constantly planting new trees in our content forest. We care for each tree – at least for a day or two – by telling everyone “hey, go look at the tree.” We measure page views and Facebook likes, Instagram followers and retweets.

We’re often answering the “should we also put our content over there?” question using a cost-benefit equation that can’t be defined. Of course we’re going to create the main post or piece of content. We have to do that. (You have to have at least one tree, right?)

How do we know if a tree on the website, on Medium, or on LinkedIn is worth it?

What if we could measure the value of the forest instead of each tree? We know a healthy forest needs different kinds of trees. Some live. Some don’t.

Some trees serve as homes for squirrels and birds. Others produce twigs eaten by deer. Some create shade that keeps things cool and others drop leaves that replenish the forest floor.

Each person interacting with a story or piece of content (a tree) is getting something special from it. We just don’t have great ways of measuring individual value. But if someone important to us gets all their value from a Facebook tree then we better make sure that all the content they need is on Facebook. Other people might be email newsletter and Facebook consumers. Others get their nutrition from Medium. And maybe a little ego-soothing LinkedIn first thing in the morning.

A thriving forest is alive, evolving and growing. So is your content.

There’s no one way to care for a healthy forest. And what works today may not be worth doing a year from now. Know how people engage with content. Don’t just optimize the website for stickiness or assume you can create great Facebook posts that get people to go read the full article. Consider the people who spend most of their time on Facebook and make sure they get what you need them to get while there. If you can show that your people are on Medium, then don’t look at that as extra work – look at it as necessary and do it well.

Review your approach regularly. Don’t be afraid to shift gears, test, or put more time into one part of the forest for a while.

Measuring the forest.

Figure out how to measure the forest, not the individual trees. Don’t rely on page views, clicks, opens and raw audience size. That’s all great stuff. Do measure it. But don’t base your decisions about how to spend your time on it.

Here’s an idea: ask people qualitative questions about your content and its impact on their work, their conversations with family or friends, and their ability to take meaningful action. Ask them in January. Then ask those same people again in May and October. Do they recall the content? Do they remember where they found it? Did they take an action or make a donation as a result? Did they change their own economic or political behavior? Did they send it to someone? How and why?

Many of these actions don’t happen at grand scale. The numbers may not wow you, but tangible measures of action, empathy and engagement can be the difference between content that’s distributed and content that has impact. And that’s a helpful number to pin down when trying to define the difference between content strategy and content production.

Do you create content, measure it and still have no idea if content matters?

You are not alone if you answered yes (or even maybe/I’m not sure) to the question in the headline above. There is a disconnect along the creation, measurement, impact and learning path when it comes to content.

We set up The Content Survey to help you better understand how to develop and measure content that drives social change. Here are some preliminary results (and check out the slides below, too).

Content Survey - Preliminary Results

Who Took the Content Survey

67 organizations participated in the survey. Of that group…

  • 30 are small groups with under 20 staff.
  • 9 are mid-sized organizations with 20 – 50 staff members.
  • 11 are large groups of 51 – 100 staff.
  • 17 are very large organizations with over 100 staff.

The leading ways in which individuals saw their organization achieving its mission were direct advocacy and education. Groups also use research, community organizing, policy making and community service to achieve their mission.

There’s no clear correlation between organization size and having a written content strategy. Twenty-three percent of small groups have a content strategy — same as large organizations.

Overall, only one in four groups report having a content strategy.
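
Read together, the segment counts and the one-in-four figure are easy to sanity-check. Here’s a quick tally (the percentages are computed here, not reported in the survey):

```python
# Participant counts by organization size, as reported above.
segments = {
    "small (under 20 staff)": 30,
    "mid-sized (20-50 staff)": 9,
    "large (51-100 staff)": 11,
    "very large (100+ staff)": 17,
}

total = sum(segments.values())  # 67 organizations in all

for name, count in segments.items():
    print(f"{name}: {count} of {total} ({count / total:.0%})")

# "One in four" groups with a written content strategy works out to
# roughly 17 of the 67 participants.
with_strategy = round(total * 0.25)
print(f"~{with_strategy} groups report having a content strategy")
```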

Does Content Strategy = More Powerful Content?

Continue reading “Do you create content, measure it and still have no idea if content matters?”

Take The Content Survey

Chances are, if you’re working in a nonprofit or campaign you’re spending a good chunk of time writing, editing, shooting photos and video, or maybe commenting on ad copy or Facebook post language. Maybe you’re putting together language for the next email newsletter, activist alert or infographic for that new report your research team put together.

Click to take The Content Survey.

All that content we’re putting out in the world is the atoms of advocacy — the bits and pieces that form the dots, build the networks and create change.

Take The Content Survey
from Bright+3 and Echo & Co.

But are we able to make sense of how all the blog posts, reports, emails, videos, infographics and (of course) clever animated gifs are advancing our work? That’s a question we’re continually wrestling with in our research and client work.

We wanted to dig a little deeper, and last month we released a small survey for the community — The (mini) Content Survey. It was a (very) brief questionnaire to get a sense of how effective people felt their content was and how they assess that effectiveness. The most valuable information came as open-ended responses to the question “What’s the one thing you wish you knew about your content (but don’t)?”

Are we smart enough about psychology to have the right context when creating content?

And another:

The extent to which people are learning from content. It’s hard to measure the impact of “educational” content.

Most boiled down to how we know if content is motivating action:

What is the reader’s emotional reaction and what does that move them to do?

These are difficult metrics to gather and evaluate (but it can be done). The question is – do these metrics help (and do we have the resources to learn and act on them)?

Now we’re taking The Content Survey a step further. We teamed up with Echo & Co. to launch and analyze a more complete set of questions – though just 10 questions in all. Next month, we’ll begin reporting back on our findings to the community at NTC in Austin (and for anyone who asks and wants to learn more).

To be clear, we’re looking for feedback from anyone involved in the content creation process so if you’re a digital strategist, fundraiser, organizer or leader, please take a few minutes to take the survey here:

The Content Survey

We’d love it if you would take The Content Survey. Ten questions. Thanks!

What DO you know about your content?

Content is everywhere. For starters: blog and Facebook posts, tweets and webinars, infographics and Slideshare presentations, online and print advertisements, email newsletters, action alerts and fundraising appeals, photo galleries and research reports, annual reports, magazines and books. That’s just a start.

And let’s not forget the dozens or hundreds (or sometimes thousands) more web pages that tell people who you are, how you work, why you do what you do and (super important!) why the reader should support you with their own time and money.

Meanwhile, these readers (donors, supporters, activists, media, legislators and others) are (we hope) relentlessly reading, sharing and taking action through constant connections to smartphones and tablets.

What’s missing? A clear sense of what works, when, why and a plan for how to move to that spot. When we talk with executive directors, fundraisers, communicators and digital strategists, all report that their organizations are spending more time and money than ever on content. They wonder if they’re doing the right thing (and how to tell).

What works and why? We’re finding out, starting with the (mini) content survey. Take it now.

What Works and Why?

It’s time to start answering that question. We’re starting with a survey (two surveys, really) that brings together data from across the nonprofit sector about content spending, staffing and strategy, as well as goals, methods and metrics. We’ll also be talking with leaders and practitioners to collect stories that give context to the data. When we find patterns in the data that help explain success (or failure), we’ll dive deeper and hear from people doing the work.

Wait. Two surveys? Sure. The first is a mini-survey. Just three questions to help scratch the surface and identify some key questions YOU (not just us) have.

A survey, after all, should be about what the user needs. Just like great content.

Take the (mini) content survey now. Thanks!

Pot and Obamacare beat out real conversations about health in Colorado

The annual Colorado Health Symposium kicked off earlier today. The Symposium has become the main gathering of people in the state and region working on a wide range of health issues (and it’s likely a big event on the national scene). Colorado Health Foundation staff organizing the event do a great job using YouTube, the web, Twitter and Facebook to engage people during the event. The content is great. Follow along on Twitter at #14CHS.

The theme of this year’s Colorado Health Symposium is Health Transformed: The Power of Engagement. Today’s discussions about how to engage people in health conversations focused on meeting people where they are, communicating on their terms and using language that fits the community you’re trying to reach.

That’s a great place to focus attention. Advocates that don’t engage their audience aren’t doing their job.

But it got me to thinking about how people are talking about health in Colorado now (and how much they’re talking about it). What ARE people talking about when they talk about health in Colorado? What else are they talking about? What can we learn about the state of public engagement on health by looking at recent online conversations?

The chart below uses Topsy to analyze the use of three sets of keywords on Twitter in the past month: (1) tweets that have the words “health” and “Colorado” in them; (2) tweets that have the words “healthcare” and “Colorado” in them; and (3) tweets that have the words “pot” and “Colorado” in them.

The “colorado health” conversation is dwarfed by mentions of pot. Talk of health in the state is dominated by debates over Obamacare.

None of these are huge conversations over the past month and, clearly, the conversation about each of the topics represented by these terms is bigger than the numbers in this chart. We’re only looking for these specific words, after all. And we’re only looking at Twitter in the past month. This is just one snapshot, not an extensive analysis. Hop on Topsy to play with these or other terms.
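
Topsy has since shut down, but the counting behind a chart like this is simple to reproduce against any tweet archive you have access to. A minimal sketch (the sample tweets and the `count_mentions` helper are made up for illustration):

```python
from collections import Counter

# The three keyword sets behind the chart.
keyword_pairs = [
    ("health", "colorado"),
    ("healthcare", "colorado"),
    ("pot", "colorado"),
]

def count_mentions(tweets, pairs):
    """Count tweets containing every word in each keyword pair."""
    counts = Counter()
    for text in tweets:
        words = set(text.lower().split())
        for pair in pairs:
            if all(word in words for word in pair):
                counts[" + ".join(pair)] += 1
    return counts

# Made-up sample tweets standing in for a real archive or API pull.
sample = [
    "Colorado health exchange billing issues in the news again",
    "Legal pot sales in Colorado hit a new high",
    "Healthcare enrollment opens in Colorado next month",
]
print(count_mentions(sample, keyword_pairs))
```

Run against a month of real tweets, the same tally produces the relative volumes shown in the chart.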

But the chart is telling. The biggest “spike” in Colorado health conversation happened on July 20th as the result of a Denver Post story about billing issues with healthcare plans sold on the Colorado exchange. This story has little to do with health but is instead tied to the continual political debate over Obamacare/the Affordable Care Act. It’s probably no surprise to anyone that politically charged conversations about health insurance laws displace actual health conversation.

It bears further analysis but what’s potentially concerning is that health conversation – and the ability of the health community to engage real people about health issues online – is being confused and displaced by the healthcare debate. A few thoughts on what this might mean:

  • Online channels certainly aren’t the only (and maybe not the best) place to engage people on health issues. Many people in key audiences may not be online, or at least not on Twitter, though I’m guessing many are on Facebook and other networks. The health conversation, like many others, needs many points of contact.
  • Real human health stories need more (and stronger) online voices to compete with the healthcare policy debate. Access to healthcare services is a huge part of good health but politicizing it is polarizing the discussion and making it hard to have real conversations about other aspects of good health (nutrition, food choices/prices/access, school lunches, active children and more).
  • More analysis of the health conversation wouldn’t hurt. How are real people talking about health (and healthcare) in Colorado? And the nation? There’s a wealth of data out there on the social networks waiting to be scooped up.

And what about the pot conversation in Colorado? Well, people use the word “pot” in connection with Colorado much more often than they do health or healthcare. Welcome to Colorado! We threw pot into this chart mostly for comparison’s sake. Seems that health and healthcare should be bigger conversations than pot. Something to aspire to (and maybe learn from) going forward.

As advocates, if we want to engage people online we need to know what they’re talking about and how they’re talking about it. Otherwise, we may be talking to ourselves.

Listening (to the right stuff)

Over the past few weeks, I’ve had the opportunity to work with Upwell, one of the most important and inspiring organizations around these days.

Upwell works for the ocean. They do that with “Big Listening” (more on that later) to track the global conversation about oceans. They do it with minimum viable campaigns – lean tests of what works (and doesn’t) to change and direct the ocean conversations.

And they do it by sharing everything they learn with advocates, organizations, media, scientists and everyone else that cares about oceans (which should be each of you because, you know, the world’s surface is over 70% water and that gives oceans a big leg up on the global power chart).

Clyde listens

Tracking the global conversation about oceans (or climate change or voting rights or organic agriculture or anything you can imagine) has never been more important. The information each of us gets (or can easily find) is no longer controlled by a community (or national) newspaper or TV station and its editorial board.
Continue reading “Listening (to the right stuff)”

Should grantmakers act more like venture capitalists?

Should philanthropic foundation board members and staff act more like the venture capitalists who fund internet startups?

That’s the question our good friend Jon Stahl posed a few weeks ago. Jon’s focus was on the high level of involvement that venture capitalists often have with the companies they invest in. Lead investors typically have a seat on the board and often participate actively in the company, at least at the strategic level. Jon points out that foundation program officers, with portfolios that often run in the dozens, simply don’t have the bandwidth to engage much with their grantees.

I think it’s a great point; maybe there are ways we could refine the philanthropy model to offer grantees more support from their funders.

But the venture capital investment model has some other qualities that may or may not fit our social sector goals very well. For one thing, the VC model is designed to foster blowout success at the expense of everything else. In financial terms, a 2x ($2 returned for every $1 invested) or even 5x return isn’t very interesting; the VC model is designed to produce 10x and 100x or even larger returns.

In fact, VCs have a lot of incentive to actually kill companies in their portfolio that don’t knock it out of the park. You probably won’t get funded in the first place unless you’ve got a great idea, a great team, and a great market, but if you don’t show aggressive growth in users or revenue pretty quickly, and then sustain that growth, the odds are decent that your VC will actually be part of shutting you down. A typical venture fund might see half or more of its companies fail outright, thirty percent perform modestly enough that the fund gets its investment back or perhaps makes a small return, and only twenty percent do really well. (The actual numbers are tough to come by, and there’s a lot of disagreement about exactly what they are, but we know that the huge hits are pretty rare and that lots of venture capital funds actually lose money.)
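
To see how much the rare hits matter to that math, here’s a back-of-the-envelope expected-return calculation. The outcome buckets and multiples are illustrative assumptions, not sourced data:

```python
# (probability, return multiple) for a stylized venture portfolio.
outcomes = [
    (0.50, 0.0),   # roughly half fail outright: 0x
    (0.30, 1.0),   # ~30% return capital or a small gain: ~1x
    (0.15, 3.0),   # most "wins" are modest: ~3x
    (0.05, 20.0),  # the rare blowout that carries the fund: ~20x
]

assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1

expected_multiple = sum(p * m for p, m in outcomes)
print(f"Expected return: {expected_multiple:.2f}x invested capital")  # 1.75x
```

Under these made-up numbers the fund returns 1.75x overall, and the 5% blowout bucket alone contributes 1.0x of it – which is exactly why funds push so hard for (and cull everything short of) the outliers.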

The model might make sense on issues where our most desperate need is for a few blowout successes (and where we are comfortable killing off the groups that don’t achieve this level of success). For example, it might be perfectly reasonable for the Gates Foundation to fund malaria eradication programs using a VC-style approach, hoping that one of their high-risk-high-reward investments comes up with the solution we’ve all been waiting for.

But on lots of social sector issues, activists and funders are happy – and reasonably so – with moderate, sustained success. If a VC-style approach on malaria eradication comes at the cost of stable, sustained funding for effective malaria prevention efforts, it’s probably a much less appealing strategy. In fact, those “moderate” successes only look modest by comparison to absurdly high Google-style returns.

And on many issues there probably just isn’t a knockout punch waiting to be uncovered through high-risk entrepreneurial-style investment by philanthropic donors. Preventing extinction and recovering endangered species is just hard work, politically and ecologically; there almost certainly isn’t a fantastically successful strategy just waiting to be discovered. We ought to have more sophisticated ways of measuring outcomes, and more effective ways of rewarding nonprofits that craft and implement successful strategies, but success across lots of fields won’t look like the 1,000x return that early Facebook investors walked away with. There may be some radical advocacy innovations waiting to be uncovered, but odds are good that most of our success will come through philanthropic investments with returns that look more like the equivalent of 2x, 5x, and 10x outcomes in the investment world. And even though these numbers look small compared to the superhits, they are still huge successes: anytime a foundation invests $50,000 in a nonprofit and gets $100,000 or $250,000 worth of social change value out of the deal, we all ought to celebrate.

The VC model also shifts enormous control over the company itself to the investors. It’s one thing for a social sector funder to have detailed expectations about how their grant will be spent, and perhaps to use the size of their grants to influence organizational decisions about staffing and strategy (which itself is enough to make many nonprofits very uncomfortable). It’s something altogether different when the funders actually control the organization itself.

Finally, the idea that funders might play a more active role in managing the organizations they fund carries as many risks as it does benefits. The best program officers offer real expertise about the issues they fund, can draw on wide experience working with the nonprofits they fund, and can offer a higher-level strategic vantage precisely because they aren’t in the trenches on a day-to-day basis. But even the best are still at a distance from the day-to-day work, they often don’t have much experience on the other side of the funding equation, and they can be very prone to a favorable-results bias.

In fact, while investors and entrepreneurs may not (and often don’t) share the same long-term vision, they measure results in a very consistent way: how much money is this company earning and how much is it worth? Philanthropic funders and the nonprofits they support may tend to have better alignment on long-term vision, but they rarely share a consistent and unambiguous approach to measuring outcomes. And this problem is only amplified by the strange power dynamics that characterize most grantmaker-grantee relationships. Deeper involvement by program officers in the nonprofits they fund comes with some real challenges.

I’m guessing the appeal of the VC model for Jon is mostly around the opportunities for nonprofit folks to learn from the experience and vantage of the funders they work with (not to mention the potential for funders to provide other kinds of resources to their grantees), and given how weak nonprofits usually are at mentoring and professional development, this makes a lot of sense. The trick, as is usually the case when drawing from outside models, is making sure we understand what those external models are designed to do and adjust the ways we mimic and poach from them accordingly.

There are other models worth exploring, as well. Angel investors often contribute much smaller amounts but expect much lower returns, which means that a moderate success can still be a success, and the angel investment model includes a lot of room for investor involvement and support. Crowdsourced funding models, with Kickstarter as a marquee example, might offer some insights. In many ways these models look a lot like traditional membership-oriented fundraising in the nonprofit world, but as federal law expands accessibility to true crowdsourced investment we can expect to see rapid evolution in the mechanics and structure.

I agree with Jon’s basic point that we should look at the venture capital model for ideas about improving philanthropic funding. I do think, however, that the VC model in particular has some significant limitations in a social sector context. The nonprofit world, at times, goes overboard when it pulls from other sectors, missing the nuance and context and overdeveloping some particular element that seems important. But we can learn a lot, too, by paying attention to other sectors, and we’ve got a lot to gain by poaching, adapting, and testing whatever we think might help.

Doubling Attendance in One Year: A Success Story

Santa Cruz Museum of Art & History attendance numbers.
I’m an unabashed Nina Simon fan, and I love this post on her Museum 2.0 blog about their growth in visitor numbers, how they pulled off the impressive growth she describes, and their plans for next year. This is the type of candid, under the hood, here’s-what-we-did-what-worked-and-what-didn’t writing that I think we need much more of in the nonprofit world.

The “five great ways to do something” lists (guilty), the “a great example of doing it wrong” posts (guilty), the big picture trends stories (guilty) … all of these can be useful, but often I find the posts that lay it all out there – good and bad, lessons learned, what they’re going to try next – to be the most helpful. There isn’t anything else like it: real social sector folks describing concretely and candidly what they actually did and what they learned.

We blogged about another of Nina’s terrific ‘lessons learned’ posts back in May in case you missed it (“Year One as a Museum Director … Survived!”).

Data Informed, Not Data Driven

This Adam Mosseri talk about how Facebook uses data to make decisions is a little dated but his observations are still extremely useful. His key insight: clear metrics and strong data-driven feedback loops can be powerful, but they have their limits as well. Facebook often uses solid empirical data to make decisions about their website design, their products, and the workflows that users experience on Facebook. They can test two versions of a website design, for example, and if design option A produces higher engagement than design option B it’s an easy choice.
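
That “easy choice” usually rests on a two-proportion comparison like the following sketch (stdlib only; the visitor and engagement numbers are made up, and Facebook’s real tooling is of course far more elaborate):

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's engagement rate really higher?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Design A: 480 engagements from 10,000 visitors.
# Design B: 560 engagements from 10,000 visitors.
z = ab_z_score(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}")  # → z = 2.55; |z| > 1.96 is significant at the 5% level
```

With a result like this, design B wins and ships; the limits Mosseri describes kick in when the change you care about can’t be captured by a single engagement metric.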

But Mosseri also explains how an excessive fidelity to data-driven decisions can privilege incremental and uninspired changes at the expense of innovation and ambitious thinking. Facebook sometimes is aiming not only for high levels of engagement but for more fundamental changes in the way people interact with it and with each other. Facebook’s Timeline, for instance, inspired anger and fierce resistance among many Facebook users and sharp derision from the press, and the use of a conventional data-driven decision process would have killed it before it got very far, but Timeline is now a central and deeply-valued part of the Facebook experience.

Most nonprofits don’t seem to rely much on data for their decision-making about their websites, email newsletters, programs, and fundraising efforts, and when they do those efforts aren’t often carefully crafted and executed (some do, of course, but for every one that does there are many, many more that don’t). The remedy isn’t to swap all the intuitive and qualitative decision-making for analytic feedback loops, but to find a good balance. “Data informed, not data driven,” as Mosseri says.

Measurement and support for community, not you

Want to stir the pot amongst social media campaigners and their managers? Start a conversation with them (preferably when you’re all in the same room) about how they measure social media efforts. “Tell me,” for instance, “how you show the ROI of this work.”

Measuring social media

Perhaps you’ll enter a coherent dialog on social media ROI across the organization (though we doubt it). If so, chances are good that the metrics discussed will be things like the number of fans/followers/likes, number of comments, “people listening,” retweets and shares. You may get more programmatic correlations, such as the amount of money raised through Facebook or people who clicked a Pinterest link, came to the website and subscribed to the email list.

These numbers, however, say little about the value our work is adding to the life of the person at the other end of that like.

Most metrics are about us

The thing is, our social media metrics (heck, even our email and web metrics) are almost entirely about us, the organization. We assess our value and power by the number of fans, followers and subscribers, as well as letters signed and cash in the door. And this informs our resource planning, staffing and program evaluation.

This is not terrible (at least not the cash-in-the-door part). These are informative data points if used in context.

But these are one-way relationship measurements. It’s as though we’re just in the business of selling shirts and all that matters is getting more customers in the door so we can sell more shirts.

But if I’m interested in sticking around as a company, I really want to know what people think of my shirts. How did it fit? Did it shrink in the wash? Did it fall apart? Do you love it? Will you buy another one? Has it done the job?

How do nonprofits measure the value they are bringing to people’s lives? How do we get beyond discussions of tactics for getting more likes, retweets and impressions and move to learning about what is impacting people and creating the power to change communities? 

We must be able to clearly state how what we do relates to people’s lives. We need to understand precisely how our work matters to people before we can measure how we have helped people change their lives.

If you provide a direct service such as meals to the elderly, job training, or a bed for the homeless then you can measure the amount of such service provided. When it comes to social media, look for measures that tie your use of social media as closely as possible to that service. How many people knew about or took advantage of help based on social media? Look at social media metrics but also at client data. How did your organization’s use of social media affect use of services by your clients or audience?

Organizations that provide primarily advocacy services have a trickier time measuring benefit to audience and as a result use primarily indirect metrics. They infer from likes and shares that the audience is valuing (or not) their social media. These organizations, too, should directly and regularly query followers/supporters about the impact of social media on their actions and views.

Culture of community support

But advocacy organizations could also more directly seek guidance from social media about what advocates need, want, could use to help them be more effective. Social media (and email and the web) is an opportunity to have a direct conversation with the people that matter most to your work (and, no, we’re not talking about legislators or even your staff): your members, donors and activists.

Here are some guiding principles for helping your community:

  • Be deliberate about asking them what they need to be better advocates;
  • Provide what they ask for and test it, measure results and share feedback;
  • Be transparent with your community: share your intent and learning openly;
  • Identify and track people as they become more engaged (as well as the actions that they take to get there); and
  • Create a culture that encourages sharing advocacy stories, not just rants or odes of support. Instead of “great job” or “this guy stinks” we should strive to hear “this is what I did and here’s what happened.” (And if/when we get those stories, thank the people that share them.)

Focusing on how the community can become better advocates and supporters of one another will build and spread power, create longer lasting change, and take advantage of the interactive nature of current communications channels.