Running Impactful Community Surveys

A guide on planning and implementing surveys successfully to inform your editorial or product strategy.

Image by vectorjuice on Freepik

INTRODUCTION 

Working in journalism, you make many decisions in your day-to-day routine. What should your newsletter be about? What kind of content would make more readers become subscribers? How can you redesign your site to increase user loyalty? Whenever you don’t have a clear answer, your manager or someone on your team will likely say, “Let’s run a survey to find out!” This is tempting because surveys are cheap compared to other research methods, and they tend to be seen as more objective, straightforward and accurate. They are also one of the few methodologies that allow us to collect quantitative and qualitative data at the same time, which can be a great advantage when you need both representativeness and depth. However, as Erika Hall says in her book Just Enough Research, they are often misused and misunderstood.

Surveys are one of the most challenging research methods because there are so many ways to do them wrong: incorrect generalizations, unconscious biases, unnecessary questions, unclear language, or annoying question flows. In the best case, these flaws waste precious time; in the worst, they end up alienating or upsetting the very people you’re trying to serve. When done correctly, however, surveys can be incredibly valuable in helping you understand your audience and make more effective editorial and product decisions.

In my experience at National Geographic’s UX research team and when leading NPA’s first community survey in 2022, I’ve learned some good practices to avoid falling into the most common traps. 

Here’s an 8-step guide to plan and execute impactful community surveys in a news organization.

IN PRACTICE

1) Map and prioritize knowledge gaps 

We want to know as much as possible about our audience, but this can lead to the trap of asking more questions than people have time to answer, or than we can act on. This is why you should first determine what decisions you aim to inform. Will the survey results inform your homepage redesign? Will they help you decide what topics to include in your next newsletter, or define your new podcast’s length and format?

Identify the missing information, or knowledge gaps, that a survey can fill to help you make better decisions. In the process, involve every relevant stakeholder to ensure that your research is aligned with your organization’s mission, strategy, and priorities.

Be specific. If your questions are too broad, you risk accumulating useless information that will waste time and discourage your organization from doing research again.

2) Make a research plan and timeline

After identifying what knowledge gaps you need to fill, assess whether a survey is the right research approach. Would other methods, such as interviews or focus groups, be more effective in answering your research questions? Evaluate the pros and cons of each and go with what makes sense for the resources you have.

If you decide to do a survey, are there any other research methods you should pursue to help give context to your data? For instance, follow-up interviews with specific respondents might provide more depth and foster a better understanding of what they meant to say and why they said it. 

Research takes significant time, and you’ll want to share a clear timeline of the research process with stakeholders in your organization. For example, a Gantt chart might help ensure your research doesn’t go out of scope and that everyone involved has a clear understanding of what the next step is.

Research Plan Timeline - Gantt Chart

3) Plan your survey’s flow and questions

This step involves coming up with the survey’s text and structure: How are you going to make users feel compelled to complete it? What questions are you going to ask, and in what order, to make sure you get the answers you need? It’s better to start with questions that are easier to answer and end with the most complex or open-ended questions. 

If you ask sensitive demographic or identity-related questions, it is a good practice to explain why you’re asking and how you will treat that information. Make sure multiple-choice questions have enough options for most people to feel included but not too many for them to feel overwhelmed. 

During the planning phase, it is key to empathize with your community. Remember to use language they will understand, and consider not only the individual questions, but the whole journey of the survey.

4) Test it

You think you know how people will react to your survey. Chances are, you’re wrong. It’s important to test for unconscious biases, and how they’ll shape respondents’ interpretation of your questions, before it’s too late to change the survey. That’s why this step is key.

To test your questions, find a group that is either a small sample of your community or shares characteristics with your target respondents, and have them complete the survey. Was there anything they didn’t understand, or anything that could be said in a simpler, more straightforward way? Face-to-face feedback is the most helpful because people are more willing to talk about what they find odd or unrelatable, but asynchronous written comments can work well, too. Based on your test group’s results and feedback, make changes to ensure the answers you’ll get are as relevant and valuable as possible.

5) Launch (and monitor) it

After you launch, it is good practice to closely monitor the results in real time. Are you getting enough responses to guarantee that the data will be representative? If not, is there something you can adjust in your plan without biasing your results too much? For example, you can focus promotion on demographic groups that need more representation, extend the survey deadline, or offer additional incentives for completing it.
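If you export responses to a spreadsheet while the survey is live, even a short script can flag under-represented groups for you. Here is a minimal sketch in Python; the age brackets, responses, and target shares are all invented for illustration, not from any real survey:

```python
from collections import Counter

def underrepresented(responses, target_shares, threshold=0.5):
    """Return groups whose share of responses falls below threshold * target share."""
    counts = Counter(responses)  # missing groups count as 0
    total = len(responses)
    return [group for group, target in target_shares.items()
            if counts[group] / total < threshold * target]

# Hypothetical responses collected so far (one age bracket per respondent)
responses = ["18-29", "30-44", "30-44", "45-59", "18-29", "30-44", "45-59", "18-29"]
# Hypothetical target shares, e.g. from your known audience demographics
targets = {"18-29": 0.25, "30-44": 0.30, "45-59": 0.25, "60+": 0.20}

print(underrepresented(responses, targets))  # → ['60+']
```

Here the 60+ bracket has no responses yet, so it would be the group to focus promotion on.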

When monitoring answers, you might also feel tempted to modify questions — either to make them clearer or to fix mistakes. However, remember that any change after launch could significantly affect your results and your ability to interpret insights. So, if you have to make changes, always ask yourself how they might bias your data and take that into account when analyzing the results.

6) Analyze Results

Now that you have results, ask yourself: What story is the data you collected telling? How do responses fill in the knowledge gaps you identified in step 1, and what new questions do they lead to?

For quantitative results, charts and graphs can quickly reveal trends or anomalies, and many tools, like Airtable, have built-in features to create data visualizations. For qualitative data, you’ll need to digest the responses to surface common themes supported by concrete quotes. Tools like Miro or Jamboard can help here, as you can create virtual post-it notes to sort and group responses. At this stage, working collaboratively with others will help reduce your personal biases when identifying key insights. As a team, follow a common framework for organizing insights alongside the concrete data that supports them.
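Before reaching for a visualization tool, a few lines of Python can turn a column of multiple-choice answers into the percentage shares you’ll chart. A minimal sketch, with a hypothetical question and invented answers:

```python
from collections import Counter

def summarize(answers):
    """Tally one multiple-choice question into (option, percent) pairs, most common first."""
    counts = Counter(answers)
    total = len(answers)
    return [(option, round(count / total * 100, 1))
            for option, count in counts.most_common()]

# Hypothetical answers to "How do you usually find our stories?"
answers = ["Newsletter", "Social media", "Newsletter", "Search",
           "Newsletter", "Social media"]

print(summarize(answers))
# → [('Newsletter', 50.0), ('Social media', 33.3), ('Search', 16.7)]
```

The same tally works for any single-select question, and the sorted output drops straight into a bar chart.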

Whatever the format, make sure you clearly understand what the most salient insights are and how they answer your initial questions, to what extent they are representative of your audience, and what new questions arise to further inform your editorial and product decisions.

7) Communicate & Recommend

Once you understand the survey results, you need to share them with stakeholders in a way they can easily understand and apply to the decisions they’re making. At this stage, don’t focus on the information that’s most relevant or interesting to you, but rather on the insights that matter to others who have not been as involved in the process. Empathize with the stakeholders you’re going to present your results to and evaluate: In what setting and context will they be most able to receive the information, and what format will they find most helpful? Slides, a written report, a readout, an email with bullet points, or a combination of the above?

Whatever format you choose, avoid dwelling on the research process (the How) and focus instead on the Why (how the results can inform your organization’s priorities), the What (concrete findings), and the So What (recommendations and/or next steps). If those are not clear and straightforward, you risk all your work being lost among other pressing priorities.

8) Track Impact

Community surveys require a significant amount of work, so it is essential to keep a record of how helpful they were for your organization. Schedule time a few months after your research is done to find out what decisions your survey was able to inform or influence. Did it help the product team design a new product? Was it helpful for the editorial team in increasing engagement with one of the newsletters? Make sure you get input from stakeholders, and don’t forget to document it (this impact tracker template can help). The better you track and communicate the impact of your research, the more likely it is that you’ll convince your manager to invest additional resources in more research. And the more research you do, the better prepared you’ll be to serve your community.

ABOUT THE AUTHOR

Carla Nudel is an Argentinian product thinker working at the intersection of audience research and media innovation. She moved to New York to pursue Studio 20's Journalism Masters Program as a Fulbright scholar, and she now works at The New Republic Magazine as a Product Manager. In the past, she held other digital strategy positions at organizations like Radio Mitre and National Geographic. She has also trained thousands of Latin American journalists on innovative tools during her Teaching Fellowship at Google News Lab.
