The 6 Pillars of Successful Survey Design

Reading time: 5 minutes

When you need to understand an industry or market, research is essential. You can gather research through secondary sources — trade publications, published reports, websites, etc. — but this is essentially skimming the surface of publicly available information. Secondary research should be considered a supplement to your primary research. It should form a foundation that you can build on, not serve as an end in itself.

Primary research, such as interviewing experts in your field of interest for first-hand insight, can give you the kind of granular intelligence that adds depth and nuance to your understanding. But, depending on what you want to know, a market won’t always reveal itself fully during an expert interview. What does the customer want? How do current market entrants judge the broader impact of a certain trend? What do most industry players think is down the road? For these kinds of questions, a survey is in order.

Conducting a survey as part of your data collection can give you richer insights. Surveys are a great way to gather a high volume of data when a larger sample size is critical, and they’re also a useful tool for building a broader understanding of an industry or for digging into a topic in more depth.

These six pillars of successful survey design will maximize the insights you glean from your data:

  1. It’s critical to have clear research objectives. The objectives are the drivers of your design. Are you looking to understand how customers feel about a particular piece of software, or do you want to understand key purchasing criteria? Are you helping your client better understand a potential market to move into? Write down what you hope to accomplish with the survey and let those goals be your guiding light.
  2. It’s imperative to have a strong screening section at the beginning of your survey. The screener is a series of questions, usually about 5 to 10, that makes sure the right respondents get into your survey. Think of the screener like a funnel, with the broadest questions around industry and geography at the top and more detailed questions around job responsibilities or familiarity with the survey topic near the end. Be mindful of people’s time and terminate respondents immediately if they’re not eligible for the survey. Another key function of the screener is that it hosts your quotas. For example, if companies with 5,000 full-time employees (FTE) are interesting for your client but you want only a small percentage of them, include a cap on that group’s respondents in your survey (see the sketch after this list).
  3. When you write your questions, eliminating biased and leading questions will help ensure that respondents answer truthfully.
  4. Question flow is important to the respondent experience. Like the screener, questions should be structured like a funnel, with broader topics first, focusing on things such as overall budget and general key purchasing criteria. Those should be followed by more granular questions, such as specific spend and key purchasing criteria by vendor. Finishing one topic before moving on to another helps keep the respondent engaged and minimizes confusion.
  5. Pathing can be a sensible option if you’re looking at multiple groups in your survey. For example, if you’re running due diligence on a healthcare product and are interested in learning more about both users and nonusers, create separate paths for them with two distinct question sets. The screener can help you identify who belongs to which group. After the screener, you can build out programming logic that routes participants to the appropriate questions, as sketched after this list. This helps keep participants engaged and streamlines your analysis, since all the questions for each group are kept together.
  6. All questions need to be easily understood. If respondents are confused about what is being asked, it could affect the data. Make sure all questions are simple and that you’re using minimal language. Don’t ask double-barreled questions, such as, “Which of the following brands are you aware of and which would you recommend?” Awareness and recommendation are two different topics, and they should not be combined in the same question. Just because a respondent is aware of a brand doesn’t necessarily mean they would recommend it.
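
To make the screener, quota, and pathing ideas concrete, here is a minimal sketch in Python. The field names, eligibility criteria, quota cap, and path labels are all hypothetical; in practice, survey platforms implement this kind of logic through their own built-in screening, quota, and branching tools.

```python
# Minimal sketch of screener, quota, and pathing logic.
# All field names, criteria, and caps here are hypothetical.

TERMINATE = "terminate"      # screened out: not eligible for the survey
QUOTA_FULL = "quota_full"    # eligible, but this group's quota is already met

LARGE_COMPANY_CAP = 50       # hypothetical cap for the 5,000+ FTE group
large_company_count = 0

def screen(respondent: dict) -> str:
    """Return a question path for eligible respondents, or a termination code."""
    global large_company_count

    # Broadest criteria first: industry and geography.
    if respondent["industry"] != "healthcare":
        return TERMINATE
    if respondent["country"] not in {"US", "CA"}:
        return TERMINATE

    # More detailed criteria: familiarity with the survey topic.
    if not respondent["familiar_with_product"]:
        return TERMINATE

    # Quota: allow only a limited number of large-company respondents.
    if respondent["company_fte"] >= 5000:
        if large_company_count >= LARGE_COMPANY_CAP:
            return QUOTA_FULL
        large_company_count += 1

    # Pathing: users and nonusers get distinct question sets.
    return "user_path" if respondent["uses_product"] else "nonuser_path"

# Example: a qualified nonuser at a large healthcare company.
print(screen({
    "industry": "healthcare",
    "country": "US",
    "familiar_with_product": True,
    "company_fte": 7500,
    "uses_product": False,
}))  # -> nonuser_path
```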

Key Survey Considerations

Along with these pillars, there are some key considerations to keep in mind as you create your survey draft:

  • How long will a respondent take to complete a survey? The shorter, the better — less than 20 minutes is ideal. If a survey is too long, respondents may drop out or experience fatigue, which could result in lower-quality data.
  • Open-ended questions can be a valuable part of your survey and are a great way to pull in additional qualitative insights. Keep the number of these questions to about two or three per person. Stick to quality over quantity. Too many open-ended questions can fatigue respondents and lead to a high dropout rate.
  • Surveys are often taken on mobile devices, so be mindful of question formatting: limit matrices, grids, and other layouts that are hard to use on small screens.

Two Survey Metrics

Finally, there are two metrics to consider. The first is incidence rate, the rate at which respondents pass the screening section and qualify for the survey. This rate can have a direct effect on the survey’s feasibility. One way to increase the incidence rate is to broaden your screening criteria, giving more respondents the opportunity to qualify.
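
As a rough, hypothetical illustration, the incidence rate is simply the share of respondents entering the screener who qualify for the main survey:

```python
# Hypothetical numbers: incidence rate = qualified respondents / respondents screened.
screened = 1_000   # respondents who entered the screener
qualified = 150    # respondents who passed the screener and entered the main survey

incidence_rate = qualified / screened
print(f"Incidence rate: {incidence_rate:.0%}")  # -> Incidence rate: 15%
```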

The other metric is dropout rate, which looks at participants who have “dropped” out of, or exited, the survey without completing it. If the rate is high, it could be a result of something such as a programming issue, poor survey design, too many open-ended questions, or excessive length. These respondents very likely won’t reengage with the survey.
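
Again with hypothetical numbers, the dropout rate is the share of respondents who started the survey but exited before completing it:

```python
# Hypothetical numbers: dropout rate = early exits / respondents who started the survey.
started = 400      # qualified respondents who began the main survey
completed = 320    # respondents who finished it

dropout_rate = (started - completed) / started
print(f"Dropout rate: {dropout_rate:.0%}")  # -> Dropout rate: 20%
```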

Being mindful of all these things will help you design a survey that will likely uncover amazing insights.

Beth Simon is GLG’s Director of Surveys. She has more than 12 years of quantitative research experience in both the B2B and B2C spaces, spanning a variety of methodologies, including conjoint, concept testing, brand health testing, and customer satisfaction.