Survey Design Best Practices

Carlos Del Corral, one of the founders of Lumoa, has long experience creating and designing surveys. He has worked in market research and in product and service design at companies like Microsoft and Nokia. In this video, he shares best practices and tips on how to create surveys that will not only increase your response rates but also give you more insightful responses.

A transcript of the video is found below.

Hello everybody! 

Thanks for joining our webinar today! 

I’m Carlos Del Corral, one of the founders of Lumoa. 

I will be talking today about survey design. It’s a very common topic and I thought it would be useful to share some of our experience and best practices around this topic. 

I can see that there are still people joining, but of course later on, if somebody misses this, they will get a recording of the webinar. 

Hopefully, everybody can see my screen and see and hear me properly. I’m going to start then with the webinar. 

Survey Design: Agenda

So today we are going to talk about the principles of survey design, the types of questions you can use, some do’s and don’ts and best practices around creating those questions, and last but not least, how to generate insights from the results.

Disclaimer

Let’s get started and first of all, I wanted to add this disclaimer.

I have myself led a team of market researchers, and I know that market research and survey design is a very wide, extensive, and complex field, so we are not going to cover all the topics today. You will not hear anything about question types such as conjoint design or MaxDiff, for example.

There are a lot of things we will not talk about today. 

We will mainly focus on basic surveying principles and recommendations that are especially valid for voice-of-customer programs, where there’s a kind of continuous measurement of CX: touchpoint measurement and relationship measurement, where every day you want to keep a finger on the pulse to understand the situation with your clients.

This is not long-term research where you try to estimate trends, nor is it meant for a very complex portfolio.

Having said that, we’ll go through the principles of surveys. These principles apply to any survey, and they are very simple. 

[2:15] Principles of survey design

  • Start with the end goal in mind
  • Simplify
  • Avoid bias and priming
  • Optimize for automated insight generation

Start with the end goal in mind. This, I think, is true for any project that you want to start within a company or within your personal life.

The second one is around simplifying. We at Lumoa are very big on simplifying; we always aim to make things easier and simpler. 

Avoiding bias and priming.

Also, optimizing for automated insight generation. This topic is not that common, because a lot of our clients designing surveys are not used to having tools like Lumoa that can automate most of the analytics process. So I wanted to include a short slide on this one.

Start with the end goal in mind

The goal that you set for your survey will define every aspect of the survey, including its complexity:

  • How you need to do it  
  • Whom you need to interview 
  • What kind of results you will get

There are a lot of different areas. It can be about the overall relationship with your brand, or about the experience at a specific touchpoint (e.g. after a customer service call, you want to understand how the interaction went).

It could be about analyzing marketing campaigns, testing campaigns that you are planning, or finding the optimal price for a service, the price that maximizes your overall profit.

Or filling out the portfolio, the product portfolio mix.

It can be about anything, so it’s very important that before you get started, you understand what you want to achieve. The goal defines every step.

Simplify to maximize response rate and reduce work

Simplifying your surveys will maximize your response rates and reduce both your effort and the effort of the customer or respondent.

Simplifying is always around having fewer surveys, fewer questions per survey, easier questions, scales that are easy to understand. It is about making everything as easy as possible.

I know it’s very easy to fall into question creep; it’s very common in surveys. Especially in big organizations, when you start asking stakeholders, everybody will have an interesting question that they would like to add to the survey. 

You should always think very critically about it: is this question really needed or not? Always aim for shorter surveys rather than longer ones. 

Structure your survey to avoid bias and priming

The next one is around how you should structure your survey to avoid both bias and priming.

There are three different steps:

  1. Start with a screener
  2. Continue with generic questions and then concrete questions 
  3. End with classification questions 

1. Start with Screener

What a screener means, in case you are not familiar with the word, is that if you are aiming to research a very specific segment or group of people, you add a set of questions to the beginning of the survey that allows you to determine whether a person belongs to that group or not.

It can be age-based, it can be gender-based, it can be how affluent a household is etc. In that way, you can ensure that the responses that you get, are relevant to your survey.

When you’re surveying your own customers, a screener is usually not needed. If you are doing a touchpoint survey, for example on your website, and you want to make sure the sample is representative, there are usually no screening questions. Screening questions are more often used in traditional market research.
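As a sketch, a screener can be thought of as a qualification check run before the main questions. The field names and thresholds below are purely illustrative, not from the webinar:

```python
# Hypothetical screener: qualify respondents before the main survey.
# Field names and criteria are illustrative examples only.

def passes_screener(respondent: dict) -> bool:
    """Return True if the respondent belongs to the target segment."""
    age_ok = 25 <= respondent.get("age", 0) <= 45                # age-based criterion
    income_ok = respondent.get("household_income", 0) >= 40_000  # affluence criterion
    return age_ok and income_ok

respondents = [
    {"age": 30, "household_income": 55_000},  # qualifies
    {"age": 60, "household_income": 80_000},  # screened out by age
]
qualified = [r for r in respondents if passes_screener(r)]
print(len(qualified))  # 1
```

Respondents who fail the check are simply thanked and excluded; only qualified respondents see the rest of the survey.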

2. Ask generic questions first and then concrete questions

Ask generic questions first (if you don’t need a screener). A generic question asks about the overall concept; from there you move to more concrete questions.

If we take Lumoa as an example, say we wanted to know about our customers’ overall relationship with Lumoa, and also about their relationship with our online dashboard and with customer service.

We would always first ask about the relationship with Lumoa, and then about the different concrete topics.

The reason is what happens if you ask about the concrete topics first. Let’s say I first asked about customer service, and after that about the overall relationship they have with the brand. I’ve already planted the seed, the thought of customer service, in the respondent’s mind, because they have been asked about it and have been thinking about it already.

This means that later, they are much more likely to answer the generic questions with responses about customer service, because you have already primed them. They will not think “overall”; they are primed, or biased, to think about a specific topic.

3. End with the classification questions

These are questions around demographics. For example, age, gender, household income etc. 

The main reason for having these questions at the end of the survey is twofold:

  1. If you start the survey by asking classification questions, people may start thinking, “why do they want to know all these things about me?”. Therefore, it’s better to start with the real questions about the things you want to know. 
  2. The second reason is that classification questions are usually very clear to the respondent. If you ask a person’s age, it’s extremely easy to answer. Respondents don’t need to make an effort, the questions are fast to answer, and in the end the drop-out rate will be smaller. 

Optimizing survey design for insight generation

Next, I want to talk about optimizing for automated insight generation. This is again to reduce both your and your respondent’s effort. I will talk a bit more about this when I talk about the do’s and don’ts of the surveys. 

The main point here is that in most cases, you can actually get by with a simple KPI and a “why” question. 

By asking “how satisfied are you with our product?” and “why?”, you don’t need to ask multiple questions about different features of the product. Respondents will tell you in the “why” question. 

This way of asking questions (“why”) hasn’t been widely used, because there haven’t been tools to analyze those open-ended responses properly. But now that tools like Lumoa exist, you don’t need to ask about many concrete things.

As an example, one of our clients was asking an overall NPS question, and after that asking respondents to rate their satisfaction with a lot of different features. 

After the first analysis (in Lumoa), they realized that they were getting even richer responses from the “why” question attached to the NPS question than from the specific scales they had.

That’s why we decided to drop most of the questions, and the survey length was reduced by around 80%. 

This is the power and the beauty of text analytics: when you simplify, you get higher response rates, you get richer data, and it’s also easier for you to analyze the responses. 

[10:19] Different types of questions

Now I will walk you through the different types of questions. There are mainly three types:

  • Behavioral 
  • Attitudinal
  • Classification

Behavioral questions

A behavioral question is used to understand what respondents do, for example:

  • “How often do you visit the doctor?” 
  • “How much spread do you buy in a typical week?” etc.

These questions are used to understand the behavior of your respondents. Ideally, you should already have this kind of data from other sources. 

If it’s about digital behavior, ideally you would already know your customers’ behavior: for example, how often they access a specific service (Google Analytics), or how much they purchase (CRM). This means you wouldn’t really need to ask these kinds of questions in your survey. 

Attitudinal questions

These are questions aimed at understanding how respondents feel and uncovering their emotions. Some examples: 

  • “How would you rate our product?” 
  • “How likely are you to recommend…?” (e.g. the typical NPS question) 
  • “Why did you give us that score?” (an open-ended question)

Open-ended questions are especially important because they let all the emotions, sentiments, and feelings come through. Getting a score of 9 on an NPS rating is not the same as hearing that customers love your product and why they love it. The data is much richer, and it’s much easier to get insights from it.

Classification questions

With classification questions, you get to understand who your respondents are. This is when you ask about, for example:

  • Age
  • Gender
  • Household income

Again, ideally you would already have this information in your CRM. For example, you wouldn’t need to ask which company your respondent works at, or things like company income. Ideally, you would have all of this in your CRM. 

If you’re interviewing people outside of your reach, for example, if you have a brand tracker or a panel, then these types of questions may be relevant. 

Metric selection: Optimize for impact within the organization

Then I wanted to talk about two topics that are very controversial: selecting metrics and selecting scales.

Metric selection

Regarding metric selection, and actually both of these topics, we are very pragmatic at Lumoa. Our point of view is that you should optimize for driving impact within your organization.

There are many different metrics you can choose from, e.g. CSAT, NPS, and Customer Effort Score. There are many different calculations, and there’s a lot of theory around how to optimize for different touchpoints to get better results for your KPIs. 

However, instead of optimizing for a marginal improvement on a touchpoint basis, we recommend that you simplify. Be as simple as possible, use as few different metrics as possible, and be consistent with those metrics and scales. 

Driving change within an organization and driving customer-centricity is already difficult, and the bigger the organization, the harder it becomes. Every metric you add is another layer of complexity. If you only have NPS, you only need to explain one metric to your stakeholders and the organization.

But as you start adding more metrics (“okay, now we also have Customer Effort Score”, or customer satisfaction), each one has a different scale in use and slightly different calculations, so people start losing touch with these metrics. They lose understanding of what the metrics mean, and it becomes harder to act on them.

Our recommendation is to:

  • Simplify
  • Be consistent
  • Reduce the number of metrics in use

If you don’t like NPS, there are other metrics (NPS has some pitfalls, of course, but it is widely known and used). You don’t have to choose NPS specifically; just choose a metric, choose a scale, and be consistent. 
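Since the calculations behind these metrics differ, it helps to see one concretely. Below is a minimal sketch of the standard NPS calculation (the percentage of promoters, scores 9-10, minus the percentage of detractors, scores 0-6):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 likelihood-to-recommend scale."""
    if not scores:
        raise ValueError("nps() needs at least one score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 3 promoters, 1 passive (8), 2 detractors out of 6 responses
print(round(nps([10, 10, 9, 8, 6, 2]), 1))  # 16.7
```

Note how passives (7-8) count toward the total but neither add nor subtract; that is one of the details stakeholders lose track of when several differently-calculated metrics are in use at once.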

Scale selection

Scale selection generates a lot of debate within the market research field, and again, we are very pragmatic here. Choose wisely, and be consistent: measure with the same scale, if possible, across your touchpoints and the different areas of your business.

Keep the scale consistent so you can compare and measure improvement; then just focus on improving. 

The scale will largely depend on the KPI you choose. Some KPIs come with standard scales, like the 0-10 scale of NPS. 

We strongly recommend avoiding creativity when selecting a scale. Some companies I’ve seen in the past chose, for example, NPS with a scale from 1 to 5. The question is: why choose NPS if you’re going to change the scale? 

Because then you cannot look at the benchmarks anymore. If you choose a standard metric or KPI, I recommend you stick to its definition. Otherwise, 5- and 7-point scales (1 to 5 or 1 to 7) are the most common scales to use, because they are symmetric and have a neutral midpoint.

In some cases, you may want to force a choice, for example when you want customers to tell you whether they are happy or unhappy. In these cases, a 4-point scale is a good option: very dissatisfied, dissatisfied, satisfied, and very satisfied. 

But again, the key point is to be consistent and keep measuring change and improvement.

[17:05] Creating Questions: DO’s and Don’ts

Let’s go through some do’s and don’ts when creating questions.

Avoiding bias: Symmetrical scales

For example, one important thing is avoiding bias. Make sure that you have symmetrical scales. This means that you have the same number of positive response options as negative response options.

On the left-hand side, you can see a 5-point scale: there’s a middle point, two negative options (dissatisfied and extremely dissatisfied), and two positive options.

On the right-hand side, we have an example of what not to do. It’s actually more common than you may think, and many people don’t even realize it: here there are many more positive options than negative options. 

That means you are going to get better scores than you actually should, just because the likelihood of a negative response is smaller (there are fewer negative options to choose from). 
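One way to see this built-in skew is to assign each option a signed value and look at the scale’s midpoint. The option labels and values below are just an illustration:

```python
# Illustrative only: map each response option to a signed value.
symmetric = {-2: "extremely dissatisfied", -1: "dissatisfied",
             0: "neutral", 1: "satisfied", 2: "extremely satisfied"}
asymmetric = {-1: "dissatisfied", 0: "neutral",
              1: "somewhat satisfied", 2: "satisfied", 3: "very satisfied"}

def builtin_skew(options):
    """Mean option value: the average score a respondent picking
    uniformly at random would produce."""
    values = list(options)
    return sum(values) / len(values)

print(builtin_skew(symmetric))   # 0.0 -> no skew from the scale itself
print(builtin_skew(asymmetric))  # 1.0 -> positive skew before anyone answers
```

With the asymmetric scale, the average drifts positive even before any real opinion is expressed, which is exactly the inflation described above.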

Avoid bias: leading respondents

“How satisfied are you with…?” 

VS:

“At xxx we are continually striving to achieve a 5-star satisfaction rating and we place a lot of effort on your experience. How satisfied are you with…?”

I’ve seen companies in the past try to be very positive in the way they phrase the question, saying things like in the example here, “… we are continually striving to achieve a 5-star satisfaction rating and we place a lot of effort on your experience”, and only then asking “how satisfied are you with…?”

Consciously or unconsciously, this biases respondents toward a positive rating. The tone of a question should be kept as neutral as possible, to avoid driving respondents to give you a negative or positive score or answer. 

Ask one question at a time

You should always ask one question at a time. 

“How would you rate our products?”

VS.

“How would you rate our products and customer support?”

When you start mixing multiple questions into a single one, it becomes difficult for the respondent to answer accurately. They have to start thinking, “that part is about the product, and that part is about customer support”, which will most likely lead to less rich or incomplete answers.

Simple, specific, and short questions

Always keep your questions as simple, specific, and short as possible.

“How do you feel about our mobile application?” 

VS.

“You have been using a mobile application for a while now. Compared to the applications of other providers how do you feel about our app?”

In the second example, the question gets very long, and if you have several questions like this, the respondent will get tired of reading them. The simpler, more specific, and more to the point you are, the better for the respondent.

Avoid questions that require respondents to perform calculations

This is typical for longer surveys, where you want to understand behavior.

“How many liters of juice do you buy in a typical week?”

This is easy for a respondent to calculate: “okay, I drink a glass per day, five days per week, so it’s around a liter, maybe.”

In some cases, some would ask: 

“How much juice do you buy during a year?”

That’s much more complex for the respondent to calculate than the previous example. Once you know how many liters of juice they buy per week, it’s very easy for you to multiply by 52 and get a better approximation of the yearly figure. 
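The arithmetic above, done on the analyst’s side instead of the respondent’s, is trivial:

```python
# Ask the easy weekly question; derive the yearly figure yourself.
liters_per_week = 1.0  # "a glass per day, five days per week" -> about a liter
WEEKS_PER_YEAR = 52

liters_per_year = liters_per_week * WEEKS_PER_YEAR
print(liters_per_year)  # 52.0
```

The point is simply to shift any calculation burden from the respondent to the analysis step.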

Avoiding jargon and acronyms

“What is your opinion about the terms of service of our product?”

VS.

“What is your opinion about the TOS of our product?”

Acronyms and industry-specific jargon (like TOS) only confuse respondents. They may not answer at all, or they may even give you the wrong answer if they don’t understand what the term means.

Optimize open-ends for richness

“Why did you give that answer?”

VS. 

“What should we improve?” / “Why are you satisfied?” 

Sometimes when respondents give a low score, many tend to ask, “what should we improve?”, or if they give a high score, “why are you so satisfied?”. 

The problem is that if you ask “what should we improve?”, respondents tend to provide short answers without context and without any sentiment, feeling, or emotion attached. 

If you ask “what should we improve?”, somebody could answer “customer service”. From an answer that just says “customer service”, you cannot extract any sentiment.

We always recommend asking “why did you give that answer?”; then the responses tend to be more along the lines of “customer service is slow” or “customer service did not help me”, which gives you more richness around the problem. It’s also more likely that respondents will provide full sentences, which are easy to analyze automatically.
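As a toy illustration (not Lumoa’s actual analytics, and far simpler than real text analytics), a keyword-based check shows why the fuller answer is more useful: the bare topic carries no sentiment cue, while the full sentence does.

```python
# Toy sentiment tagger, illustrative only.
NEGATIVE_CUES = {"slow", "broken", "bad", "not"}
POSITIVE_CUES = {"great", "love", "fast", "helpful"}

def toy_sentiment(answer: str) -> str:
    words = set(answer.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "unknown"  # a bare topic gives no sentiment to attach

print(toy_sentiment("customer service"))          # unknown
print(toy_sentiment("customer service is slow"))  # negative
```

The topic (“customer service”) is recoverable from both answers, but only the full sentence also yields the sentiment that makes the feedback actionable.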

Another thing we have seen is companies making customers jump through hoops, putting in a lot of effort before they can give feedback, because the company is not prepared to analyze that feedback properly.

I’ve seen cases where a company makes the respondent first select whether their feedback is praise, a complaint, or a suggestion. 

If it’s a complaint, you need to click complaint; let’s say it’s a technical problem, so you click technical problem; then they ask whether it’s the mobile app or the website, you click mobile app, and then they ask you to take a screenshot, and so on.

That creates a lot of effort for the respondent, and many times they just leave the process unfinished.

Once you have a tool like Lumoa, you can just ask “why”, and if they have a problem, say the mobile app is not working or they cannot log in, they will tell you. 

And because you are analyzing everything automatically, you will get that information in the topics with negative sentiment, and it will rise in your impact charts.

[24:25] Generating insights from results

The easiest way to generate insights from customer feedback is to use Lumoa. 

With Lumoa you can have all of your voice-of-customer data in one platform: touchpoint surveys, phone call transcripts, chat transcripts, online reviews, anything you can think of. 

You can have all the customer touchpoints in Lumoa and follow them automatically. It eliminates manual work when creating insights, makes sharing easy, and also allows you to close the loop: you can follow up on tasks and see what customer feedback you’re acting on, which tasks are closed, who is responsible for what, etc. Lumoa also offers real-time analytics in over 60 languages.