Take 10 companies that deliver a great experience, and chances are you’ll find 10 very different customer satisfaction surveys that each of those companies relies on to improve its experience.

But regardless of what business objectives drive a survey, there is one thing that everyone can agree on: shorter surveys are usually better. Burden customers with a lengthy survey, and they’ll have a poor experience completing it. This can lead to a drop in response rates. Too many questions can also make customers impatient — driving them to answer thoughtlessly in the hope of finishing sooner. Additionally, length can lead to redundancy in questions, which in turn skews the analysis of survey results and impairs your organization’s ability to make decisions using customer feedback.

So how can you streamline your existing survey?

The following process can help you whittle your survey down to its most impactful and unique questions. We like to think of it as part science and part art – relying on statistics to measure impact and uniqueness, but ultimately using your business acumen to choose which questions are the most valuable. To complete it, you’ll need two things: any basic statistics tool and recent survey data. The more of the latter you can leverage, the more accurate your results will be.

Step #1: Identify questions that might be capturing redundant information

An important first step in condensing a survey is identifying questions that might be redundant. Redundancy usually occurs in questions about “drivers” — customer experience factors that “drive” satisfaction.

A way to identify redundant driver questions is to measure their correlation, or the degree to which their answers follow the same trend. Generally, if two questions have a correlation of .85 or higher, there’s a chance their redundancy is problematic. But determining this for sure requires a measure of intuition. Look at how the two questions are worded — does it seem likely that customers would interpret them in the same way? If so, you might eventually decide to remove one of them. If not, their correlation might be the result of something else. Either way, the upcoming steps will help you decide what to do with these suspect questions.
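If your statistics tool of choice happens to be Python, a minimal sketch of this correlation check might look like the following. The file and column names here are hypothetical placeholders; swap in your own survey data.

```python
import pandas as pd

# Hypothetical data: one row per survey response, one column per driver
# question, each scored on the survey's rating scale (e.g., 1-5).
df = pd.read_csv("survey_responses.csv")  # placeholder file name
drivers = ["ease_of_checkout", "staff_friendliness", "staff_helpfulness"]

# Pairwise correlations between driver questions
corr = df[drivers].corr()

# Flag pairs at or above the 0.85 threshold discussed above
high_pairs = [
    (a, b, corr.loc[a, b])
    for i, a in enumerate(drivers)
    for b in drivers[i + 1:]
    if abs(corr.loc[a, b]) >= 0.85
]
print(high_pairs)
```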

There’s another way to check your survey for redundancy: measure the “variance inflation factor” (VIF) of each of your drivers. Essentially, this is a way to tell how much your survey is suffering from redundancies overall. If any driver’s VIF is higher than 5, chances are the survey is still measuring a single general concept through more than one driver question. This is a good test to run after every step, to see how close you’re getting to a truly efficient survey.
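Here’s one way you might compute those VIFs, again assuming Python and continuing with the hypothetical df and drivers from the sketch above (statsmodels provides a ready-made variance_inflation_factor helper):

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Add a constant term so each VIF is measured against a proper intercept
X = sm.add_constant(df[drivers])

# One VIF per driver (skip the constant itself)
vifs = {
    col: variance_inflation_factor(X.values, i)
    for i, col in enumerate(X.columns)
    if col != "const"
}
print(vifs)  # values above ~5 suggest redundancy among your drivers
```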

One other analytical tool you could use is a factor analysis. This kind of analysis will give you insight into how many “underlying factors” are present in your data and how well the given questions capture those factors.
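As a rough sketch of that idea, scikit-learn’s FactorAnalysis can fit the factors once you’ve chosen how many to look for. One common heuristic, used below, is counting the eigenvalues of the correlation matrix that exceed 1; treat it as a starting point rather than a definitive answer:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Heuristic: eigenvalues of the correlation matrix above 1 hint at how
# many underlying factors the driver questions may share.
eigvals = np.linalg.eigvalsh(df[drivers].corr().values)
n_factors = max(int((eigvals > 1).sum()), 1)

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(df[drivers])
print(fa.components_)  # loadings: how strongly each question maps to each factor
```

Questions that load heavily on the same factor are prime candidates for consolidation.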

Step #2: Eliminate questions with untrustworthy results

After Step 1, you should have a list of driver questions and a sense of which ones might be redundant. Next, you’ll check whether any of those questions have unreliable results.

Specifically, you’ll ask of each question, “Does it predict something about overall customer satisfaction or likelihood to recommend?” Using a multiple regression analysis, measure two things: the question’s level of impact (captured by its beta coefficient) and how much you can trust that estimate of impact (captured by the p-value of that beta coefficient).
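In Python, fitting that regression takes just a few lines with statsmodels. The sketch below assumes a hypothetical overall_satisfaction column as the outcome, alongside the df and drivers from the earlier sketches:

```python
import statsmodels.api as sm

y = df["overall_satisfaction"]    # hypothetical outcome column
X = sm.add_constant(df[drivers])  # drivers plus an intercept

model = sm.OLS(y, X, missing="drop").fit()
print(model.params)   # beta coefficients: each driver's estimated impact
print(model.pvalues)  # p-values: how much to trust each estimate
```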

Focus on the p-values first. In most cases, scientists aim for 95% confidence in their results, meaning that in a regression model, a given variable would need a p-value of 0.05 or lower. When a variable clears that bar, you can be reasonably confident it has a real impact on your outcome variable (in this case, customer satisfaction or loyalty).

So what does this mean for your survey? Consider a hypothetical survey whose driver “helpfulness of store attendant” is found to have a great deal of impact on your Net Promoter Score (i.e., a high beta coefficient). If your analysis determines that the driver’s p-value is 0.83, the result is nowhere near statistically significant, and you can’t rule out that the apparent impact is just noise. And if “helpfulness” is one of the drivers you suspect to be redundant, its untrustworthiness means you’re probably better off eliminating it and resolving the redundancy debate straight away.

Step #3: Eliminate questions with negligible impact

Next, consider the size of the impact different driver questions have on overall customer satisfaction or loyalty. In your regression analysis, this is measured by the beta coefficients. Using this information, you’ll look at potentially redundant pairs of questions and remove the question with the lower impact on the overall customer experience.

For example, imagine a hotel’s “clean bathroom” driver has a beta coefficient of 0.1. Holding the other drivers constant, every 1-point increase in a customer’s “clean bathroom” score predicts a 0.1-point increase in the average overall satisfaction score. However, if the driver is highly correlated with “overall suite cleanliness”, which has a higher beta coefficient of 0.3, the question about “clean bathrooms” can likely be eliminated in the name of reducing redundancy.
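Continuing the earlier sketches, pairing the correlation results from Step 1 with the regression’s beta coefficients makes these comparisons mechanical. This snippet flags, for each highly correlated pair, the driver with the smaller estimated impact:

```python
# For each highly correlated pair found in Step 1, flag the driver with
# the smaller beta coefficient as the candidate to cut.
for a, b, r in high_pairs:
    weaker = a if abs(model.params[a]) < abs(model.params[b]) else b
    print(f"{a} vs {b} (corr={r:.2f}): consider dropping '{weaker}'")
```

Treat the output as a shortlist to reason about, not an automatic verdict.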

Step #4: Factor in coachability and actionability

One additional consideration when making the tough decision of which questions to cut is coachability: how readily your teams can act on the feedback a question produces. Consider two correlated drivers in a hypothetical call center survey: “Agent went out of their way to provide excellent service” and “Agent was a helpful service representative.” The former might have a beta coefficient of 0.25, higher than the latter’s 0.2. But if the company feels it’s easier to coach agents on being “helpful” than “going out of their way,” it might ultimately side with the driver with the lower impact.

Step #5: Repeat previous steps – and check your work

If your survey still has more driver questions than you’d like, repeat steps two and three. Re-running these analyses could identify other drivers whose results you can’t trust, or which have a negligible impact on the overall customer experience. One approach is to continue repeating these steps until all remaining drivers have a low p-value.
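If you want to automate that repetition, one possible approach is simple backward elimination: refit the regression, drop the least trustworthy driver, and repeat until every remaining driver clears the 0.05 threshold. A sketch, reusing y, df, and drivers from the earlier examples:

```python
# Backward elimination: repeatedly drop the driver with the worst p-value.
remaining = list(drivers)
while remaining:
    fit = sm.OLS(y, sm.add_constant(df[remaining]), missing="drop").fit()
    pvals = fit.pvalues.drop("const")
    if pvals.max() <= 0.05:
        break
    remaining.remove(pvals.idxmax())
print(remaining)  # drivers that survived the pruning
```

Remember that this loop is all science and no art; sanity-check its suggestions against your business knowledge before cutting anything.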

Once you’ve completed this process, a final check can ensure that the remaining driver questions still capture all the important aspects of the customer experience. Using pre-existing data from your remaining questions, ask: if you’d asked only these questions, would you still capture most of the factors that drive customer satisfaction? Statistically, this fit is captured by R^2, which in a regression model tells you how much of the variation in the response variable is explained by the model. For example, an R^2 of 0.85 indicates that the model explains 85% of the variability in the overall metric.
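In the running sketch, that check is a two-model comparison: fit the regression once with the full driver list and once with the trimmed list, then compare R^2 values:

```python
# Compare explanatory power before and after trimming the driver list.
full_fit = sm.OLS(y, sm.add_constant(df[drivers]), missing="drop").fit()
reduced_fit = sm.OLS(y, sm.add_constant(df[remaining]), missing="drop").fit()

print(f"Full survey R^2:    {full_fit.rsquared:.3f}")
print(f"Trimmed survey R^2: {reduced_fit.rsquared:.3f}")
```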

Ideally, shortening your driver list should not drop R^2 too much, as the goal of this exercise is to retain only the most important drivers (those with impact and explanatory power). If the R^2 value has dropped significantly, you can resolve this by adding back a question or two.

By following these five steps, you can determine which driver questions to prioritize as the most important and comprehensive influences on your overall customer experience metric. The result is a much better customer experience when completing the survey, as well as higher response rates and higher accuracy in customer responses.

And if you’re curious to learn more about survey design, check out the resources and videos here: www.medallia.com/survey-design

Photo Credit: Chris Isherwood