15/03/2024

Surveys are one of the most frequently utilized research methods by UX designers and researchers. According to a 2019 Nielsen Norman Group study, 99% of UX researchers who responded said they run surveys “at least sometimes,” indicating that the practice is nearly ubiquitous. Unfortunately, this popular method is often applied incorrectly, resulting in unreliable and useless data.

Criticism of Surveys as a Method

Despite their popularity, surveys have a shaky reputation amongst UX thought leaders, and have earned both words of caution and scorn over the years.

In her book Just Enough Research, design consultant Erika Hall wrote:

“Surveys are the most dangerous research tool—misunderstood and misused. They frequently blend qualitative and quantitative questions; at their worst, surveys combine the potential pitfalls of both.”

UX thought leader Tomer Sharon once tweeted:

“I usually recommend product teams (and most others honestly) to stay as far away from surveys as possible. I truly believe surveys are the hardest research method to do well (yet the easiest to launch in the next 10 minutes).”

Benefits of Surveys in UX

Despite these dire warnings, surveys offer several benefits that make them a valuable asset in the UX toolkit.

  • Are cheap: Surveys can be relatively cost-effective for collecting quantitative data that can be used to make user-focused and research-backed design decisions.
  • Bring insights from real users: Intercept surveys (in which a popup survey is presented to users at specific places within a website) allow researchers to glean insights from users during their visits to your site.
  • Play well with other methods: Surveys pair well with other research methods, adding quantitative breadth to a mixed-methods study and helping triangulate findings from interviews, usability tests, and analytics.

Survey Myths, Busted

Even with all these potential benefits, surveys often fail to deliver on their promises.

In many cases, a survey is doomed from the very start because it is the wrong method to employ for the given research goal. Deciding which method to use in order to answer a given research question is always an important (and often overlooked) part of the research planning process.

Surveys are disproportionately prone to misuse and are often selected for the wrong reasons. In many instances, the ill-fated decision to run a survey stems from belief in one of the following easily dispelled survey myths.

Myth 1: Surveys Are Easy to Run

People often focus on how easily you can get a survey out the door, rather than on how much skill and energy you should put into doing one correctly. The proliferation of free or cheap and user-friendly survey tools offers the incredibly seductive possibility of tossing a handful of poorly conceived questions into a survey and blasting it off to your customer list before you have time to even question what you’re doing. It is too easy to forget that survey methodology is, in reality, incredibly complex, combining elements of both quantitative and qualitative research into a single study. The smallest tweaks to a given question may yield vastly different results, or even dramatically decrease a survey’s reliability.

Just as with any research method, a team should only embark on a survey project when they have enough runway to do so with appropriate rigor, which includes sufficient time for the building, running, and analysis of the survey.

Myth 2: Large Samples Are the Only Road to Reliability

This one is painfully familiar to me as a consultant. Here is an example of the type of red-flag-riddled request I have received countless times from potential clients:

“We want you to conduct 10-12 user interviews and 5-7 usability tests for us, so that we can make X design decision. But here’s the thing: I understand qualitative research. I’ve read Jakob Nielsen’s famous article about testing with 5 users. I get it. But my CEO doesn’t trust small numbers. They want statistical significance to feel confident in making a decision. So, to satisfy them, could you also run a survey with 100 people?”

At first glance, this seems reasonable. After all, mixed methods research is a time-tested technique. However, there are two problems with this request:

  1. Frequently, the research questions at the heart of the request do not lend themselves to surveys. For example, they may be qualitative or behavioral questions; forcing them into a survey produces poorly written, open-ended items whose responses are prohibitively costly to analyze and unlikely to yield valid, useful results. And even then, because the results will be based upon qualitative, open-ended questions, the statistical significance the CEO is seeking will continue to elude them. (Statistical significance can be calculated only from answers that can be assigned a numeric value; see the sketch after this list.)
  2. Given that the CEO has already confessed a strong preference for quantitative data and high sample sizes, it is (based on my experience) unlikely that they will truly consider the qual and quant data in concert. Instead, they will toss the valid, valuable data from interviews and usability testing aside, and base their decision solely on the flawed survey data.
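As noted in the first item above, statistical significance can be computed only from answers that carry numeric values. Here is a minimal sketch of that point in Python; the Likert-scale ratings and questions are invented purely for illustration, not real survey data:

```python
# Hypothetical illustration: significance testing needs numeric answers.
# The ratings below are invented 1-5 Likert responses, not real data.
from scipy import stats

# Closed, numeric question: "How easy was it to complete your task?" (1-5)
old_design_ratings = [3, 4, 2, 3, 3, 4, 2, 3, 4, 3]
new_design_ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]

# An independent-samples t-test can tell us whether the difference in mean
# ratings between the two designs is statistically significant.
result = stats.ttest_ind(new_design_ratings, old_design_ratings)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# Open-ended question: "Why did you stop using the feature?"
open_ended_answers = [
    "I couldn't figure out where to start",
    "It didn't do what I expected",
]
# There is no significance test for free-text strings; such answers must first
# be coded (assigned numeric categories) before any such calculation is possible.
```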

If you are faced with this dilemma and unable to convince the CEO of the validity and rigor of qualitative research using the traditional arguments, I recommend seeking out more sound ways of scratching the quant-loving CEO’s itch than surveys. Is there analytics data you can reference, for example? If possible, use triangulation to tell a consistent, cohesive story across your multiple sources of data, so your stakeholder is not tempted to cherry-pick whichever data supports their preexisting assumptions.
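
One concrete way to make the case for small qualitative samples is the problem-discovery model behind the “5 users” article the client cites: Nielsen and Landauer estimate the share of usability problems found by n participants as 1 - (1 - λ)^n, with λ ≈ 0.31 in their original data. A minimal sketch follows; the λ value is their published estimate and your product’s may differ:

```python
# Problem-discovery model (Nielsen & Landauer): the proportion of usability
# problems found by n participants, assuming each participant independently
# uncovers any given problem with probability lam.
LAM = 0.31  # published estimate from the original research; treat as an assumption


def problems_found(n: int, lam: float = LAM) -> float:
    return 1 - (1 - lam) ** n


for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
# Five participants already surface roughly 84% of problems, which is why small
# qualitative samples are a deliberate methodological choice, not a shortcut.
```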

Myth 3: Surveys Avoid the Risk of Annoying or Offending Customers

I continue to be shocked by how many organizations have a rule similar to this one in place:

“You are forbidden from emailing, calling, interviewing, or conducting usability testing with our customers. Our customers are precious to us, and we do not want to risk annoying or accidentally offending them. You are permitted to send them survey invitations.”

It is puzzling to think that an emailed invitation to a survey is somehow more acceptable than an emailed invitation for an interview. And the suggestion that researchers will unintentionally upset a participant while running a user-research session shows a lack of confidence in the researchers' skill and competence.

Senior leadership should trust their hired researchers enough to let them interact with customers in different ways, while taking the necessary precautions to minimize the risk of harm. Researchers need the flexibility of a vast toolkit of methods in order to do their jobs well.

Good Reasons to Run a Survey

So, if all the above reasons for running a survey are bad, when is the right time to run a survey?

Consider the below visualization of user research methods.

Surveys are a quantitative, attitudinal research method.

The chart places 20 methods on a graph depicting 2 axes:

  • Is the research question behavioral (concerned with what people do) or is it attitudinal (concerned with what people say)?
  • Is the research question quantitative (dealing with how many or how much of something) or is it qualitative (dealing with why something occurs or how to fix something)?

Interestingly, surveys are floating in the lower right corner, all on their own. Amongst the typical research methods, surveys are, in fact, the only method that can be categorized as both quantitative and attitudinal.

This is great news for researchers, as it means it should be crystal-clear when an opportunity lends itself to surveys.

Example 1: Tax Filing Platform

Imagine your company has an app that helps users file their taxes. Your stakeholders want to know the percentage of users who, after submitting their tax returns using your software, feel confident that they have done it correctly.

There are only 2 questions you need to ask:

  • Is the question quantitative or qualitative? In this case, the word “percentage” clearly indicates a quantitative research question. We will need a large sample to answer this question confidently.
  • Is the question behavioral or attitudinal? We want to know about “confidence.” Unfortunately, confidence is tricky to objectively observe, and we often instead rely upon self-reported scales of confidence—an attitudinal attribute that can be measured.

Based on our answers to the above questions, no other method is better suited to this research need than a survey.
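
To give a sense of what “a large sample” means for a percentage question like this one, a common back-of-the-envelope calculation is the sample size needed to estimate a proportion within a chosen margin of error. The confidence level and margin below are illustrative assumptions, not requirements:

```python
import math


def sample_size_for_proportion(margin_of_error: float, confidence_z: float = 1.96,
                               expected_proportion: float = 0.5) -> int:
    """Sample size needed to estimate a proportion within +/- margin_of_error.

    Uses the standard formula n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the
    most conservative assumption when the true proportion is unknown.
    """
    p = expected_proportion
    e = margin_of_error
    n = (confidence_z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n)


# About 385 completed responses for a 95% confidence level and a +/- 5% margin.
print(sample_size_for_proportion(0.05))  # 385
# Tightening the margin to +/- 3% roughly triples the requirement.
print(sample_size_for_proportion(0.03))  # 1068
```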

Example 2: Ecommerce

Now, imagine you work for an ecommerce company, and the analytics team has noticed a large number of people abandoning their carts prior to checking out. They want you to figure out why this is happening.

Let’s ask our 2 questions again:

  • Is the question quantitative or qualitative? The word “why” here is a clue. “Why” questions are best answered using qualitative methods.
  • Is the question behavioral or attitudinal? The abandonment of a cart is a behavior, and one that can be observed.

Here, we are left with a few options for qualitative, behavioral methods. Based on our specific context, it is likely that usability testing would fit our needs best.

Conclusion

When utilized appropriately, surveys offer several advantages over other research methods. But when misused, they can produce misleading or invalid data. Protect the validity of your research by using surveys only for research questions that are quantitative in nature and attitudinal in focus.

References

Hall, E. (2019). Just Enough Research. New York, NY: A Book Apart.

Tags

UX research
UI design
UX process
