The User Research Strategist

The Subjectivity of Surveys

A playbook for running surveys that actually inform decisions, including traps, safe tools, and a ready-to-use checklist

Nikki Anderson
Dec 04, 2025
∙ Paid

Hi, I’m Nikki. I run Drop In Research, where I help teams stop launching “meh” and start shipping what customers really need. I write about the conversations that change a roadmap, the questions that shake loose real insight, and the moves that get leadership leaning in. Bring me to your team.

Paid subscribers get the power tools: the UXR Tools Bundle with a full year of four top platforms free, plus all my Substack content, and a bangin’ Slack community where you can ask questions 24/7. Subscribe if you want your work to create change people can feel.


Hello curious human,

I’ve had someone come to me and say “let’s just run a survey on this” almost as many times as I’ve run a survey myself without enough thought or intention.

Surveys are powerful little tools but, unfortunately, they can be seen as an extremely simple way to get quick answers to something. And the reason I say unfortunately is that, a lot of the time, we are expecting way too much from surveys (kinda like personas).

Because they appear so simple on the surface, they can get badly misused and yield misguided results. I remember running surveys without clearly understanding what I was doing and interpreting the results incorrectly, which led to some poor decisions being made.

And it’s not the fault of surveys but how we use them. So, in this article, I will talk through the subjectivity of surveys and how we can use them more appropriately (and also why I’ve started to send fewer surveys in my research projects).

PS. If you want to have a laugh…

The trap of simplicity

Surveys look easy. That’s the problem.

When a stakeholder says, “Let’s just throw it in a survey,” what they really mean is, “I don’t want to slow down and think too hard about this.” And sometimes, if we’re honest, we mean the same thing.

Surveys feel like free money. You spin up a form, blast it to a panel, and within 48 hours you’ve got hundreds of responses neatly laid out in rows and columns. It feels efficient. It feels like research. It feels like progress.

But surveys are the UX equivalent of instant noodles: quick, cheap, and not nearly as nourishing as you pretend they are.

I’ve been guilty of this. I once ran a survey with 20 questions on feature adoption. I asked things like “How often do you use X?” and “How satisfied are you with Y?” I thought I was being thorough. When the responses came in, I did what many of us do: I sorted by the biggest numbers and slapped them into a slide deck.

It looked impressive. Until someone asked me, “So what does this mean for what we build next?”

I froze. I didn’t know. The data was shallow, the questions were vague, and the only real conclusion I could draw was that people clicked boxes when asked.

That’s the trap of simplicity: because surveys are easy to spin up, we stop being critical about whether they’re the right method. We start to believe that any survey = insight. That’s not how this works.

Stop surveys from becoming your lazy default:

  1. Start with the question behind the question. Don’t ask, “What should go in the survey?” Ask, “What do we actually need to know, and can a survey realistically tell us that?” If what you need is why, a survey is already the wrong tool.

  2. Set a higher bar. Pretend you have to defend the survey in front of your toughest stakeholder. If all you can say is, “Well, it’s fast,” you’re not ready. You should be able to say, “We’re asking these three questions because they will help us decide between option A and option B.”

  3. Make surveys the last resort, not the first instinct. Treat surveys like antibiotics: useful when needed, harmful if overprescribed. Ask yourself: could a quick interview, usability test, or even an analytics check answer this faster and better?

A quick self-check template:

Before you send out a single survey link, run it through this filter:

  • What is the decision we’re trying to make?

  • Can a survey actually give us the evidence to make it?

  • What’s the risk if we’re wrong?

  • Have we considered another method that might be stronger?

If you can’t answer these, don’t send the survey.

Expecting Surveys to Do the Wrong Job

Surveys often get hired to do jobs they’re simply not qualified for.

I’ve seen teams use them to try to explain churn, validate a new feature idea, or predict future behavior. On the surface, it makes sense: you have a question, you send it out to hundreds of people, and you get a spreadsheet of answers. Fast, neat, and clean.

Except it rarely gives you what you’re actually looking for.

At one company, we were struggling with churn. Customers were leaving at a higher rate than expected, and leadership wanted answers. The first reaction was, “Let’s run a survey asking people why they left.”

So, we did.

Thousands of churned users received the survey, and the top two responses came back as:

  • “Too expensive”

  • “Didn’t need it anymore”

Leadership latched onto those answers immediately. The conversation turned to pricing experiments. Maybe we should lower costs, try tiering, or roll out discounts.

But then we did follow-up interviews with some of those same customers. And the story changed. Pricing wasn’t really the problem. The real issue was that people didn’t see enough value in the product to justify the price at any level. “Too expensive” was an easy checkbox to click when you didn’t want to type out, “Your product never became essential in my workflow.”

If we had stopped at the survey, the company would have made sweeping pricing changes that wouldn’t have fixed the underlying problem and would probably have hurt revenue even more.

That’s the trap of overpromising: expecting surveys to reveal motivations, predict behavior, or validate desirability. These are jobs surveys cannot do well. They’ll give you surface-level answers, but not the depth you need to make confident decisions.

The mismatch between questions and methods

Some of the most common misuses I see:

  • Validating desirability. Asking “Would you use this feature?” is nearly guaranteed to give you a false positive. People like the idea of a feature in theory. Their actual behavior is another story.

  • Predicting behavior. “How often will you use this in the future?” No one can predict this accurately. People overestimate their future selves and underestimate friction.

  • Explaining motivation. A five-point satisfaction scale tells you nothing about why someone is dissatisfied. It only confirms they are.

A simple way to check if a survey is the right tool

When you’re tempted to launch a survey, pause and ask:

  • Do I need counts? (How many, how often, what proportion?)

  • Do I need context? (Why, how, under what conditions?)

  • Do I need behavior? (What actually happened?)

Here’s the shorthand I use with teams:

  • Surveys → Counts

  • Interviews → Context

  • Analytics → Behavior

It’s not that surveys are “bad.” They just have a very narrow set of jobs they’re good at. The problem comes when we stretch them outside that boundary.

Try this with your own surveys

Think about the last survey your team ran. Open it up, look at the first three questions, and classify them into one of the three buckets above: counts, context, or behavior.

  • If the question belongs in counts → good, keep it.

  • If it belongs in context → wrong tool. That’s an interview.

  • If it belongs in behavior → wrong tool. That’s analytics or logs.

This one exercise helps you spot overpromising quickly. It also gives you a way to push back when someone says, “Let’s just send out a survey.” You’re not saying no to the survey; you’re pointing out the mismatch between the question and the method.

Subjectivity in Design

One of the biggest lies about surveys is that they’re “objective.” Put the same questions in front of hundreds of people, tally up the results, and voilà, you’ve got hard numbers you can trust.

Except every single piece of a survey is subjective.

  • The way the question is phrased

  • The scale you choose

  • The assumptions baked into the wording

  • The order you ask things

All of that nudges people toward certain answers. And those nudges add up fast.

Loaded wording

I once reviewed a survey that asked:

“How helpful was Feature X for you?”

Notice the trap? The question assumes Feature X was helpful in the first place. Someone who didn’t find it helpful still has to pick from a scale that presumes it did something good. They’ll either click the lowest option or skip it, but the framing has already tilted the results.

A stronger, less biased version would be:

“What was your experience with Feature X?”

That leaves the door open for positive, negative, or neutral answers.

Scale design traps

Scales look objective, but they carry a lot of hidden bias.

  • A five-point scale pushes people to the middle.

  • A seven-point scale spreads responses but can be overwhelming.

  • A scale that labels only the end points (“Not at all useful” → “Extremely useful”) leaves the middle up to interpretation.

I’ve seen teams argue for hours about whether to use five, seven, or ten points. The truth is: none of them are neutral. You’re shaping the outcome no matter which one you pick. The key is to choose a scale, stick with it over time, and be clear about how you’ll interpret it.

Leading the witness

It’s surprisingly easy to write a survey that accidentally tells people what you want to hear.

Take this question:

“How much do you agree that Feature Y saves you time?”

That’s a leading question. You’ve already told participants that the feature saves time. Even the “disagree” option is reacting to your statement.

A better version:

“If you use Feature Y, how does it impact the time it takes to complete your task?”

Now people can say it saves time, wastes time, or makes no difference. The data is richer, and you haven’t pushed them in a direction.

How to keep yourself honest

Before you send a survey, do a quick pilot run. Ask three to five people (inside or outside your company) to read the questions and then tell you, in their own words, what they think you’re asking.

If what they say back doesn’t match what you intended, your survey is carrying bias. Rewrite until the intention and the interpretation line up.

I use a quick checklist before finalizing any survey:

  • Does this question assume a positive or negative experience?

  • Is this scale forcing people toward a certain answer?

  • Could two different people interpret this question in completely different ways?

  • Is there any jargon or insider language that someone outside the company wouldn’t get?

Surveys may look like hard numbers, but every decision we make in design turns the dial a little bit. Pretending surveys are neutral only makes the problem worse. A better approach is to admit that subjectivity exists and manage it intentionally.

Subjectivity in Sampling

Even if you write flawless survey questions, the answers still come from people. And who those people are matters more than anything else.

Surveys often get treated like magic spells: send them out, get hundreds of responses, trust the numbers. But those numbers only reflect the people who happened to answer. And if those people don’t represent the group you’re actually trying to understand, the data doesn’t mean much.

The “easy list” problem

At one company, we ran a survey about feature adoption. We wanted to know how new customers were experiencing onboarding. Sounds simple enough.

Except the survey didn’t go to new customers. It went to the email list we already had set up, the same list that included mostly long-term power users.

Guess what happened? The responses were glowing. People loved the onboarding. They said it was clear and simple.

But those responses weren’t from the people struggling with onboarding. They were from the people who had already survived it. By surveying the “easy list,” we ended up patting ourselves on the back instead of fixing real problems.

Convenience samples sneak in everywhere

It’s not just email lists. Panels, social media links, and even intercept surveys on your product tend to pull in whoever is easiest to reach. And “easy to reach” often means a very specific type of user: the vocal ones, the loyal ones, the ones with time on their hands.

If you’re not careful, those groups become your de facto research audience. You start designing for the squeaky wheels instead of the quiet majority.

A quick gut check for representativeness

When I look at a survey, I ask myself three questions before I trust the numbers:

  1. Who did we want to hear from? Be clear on the audience before you send anything. “All customers” is rarely specific enough. Do you need churned users, new users, enterprise accounts, first-time buyers? Define it.

  2. Who actually responded? Look at the demographics, tenure, roles, or usage patterns of your sample. Does it match what you need? Or are you just hearing from the same 20% who always respond?

  3. What’s the gap, and how risky is it? If your target was new customers but 80% of respondents were long-time power users, that’s a big gap. If you were aiming for a mix of roles but only designers responded, that’s another red flag.
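To make that third check concrete, here’s a minimal sketch in Python. The tenure buckets, response counts, and target mix are all invented for illustration (they mirror the “80% long-time power users” scenario above), so swap in whatever segments matter for your study:

```python
from collections import Counter

# Hypothetical respondent data: one tenure label per completed response,
# plus the mix of customers we actually wanted to hear from.
responses = ["12+ months"] * 200 + ["0-3 months"] * 30 + ["3-12 months"] * 20
target_mix = {"0-3 months": 0.40, "3-12 months": 0.30, "12+ months": 0.30}

actual = Counter(responses)
total = sum(actual.values())

print(f"{'Segment':<12} {'target':>8} {'actual':>8}")
for segment, target_share in target_mix.items():
    actual_share = actual[segment] / total
    # Flag any segment that is more than 15 points off the intended mix.
    flag = "  <-- gap" if abs(actual_share - target_share) > 0.15 else ""
    print(f"{segment:<12} {target_share:>8.0%} {actual_share:>8.0%}{flag}")
```

Big gaps don’t automatically invalidate the survey, but they should change how you frame the readout, which is exactly what the next part is about.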

Turning subjectivity into clarity

You can’t always get the perfect sample. That’s the reality of survey research. But you can at least be explicit about who you actually heard from.

Instead of saying, “Customers told us…” in a readout, frame it as:

“We heard from 250 long-term users who have been with the product for 12+ months. Their perspective is valuable, but we’re still missing newer users.”

That simple shift in framing makes the subjectivity visible. It also makes it harder for stakeholders to run off and overgeneralize the results.

Surveys will always be shaped by who shows up. You have to be honest about the lens you’re looking through and make sure your team understands the limitations.

Subjectivity in Interpretation

You’ve crafted careful questions. You’ve checked your sample. You’ve got a tidy spreadsheet of results.

Now comes the most subjective part of all: interpreting the data.

Numbers don’t actually speak for themselves. We speak for them. And that’s where things can go sideways fast.

The story we want to hear

At one company, we ran a customer satisfaction survey after a redesign. The scores were decent: most people hovered around neutral, with a few leaning positive.

When the PM presented the results, she said:

“Look, 60% of people rated the new design positively.”

Technically true. But she left out the fact that nearly 40% were neutral or negative. By spotlighting only the positive slice, the survey became a pat on the back instead of a warning sign.

This isn’t always intentional. We all fall into confirmation bias: noticing the numbers that support our hopes and quietly ignoring the rest. But in stakeholder-heavy environments, that bias can shape entire roadmaps.

The “average” trap

Another common pitfall: reporting averages.

Let’s say your survey asked people to rate ease of use on a 1–7 scale. The average score comes back as 4.5.

On paper, that looks fine. Not stellar, not terrible. Middle of the road.

But if you dig into the distribution, you might find two very different groups:

  • Half of users rated it a 7 (very easy).

  • The other half rated it a 2 (very hard).

Averaging those together hides the fact that you’ve got a split audience: one group sailing through, the other drowning. If you report just the average, you’ll completely miss that divide.
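If it helps to see the arithmetic, here’s a minimal sketch in plain Python with made-up ratings (125 sevens and 125 twos, purely hypothetical) showing how the mean flattens a split audience into a middling score:

```python
from collections import Counter
from statistics import mean

# Hypothetical ease-of-use ratings on a 1-7 scale: half the respondents
# rated it a 7 (very easy), the other half a 2 (very hard).
ratings = [7] * 125 + [2] * 125

print(f"Mean rating: {mean(ratings):.1f}")  # 4.5, which looks "middle of the road"

# The distribution tells the real story: a split audience, not a lukewarm one.
for score, count in sorted(Counter(ratings).items()):
    share = count / len(ratings)
    print(f"{score}: {'#' * round(share * 40)} ({share:.0%})")
```

The mean on its own reads as mediocre; the distribution shows two camps with nobody actually in the middle.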

Numbers without context are dangerous

The temptation is to grab the highest percentage and headline it. “80% of users agree Feature Z is useful.”

But that number only means something if you put it next to context:

  • Who said it?

  • How many skipped the question?

  • What does “useful” actually mean in their workflow?

Without that framing, you’re just throwing big numbers around. And big numbers are seductive to stakeholders who want quick validation.

How to keep interpretation honest

Before you share survey results, run through these steps:

  1. Show the full distribution. Don’t just report averages. Show the range of responses so splits and outliers are visible.

  2. Pair numbers with open comments. If you asked an open-ended follow-up, bring those voices in. They keep the numbers grounded in reality.

  3. Acknowledge uncertainty. Frame your insights as “This suggests…” instead of “This proves…” Surveys are directional, not definitive.

  4. Triangulate with other data. Always check: does this align with what we’re hearing in interviews or seeing in analytics? If not, call out the tension instead of smoothing it over.

Surveys give you patterns. It’s our job to interpret those patterns responsibly and admit where the limits are. Otherwise, we risk turning numbers into a comforting story instead of a useful one.

When Surveys Are the Wrong Tool

Keep reading with a 7-day free trial

Subscribe to The User Research Strategist to keep reading this post and get 7 days of free access to the full post archives.
