The User Research Strategist

Studies without the meltdown

How To Run a Qualitative Usability Test

Asking Questions that Get You Good Data

Nikki Anderson
Apr 02, 2026

Hi, I’m Nikki. I run Drop In Research, where I help teams stop launching “meh” and start shipping what customers really need. I write about the conversations that change a roadmap, the questions that shake loose real insight, and the moves that get leadership leaning in. Bring me to your team.

Paid subscribers get the power tools: the UXR Tools Bundle with a full year of four top platforms free, plus all my Substack content, and a bangin’ Slack community where you can ask questions 24/7. Subscribe if you want your work to create change people can feel.


For me, quantitative usability testing was always super straightforward. I put a high-fidelity design or live product in front of someone and asked them to complete certain tasks, which I then measured through metrics like task success and time on task, plus post-task instruments like the Single Ease Question (SEQ).

There was very little room for open-ended qualitative questions, or for introducing bias. We were there to truly understand the effectiveness, efficiency, and satisfaction of what we put in front of participants. Straightforward. Easy, dare I say. I could even run the test unmoderated to reach more participants.
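If you log each session in a simple table, those metrics fall out of a few lines of code. Here’s a minimal sketch in Python, assuming a hypothetical per-participant record shape; the field names are mine, not from any testing platform:

```python
from statistics import mean

# Hypothetical per-participant records for one task (illustrative data only).
sessions = [
    {"participant": "P1", "succeeded": True,  "seconds": 48, "seq": 6},
    {"participant": "P2", "succeeded": False, "seconds": 95, "seq": 3},
    {"participant": "P3", "succeeded": True,  "seconds": 62, "seq": 5},
]

# Task success: share of participants who completed the task.
success_rate = mean(1 if s["succeeded"] else 0 for s in sessions)

# Time on task: average duration, counting successful attempts only.
time_on_task = mean(s["seconds"] for s in sessions if s["succeeded"])

# Single Ease Question: mean of the 1-7 post-task ease rating.
seq_score = mean(s["seq"] for s in sessions)

print(f"Success: {success_rate:.0%} | Time on task: {time_on_task:.0f}s | SEQ: {seq_score:.1f}/7")
```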

However, I felt uncomfortable when it came to qualitative usability testing. I never seemed to be able to strike the right balance and constantly felt like I was asking leading and biased questions. I hated the standard questions like:

  • “What would you expect to see?”

  • “What do you think of this?”

  • “What would you change?”

  • “Explore the interface and tell me what you would do.”

I hated those questions because they were so hypothetical and future-based. I felt like I was asking the participant to develop ideas and design the website or app. The data I got from those questions was skewed and unhelpful.

As a user, I rarely (if ever) sit on a website thinking, “What am I expecting to see?” I can’t remember the last time I went to a website or app just to explore the interface. And while I sometimes do have opinions on websites and apps, those opinions likely wouldn’t be helpful or actionable to a team trying to make changes.

“This is dumb” is not a very actionable quote.


Below, I walk you through the full approach to make qualitative usability testing stop producing fluffy, hypothetical feedback and start giving your team clear direction on what to fix, what’s missing, and what’s confusing:

  • The “what qualitative usability testing really is” definition (why it’s not true usability testing, and where it sits between concept testing and quant usability testing)

  • The goal checklist that tells you when this method fits (the exact types of decisions and uncertainty it’s built for)

  • The TEDW-based question builder (Tell me about, Explain, Describe, Walk me through: how to replace “what do you think?” with prompts that pull real experiences, perceptions, and friction without leading)

  • The session structure you can reuse (warm-up, overarching scenario, screen-by-screen flow, and wrap-up questions that don’t turn into opinions-only noise)

  • The synthesis + activation workflow (deductive tags, affinity by screen, clustering patterns, and the built-in handoff into an ideation workshop so the work actually moves; a quick sketch of the tagging step follows this list)
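To make the tagging-and-affinity step concrete: if every observation is logged with the screen it came from and one tag from a fixed, deductive codebook, grouping it by screen is mechanical. A minimal sketch in Python, where the tag set and data shape are my illustrative assumptions, not the workflow’s actual scheme:

```python
from collections import defaultdict

TAGS = {"friction", "confusion", "missing", "works-well"}  # deductive codebook

# Hypothetical tagged observations from one session (illustrative data only).
observations = [
    {"screen": "checkout", "tag": "friction",  "quote": "I didn't see the promo field."},
    {"screen": "checkout", "tag": "confusion", "quote": "Is this the final price?"},
    {"screen": "search",   "tag": "missing",   "quote": "I usually filter by brand here."},
]

# Affinity by screen: cluster quotes per screen, then per tag.
by_screen = defaultdict(lambda: defaultdict(list))
for obs in observations:
    assert obs["tag"] in TAGS, f"off-codebook tag: {obs['tag']}"
    by_screen[obs["screen"]][obs["tag"]].append(obs["quote"])

for screen, tags in by_screen.items():
    print(screen)
    for tag, quotes in tags.items():
        print(f"  {tag} ({len(quotes)}): {quotes}")
```

The patterns that surface here (say, repeated “confusion” tags on the checkout screen) are what feed the ideation workshop.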

Exclusively for paid subscribers
