Rewriting and prioritizing user research questions
Your stakeholders have 99 questions, how to prioritize them ain't one
Hi, I’m Nikki. I run Drop In Research, where I help teams stop launching “meh” and start shipping what customers really need. I write about the conversations that change a roadmap, the questions that shake loose real insight, and the moves that get leadership leaning in. Bring me to your team.
Paid subscribers get the power tools: the UXR Tools Bundle with a full year of four top platforms free, plus all my Substack content, and a bangin’ Slack community where you can ask questions 24/7. Subscribe if you want your work to create change people can feel.
I remember a time when stakeholders started to get excited about user research. It was an interesting switch for me — I went from constantly checking in to identify user research projects within teams to colleagues coming to me with research project ideas in hand.
It👏🏻was👏🏻awesome👏🏻
I felt like research was exploding. I felt like I finally had a say. I felt like I finally had power.
But “with great power comes great responsibility” (Source: Uncle Ben + others).
And I quickly realized that these research projects, as exciting as they were, left me feeling extremely overwhelmed. It wasn’t necessarily the number of projects (that would come later) but rather the number of questions people had within each one.
The first time I encountered this was at my job at a social media management company. One of my stakeholders had an idea in mind for a concept they wanted to test. We had heard several times within previous research that the analytics on our platform were not aligning with users’ expectations and needs. In fact, they fell short in several key areas.
Some of the key pain points highlighted from previous research included:
We did not provide sufficient engagement analytics for our clients, inhibiting them from making data-driven decisions
Many clients were asking account managers for manual reports because our platform didn’t provide sufficient metrics to let them compare data
Our current metrics had little context and weren’t reliable enough for our clients to base decisions on
These were some pretty big flaws in our platform, leaving analytics an underutilized feature and, ultimately, creating more work for customers and our account managers.
So, with that in mind, my stakeholder came to me with a concept based on this previous research. I was thrilled. Not only had they listened to previous research, but they had used it as a jumping-off point for a concept! Hurrah!
And then I looked at the list of questions this stakeholder had that they wanted answered within the research project:
Do people understand the concept?
Do people like the concept?
Do people perceive our recommendations as trustworthy?
What types of comparison timelines do people prefer when it comes to analytics?
What kind of engagement metrics are most important to see?
How do people perceive the difference between engagement and interaction metrics?
Can people use the concept?
Would they like to use the concept to try it out?
Are people annoyed when they have to open a new window to compare data?
Is it clear how people navigate through the concept to get a monthly report?
😱😱😱😱😱😱😱
Not only were these a whole lotta questions, but they were also a lot of unideal-for-qualitative-user-research kinds of questions. There was no way we could answer the majority of these questions in a 60-minute concept test, let alone all of them.
I went back to the stakeholder, terrified that I would disappoint them. I had just started the research ball rolling, and the last thing I wanted to do was say no to a research project or tell them that I couldn’t answer these types of questions.
Since I was still early on in my career, I had a tough time rewriting and narrowing down the scope of the questions. We went into the concept test with way too many yes/no and preference questions to answer.
This was one of the first projects that had landed on my desk from a stakeholder, and the results were a bit disappointing. Because the small sample sizes in qualitative user research aren’t suited to answering yes/no questions (all the “do” and “are” questions), the research didn’t have much impact.
Saying, “8 out of 12 people understood the concept,” was not powerful.
Similarly, saying “7 out of 12 people liked the concept” did not tell us anything. Many of the stares I got during my report said, “So what?” or “What now?”
I was gutted (my new British slang). That wasn’t the first or the last time I received a research project with a slew of questions that were either impossible to answer with user research or were way too broad in scope.
If you’ve ever been handed a 20-question laundry list and told, “Can you just test this?”, the next part is your escape. Paid subscribers get the full system I use to:
Turn “do/can/which/are” questions into interview questions that actually produce stories
Separate what research can answer vs what needs A/B testing, analytics, or a survey
Cut scope fast without disappointing the stakeholder
Run a clean prioritization meeting that ends with a short, usable question set
Use the upgraded spreadsheet (columns, definitions, scoring) to make decisions in real time
Spot repeated questions across teams and turn them into strategic research themes
Walk through the full example (your analytics concept test) with a finished sheet you can copy
Exclusively for paid subscribers