Hi, I’m Nikki. I run Drop In Research, where I help teams stop launching “meh” and start shipping what customers really need. I write about the conversations that change a roadmap, the questions that shake loose real insight, and the moves that get leadership leaning in. Bring me to your team.
Paid subscribers get the power tools: the UXR Tools Bundle with a full year of four top platforms free, plus all my Substack content, and a bangin’ Slack community where you can ask questions 24/7. Subscribe if you want your work to create change people can feel.
Most researchers I know have the same story. The same raised eyebrow. The same deep sigh before they open another LLM convo and brace for disappointment.
The first time I tried to use AI for something “simple,” I remember asking it to help me dissect a stakeholder brief. I pasted the vague, chaotic message, something along the lines of “We need to test the new dashboard design before Friday; can you put something together?” and waited for help.
What I got back looked like someone had skimmed a UX blog from 2012 and stitched together a few polite sentences. It read like a student trying to impress their professor without actually doing the assignment. No context. No understanding of the politics behind the request. No reading between the lines. Definitely no sense of the real decision hiding underneath the pretty words.
The problem is that we’ve been throwing AI at UXR tasks with the same energy we bring to reheating leftover lunch. Fast. Distracted. Half-formed prompts that barely capture what we actually need. Then we blame the AI when it hands back something flat, vague, or straight-up wrong.
Most UXRs are stuck in this loop:
You give AI a tiny prompt.
It gives you a tiny answer.
You rewrite everything yourself.
You decide AI isn’t ready.
Then you go back to doing everything the slow way.
But, at the same time, you really have been burned.
You’ve tried using AI to:
Clean up messy notes
Summarize a long research plan
Rephrase an insight for an exec
Draft a kickoff email
Clarify a brief written by a PM who sprinted through it between meetings
And the output felt like it came from someone who wasn’t in the room with you.
I’ve spoken to senior UXRs in fintech, SaaS, marketplaces, and health tech. These are people who run teams, shape roadmaps, and handle cross-functional chaos every day, and every single one of them says something like:
“I can see the potential…but I don’t trust it.”
Not because AI is bad.
But because nobody taught UXRs how to use AI in a way that respects the complexity of our work. We didn’t get training.
We’re self-teaching in the middle of deadlines. We’re experimenting with prompts in between interviews. We’re trying to make sense of output that feels helpful one minute and deeply misguided the next.
AI becomes incredibly powerful for researchers once you give it the kind of direction your craft already relies on: precision, context, constraints, intention, and the decision you’re supporting.
The magic doesn’t come from the model. The magic comes from your brain, paired with a structure that helps the AI act like a competent partner instead of an overeager intern.
Most researchers give up after a few half-hearted prompts. You ask something generic. It spits out something shallow. You move on. It’s not that AI can’t help you think better. It can, but only if you know how to push it.
Now we’re going to walk through the entire research process, from messy stakeholder kickoff to crisp, confident insights, and turn AI into the kind of co-pilot you’ve wished for since your first week as a researcher.
Why Pancake Prompts Fall Flat
If you’ve ever asked AI for help and felt mildly offended by the output, you’re not alone. Most researchers start with tiny prompts, get tiny answers, and assume the model just isn’t good enough. It’s the same energy as handing someone a sticky note that says “write the whole report for Monday?” and expecting them to read your mind, decode your org politics, and magically land on something useful.
The problem isn’t the AI. The problem is the prompt.
I know that sounds like the kind of patronizing advice thrown around LinkedIn, but stay with me. I spent months testing how UXRs actually prompt AI across dozens of real projects (interviews, surveys, strategy sessions, prototype tests, you name it), and most of the prompts UXRs write fall into the same patterns:
1. The “do everything for me” prompt
Example: “Write a usability test.”
What the AI hears: “Guess wildly.”
2. The “here’s a crumb, bake a cake” prompt
Example: “Help me write a kickoff doc.”
What the AI hears: “Please hallucinate intentions for me.”
3. The “I’ll tell you the task but not the stakes” prompt
Example: “Suggest tasks for a survey.”
What the AI hears: “Throw generic content at the wall.”
When you feed AI a prompt that thin, you get output that reads like the UX equivalent of a cookbook written by someone who has never eaten food, lacking any awareness of real-world messiness.
AI has no idea what you actually care about unless you tell it.
And UX research is built on a whole lot of context:
Why the team wants this research
What decision sits behind the request
Who’s pushing for speed
What’s riding on the outcome
What happened last time someone skipped research
Who will use the insights
What the constraints look like
Which trade-offs matter
What business metric is at stake
How much is already known
What’s being assumed without evidence
When your prompt doesn’t include these pieces, you’re asking an AI model to work blindfolded. I started experimenting with a completely different approach: stop treating AI like a vending machine, and start treating it like a very fast, very literal junior researcher who needs a real brief.
This is where the FAST model came from. A simple four-part structure that upgrades almost any prompt instantly.
Below, I walk you through the exact system that turns AI from “overeager intern” into a reliable research co-pilot:
The FAST model (the 4-part prompt structure that fixes pancake prompts instantly)
Before/after examples that show what “good” looks like
Copy-paste prompts for kickoff, decision-mapping, risk surfacing, and assumption-breaking
Mid-study checkpoint prompts to stop projects drifting off a cliff
Synthesis guardrails so you get support without handing over judgment or raw data
Exclusively for paid subscribers.
The FAST Model
Focus: What do you actually need help with? Not the task label. The actual need. Example: not “write a usability test,” but “help me identify risky friction points in this pricing page redesign.”
Audience: Who should the AI act like? This alone changes output quality dramatically. Example: “Act like a product-strategy UXR working in a SaaS growth team.”
Situation: What’s the product and business context? This is the missing piece in almost every UXR prompt. Example: “We’re redesigning the pricing page because conversions dipped 14% after the last release. We need clarity on what stopped users from upgrading.”
Target Output: What do you want the AI to produce? Not “help me with this.” Give it the format, the level, the framing. Example: “Give me three task scenarios, two follow-up questions per task, and one think-aloud instruction, all written for remote, unmoderated testing.”
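If you find yourself reusing the FAST structure across projects, you can even template it. Here’s a minimal sketch in Python; the helper name, parameter names, and exact sentence glue are my own invention, not an official tool, so adapt the wording to your own voice:

```python
# Illustrative sketch: assemble a FAST prompt from its four parts.
# The function and parameter names are made up for this example.

def fast_prompt(focus: str, audience: str, situation: str, target_output: str) -> str:
    """Combine Focus, Audience, Situation, and Target Output into one prompt."""
    return (
        f"Act like {audience}. "          # Audience: who the AI should be
        f"{situation} "                    # Situation: product/business context
        f"I need help to {focus}. "        # Focus: the actual need, not the task label
        f"Give me: {target_output}"        # Target Output: format, level, framing
    )

prompt = fast_prompt(
    focus="identify risky friction points in this pricing page redesign",
    audience="a product-strategy UXR working in a SaaS growth team",
    situation=(
        "We're redesigning the pricing page because conversions dipped 14% "
        "after the last release."
    ),
    target_output=(
        "three task scenarios, two follow-up questions per task, and one "
        "think-aloud instruction, all written for remote, unmoderated testing."
    ),
)
print(prompt)
```

Even if you never script it, the point stands: a FAST prompt is just four fields filled in deliberately instead of one vague sentence typed in a hurry.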
How FAST Transforms a Useless Prompt
Let me show you the difference with an onboarding example.
The Pancake Prompt
“Can you help me write a research plan?”
No wonder the AI panicked.
The FAST Prompt
“Act like a senior UX researcher. I’m scoping a lean study for a new onboarding flow. The team is nervous because sign-up completion dropped by 28% after a recent change. We need to understand where friction appears, what triggers hesitation, and which parts create unnecessary cognitive load. Give me:
A narrowed problem statement
Three research goals tied to conversion
A lean method that fits a 5-day timeline
A list of missing assumptions we should pressure-test
Three questions I can ask the PM to clarify the real decision this work supports”
You feel the difference immediately. One is a shrug. The other is a strategic brief.
A Quick Before/After
Bad Prompt
“Help me write a survey.”
FAST Prompt
“Act like a research strategist experienced in B2B SaaS. I’m designing a survey for customers who downgraded or cancelled in the past 90 days. Our product team needs to understand which friction points contribute most to churn. Provide:
A problem statement focused on value perception
A list of measurable behavioral indicators we can test
Five survey questions written for accuracy, not politeness
Red flags that would signal deeper qualitative follow-up
Two blind spots the PM might be missing”
This version doesn’t just ask for questions. It asks for thinking, and thinking is the part researchers shouldn’t have to do alone when AI can support it reliably.
Everything that comes next in this article, from scoping, to method selection, to synthesis, to insight writing, relies on one foundational skill: Teaching AI how to think with you, not at you.
FAST is the starting point. It’s the way you get past pancake prompts. It’s how you unlock actual partnership.
Let’s dive in.
Framing, Alignment, and Dragging the Real Decision Into the Light
Most UXRs don’t realize that the beginning of a project is where AI is the strongest, not at the end, not during synthesis, not when you’re drowning in insights.
This is where teams hide their biggest fears, where requests arrive half-baked, where assumptions snowball into scope creep, and where you get blamed later for problems you didn’t create.
So the prompts in this section aren’t for “getting help,” but for giving yourself x-ray vision.
You can copy and paste the following prompts into your LLM of choice.
Prompt 1: Decision-Mapping Prompt
When to use it
Use this the moment a stakeholder sends you something like:
Can you look into this?
We need research before launch
Quick question about the new flow…
Any request where you can feel the tension but no one is naming the decision underneath.
Most research fails not because of poor execution, but because no one ever pins down exactly what decision it is supposed to support. This prompt drags that decision out of the shadows and shows you what problem you’re actually solving.
Prompt: Decision Mapping + Risk Surfacing + Alignment Summary
“I’m kicking off research for: [insert stakeholder request verbatim].
Help me uncover the real decision behind this request by walking through the following steps:
1. Extract the surface request
What does the stakeholder explicitly say they want?
2. Identify possible underlying decisions
List 3-5 potential decisions this research might influence.
Include strategic decisions (long-term), tactical decisions (short-term), and political decisions (stakeholder confidence, team reputation).
3. Map the stakes
For each potential decision:
What might happen if the team moves forward without research?
What might happen if they choose the wrong path?
Which metrics or teams carry the most risk?
4. Identify blind spots
What context is missing from the stakeholder message?
What assumptions seem baked in?
5. Output a one-sentence alignment summary
Write a direct sentence that captures the real purpose of this research.
6. Suggest clarifying questions
Give me 5-7 questions I can ask to confirm alignment, scope, risk, and expectations.”
Follow-up prompts
Rewrite the alignment summary for an exec, a PM, and a senior designer
What political tensions might be shaping this request?
What evidence would help the team make a confident decision?
Prompt 2: Risk-Mapping Prompt
When to use it
Use this when:
Someone proposes a change that touches conversion, billing, onboarding, or anything revenue-adjacent.
A team is rushing toward a release.
A stakeholder says, “It’s a small change, we don’t need research.”
How to get the best output
Be extremely clear about the product decision. Don’t write “new settings page redesign.” Write “removing the confirmation step from the billing process.”
Prompt: Risk Map + Research Questions + Impact Chain
“I’m exploring a product decision: [insert decision].
Help me create a clear risk map:
1. List overlooked risks
Provide 6-8 risks across user behavior, product experience, support load, technical constraints, trust, and business impact.
2. Tie each risk to an insight gap
For every risk, list the missing knowledge that makes this risky.
3. Create research questions
Turn the insight gaps into research questions written in natural language.
4. Map the product or business impact
For every risk:
Which metric might shift?
Which team would carry the impact?
What early signals would reveal the risk in the wild?
5. Prioritize
Rank the risks by impact and urgency with a short justification for each.”
Follow-up prompts
Turn the top risks into testable hypotheses
Draft a 5-bullet stakeholder slide summarizing these risks
Recommend the fastest research plan for the top two risks
Prompt 3: Assumption Breaker Prompt
When to use it
Perfect when someone confidently announces:
Users prefer X
People always do Y
Everyone hates Z
Or when an exec repeats a belief that has never lived outside their head. Assumptions drive product decisions more than data does. You need a clean way to surface them without escalating tension.
Prompt: Assumption Analysis + Hypothesis Rewrite + Validation Plan
“The team holds this assumption: [paste assumption].
Help me break it down:
1. Analyze the assumption
What belief, fear, or past experience might be influencing this assumption?
What makes this belief unreliable?
2. Articulate risks
What failures or setbacks could happen if the assumption is wrong?
Which user behaviors or business outcomes could be impacted?
3. Reframe into hypotheses
Write 3-5 testable hypotheses that describe observable user behavior.
4. Create lean research questions
Draft 3 questions that would help evaluate or challenge the hypotheses.
5. Suggest next steps
Provide 3 lean testing approaches suitable for tight timelines or limited access.”
Follow-up prompts
Turn these hypotheses into a 5-user micro-study plan
Draft a Slack message to the PM demonstrating why we need evidence
Prompt 4: Vague Request → Actionable Brief Prompt
This next part is for paid subscribers (thank you to all my paid subscribers for your support — you allow me to keep writing this kind of content!)
The rest of the article goes into the complete method: the framing system, the advanced prompts, the research workflows, and the tools that help you use AI with the same precision you bring to your own craft. If you want the full system, keep reading.
When to use it
Any time a stakeholder gives you:
A request so vague you feel like you’re solving a riddle
A request driven by panic, not clarity
A “quick research?” message that actually requires strategy
Prompt: Stakeholder Request → Strategic Brief
“I received this stakeholder request: [paste message].
Convert this into a clear, actionable brief:
1. Extract intent
What is the stakeholder trying to understand or achieve?
2. Identify urgency
What pressures might be shaping their request?
3. Map assumptions
List assumptions hidden in the message.
4. Translate into research goals
Write 2-4 user-centered goals tied to real behaviors or perceptions.
5. Draft clarifying questions
Give 5-7 questions to confirm alignment and scope.
6. Output a project summary
Write a one-paragraph brief I can send back to stakeholders.”
Follow-up prompts
Rewrite this as a concise email
Rewrite for a designer
Highlight where stakeholder goals conflict with each other
Prompt 5: Business Goal → Research Goal Prompt
When to use it
Use this when the business goal is clear, but the user angle is not. For example:
Increase activation
Reduce churn
Lift free-to-paid
It’s great for aligning research with leadership expectations because many UXRs struggle to convert business goals into human-centered goals without watering them down. This prompt creates the bridge.
Prompt: Business Goal → User-Centered Research Direction
“The business goal is: [insert goal].
Help me translate this into a research direction:
1. Identify the user problem
What user behaviors, beliefs, or barriers might connect to the business goal?
2. Generate hypotheses
List 5-7 hypotheses about what might be blocking users.
3. Create research goals
Turn the business goal + hypotheses into 3-4 user-centered research goals.
4. Connect research to outcomes
For each goal, specify:
Which outcome it ties to
Which metric might shift
Who depends on this insight
5. Write an alignment statement
Create a one-sentence statement that links user needs to the business goal.”
Follow-up prompts
Turn this into a kickoff slide
Draft a short alignment message to the PM
Prompt 6: Lean Research Approach Prompt
When to use it
Perfect for situations where you need research, but:
timeline is brutal
design isn’t ready
team is anxious
scope is too big
PM wants answers “this week”
How to get the best output
Explain constraints clearly.
Don’t just say “tight timeline.”
Say “5 days, no prototype, limited access to users.”
Prompt: Lean Method Builder
“We have limited time and resources. Our focus is: [insert goal].
Help me build a lean yet reliable research approach:
1. Assess constraints
List the constraints that matter most.
2. Recommend methods
Propose 2 lean approaches with short explanations for why they fit.
3. Outline trade-offs
For each method:
What we gain
What we lose
What we must watch out for
4. Draft a mini-plan
Include:
Goals
Method
Participants
Timeline
Deliverables
Decisions this plan supports”
Follow-up prompts
Turn this into a Slack update for the team
Draft a script for the lean sessions
How to Avoid the “Test the Thing” Trap
You know this dance. A PM pings you:
Can we test the new onboarding?
And your first thought is:
Test… what exactly? And for whom? And for what decision? And is this even ready? And why do I feel like I’m walking into a trap?
Most researchers don’t struggle with method knowledge. They struggle with:
messy requests
unclear goals
rushed timelines
mismatched expectations
teams who want proof, not insight
and the pressure to deliver something quickly
The following prompts are the strategic scaffolding you need when you’re under pressure and everyone else is guessing.
Prompt 1: The “Turn This Mess Into a Scope” Prompt
When to use it
Any time someone asks for:
a test
a quick study
validation
feedback
or any phrasing that makes your stomach tighten because it sounds like a task with zero definition.
This prompt stops you from reacting to the shape of the request and helps you uncover the logic behind it. It forces AI to behave like a senior partner who can deconstruct the ask from every angle before you commit to a scope.
Prompt: From Vague Request → Precise Scope
“I received this stakeholder request: [paste message verbatim].
Help me turn this into a clear, focused scope by walking through the following:
1. Extract intent
What is the person trying to learn, understand, or fix?
2. Identify the real decision
What decision does this request actually support?
List 3 possible decisions if unclear.
3. Define the insight boundaries
Based on the decision, what belongs inside the research scope?
What belongs outside?
4. Translate into research goals
Rewrite the request into 2-3 sharp, user-centered research goals tied to behaviors, perceptions, or decisions.
5. Clarifying questions
Draft 5-7 questions I can ask the stakeholder to confirm alignment, scope, expectations, and decision paths.
6. Draft a scope summary
Produce a concise paragraph that I can send back to confirm what the research will cover, who it serves, and which decision it supports.”
Follow-ups
Rewrite the scope summary in Slack format
Highlight any hidden tensions or conflicts in the stakeholder’s request
Create a version of the scope summary tailored for design vs product
Prompt 2: The “Choose the Right Method for the Decision” Prompt
When to use it
When a team:
expects a method before they’ve clarified the problem
wants usability testing but the decision is strategic
wants a survey but the question is exploratory
and when you know the wrong method will sabotage the whole thing.
Prompt: Decision → Method Recommendation
“We’re making a decision about: [insert decision].
Help me select the right research method by completing the steps below:
1. Identify what evidence the team needs
Describe the user behaviors, perceptions, or signals required to support or challenge this decision.
2. Map evidence → method
For each evidence need, list the research methods that can uncover it, and explain the fit.
3. Recommend the best method
Propose the method that aligns most closely with the decision and the risk level, with a short explanation of why this method works.
4. Define limitations
What limitations come with this method?
Where do I need to be cautious?
5. Draft a justification
Write a short explanation of this method choice that I can use with PMs and designers.”
Follow-ups
Rewrite the justification in a way that addresses exec concerns
Give me a hybrid approach if we need something faster
Prompt 3: The “Prioritize What Actually Matters” Prompt
When to use it
When stakeholders hand you more goals than you can cover
When everyone says every question is urgent
When you feel pressure to include everything
When scope creep is lurking
Prompt: Prioritize Research Goals Based on Value + Risk
“Here are the initial research goals: [paste list].
Help me prioritize them based on what will deliver the most value.
1. Map each goal to:
The decision it supports
The metric or outcome that might shift
The stakeholder who depends on it
2. Rank by business value
Sort goals by impact, urgency, and usefulness for decision-making.
3. Rewrite the top goals
Rewrite the top 1-2 goals so they’re sharp, outcome-focused, and realistic within our timeline.
4. Suggest what to drop
Identify which goals should be paused, removed, or reframed.
5. Draft a stakeholder explanation
Write a short message that explains this prioritization clearly and confidently.”
Follow-ups
Rewrite the justification for a PM who tends to push back
Highlight any assumptions behind lower-priority goals
Prompt 4: The “Lean Plan With Rigor” Prompt
When to use it
When you need research fast, but still need to defend quality
When someone wants answers before Friday
When you have no prototype, limited access to users, or half the team is out
How to get the best output
List your constraints clearly: timelines, resources, participant access, design readiness, political sensitivity.
Prompt: Build a Lean Research Plan That Still Holds Up
“We need a lean research approach. Our goal is: [insert goal].
Here are our constraints: [list constraints].
Help me create a lean but reliable plan:
1. Diagnose the constraints
Describe how each constraint shapes what’s possible.
2. Recommend lean methods
Propose 2 lean approaches that can still reveal real insight.
Explain why each fits.
3. Outline trade-offs
For each method:
What we gain
What we lose
Where we need to be careful
4. Draft a mini-plan
Write a brief plan that includes:
Goals
Method
Participants
Timeline
Deliverables
The decision this supports
5. Draft a stakeholder explanation
Write a short message that helps the team understand why this is the right approach given the constraints.”
Follow-ups
Turn the mini-plan into a Slack update
Draft a script for the lean sessions
Highlight anything that still feels risky before we start
Mid-Project Checkpoints
Every experienced researcher knows the midpoint of a project is where things get wobbly. You’ve run a few sessions, seen early patterns, and you’re spotting gaps in the plan.
A stakeholder is suddenly pushing for faster answers, a designer has “improved” the prototype mid-study, and someone wants to add new questions halfway through.
You’re thinking:
I need to recalibrate before this veers off a cliff.
Mid-project checkpoints rarely get talked about because they happen quietly, often at 10pm or in between meetings. This is where AI shines: helping you surface blind spots, pressure-test interpretations, and make your reasoning clearer and sharper.
Prompt 1: The Mid-Study Insight QA Prompt
When to use it
Use this when you’re halfway through sessions and starting to see patterns — but you’re worried you’re jumping to conclusions, missing nuance, or interpreting behavior too quickly. Mid-study overconfidence is one of the easiest ways to steer a project into the ditch. This prompt acts as a brake — slowing you down just enough to sanity-check your thinking.
Prompt: Insight QA + Risk Check + Missing Evidence Scan
“I’ve run [X] sessions in a study focused on [topic].
Here are the early patterns I’m noticing:
[paste raw insight bullets].
Help me sanity-check these emerging insights:
1. Identify possible misinterpretations
Where might I be overinterpreting, oversimplifying, or assigning intent to users without enough data?
2. Spot missing evidence
List the user behaviors, quotes, patterns, or data points we haven’t gathered yet that are needed to validate or challenge these insights.
3. Pressure-test alternative explanations
Suggest 3-5 alternative explanations for the patterns I’m seeing.
4. Identify risks of premature conclusions
What risks arise if the team acts on these early patterns too quickly?
5. Suggest adjustments
Recommend adjustments for the next round of sessions to strengthen evidence quality.”
Follow-ups
Draft a short internal update summarizing what’s still uncertain
Suggest which participants we should target next to fill the gaps
Show where bias might be creeping in
Prompt 2: The Hypothesis Stress-Test Prompt
When to use it
Use this once you’ve formed hypotheses based on early evidence and need to check whether your logic holds up before presenting anything. Hypotheses formed mid-study can be brilliant or wildly off-base. You need a way to test the durability of your thinking before it reaches the team.
Prompt: Hypothesis Pressure-Test + Gaps + Failure Modes
“These are the working hypotheses I’m forming based on early sessions:
[paste hypotheses].
Help me stress-test them:
1. Identify weak points
Where does each hypothesis feel fragile or under-supported?
2. Identify what data would strengthen or weaken each hypothesis
List the behaviors, quotes, or evidence needed for validation.
3. List alternative hypotheses
Suggest 3-5 rival hypotheses that could also explain the patterns.
4. Identify failure modes
If my hypothesis is wrong, what would the real pattern likely be?
5. Recommend next steps
Suggest what I should prioritize in upcoming sessions to validate or challenge these hypotheses.”
Follow-ups
Rewrite each hypothesis to be more testable
Highlight where I might be anchoring too early
Suggest questions to add to the next few sessions to pressure-test this thinking
Prompt 3: The Friction Spotting Prompt
When to use it
Use this when you’re reviewing session recordings or notes and something feels off: a moment where users hesitate, struggle, or expect something else. This prompt helps you articulate the friction with precision.
Prompt: Friction Breakdown + Root Causes + Impact Chain
“I’m reviewing sessions for a study on [topic] and noticed these friction points:
[paste friction descriptions, timestamps, or rough notes].
Help me break these down:
1. Categorize the friction
Classify each friction point as cognitive, emotional, navigational, trust-related, value-related, expectation mismatch, or effort-related.
2. Identify root causes
For each friction point, suggest what might be causing it based on the patterns so far.
3. Map user impact
For every friction point, describe how it might alter user behavior, motivation, trust, or progress.
4. Identify product impact
Which outcomes, flows, or metrics might shift due to these friction points?
5. Suggest follow-up investigation
Provide questions, probes, or tasks to explore these friction points in the next sessions.”
Follow-ups
Summarize the top friction points in one slide for stakeholders
Identify frictions with the highest impact on conversion or retention
Prompt 4: The Mid-Flight Course Correction Prompt
When to use it
When something has shifted: the prototype changed mid-study, the team added new questions, user behavior is surprising, or external context changed.
Prompt: Adjust the Plan + Protect Quality + Reset Expectations
“We’re in the middle of a study on [topic].
This is what has changed:
[describe change].
This is why I’m concerned:
[describe concern].
Help me adjust our approach:
1. Identify how this change affects the study
Which parts of the plan are now at risk?
Where could insight quality drop?
2. Recommend adjustments
Propose concrete changes to the plan that preserve insight quality.
3. Identify what to deprioritize
Which parts of the original plan should be paused or removed?
4. Recommend next-session focus
What should I probe more deeply, add, or clarify in upcoming sessions?
5. Draft a stakeholder update
Write a concise update explaining the shift, the adjusted plan, and how we’re protecting the quality of the findings.”
Follow-ups
Rewrite the update in a more direct, executive-ready tone
Highlight risks of not adjusting the plan
Suggest how to communicate these changes without sounding defensive
Synthesis With AI
Let me make one thing super clear about how I pair UXR synthesis with AI: AI is not a synthesizer. It’s not a researcher. It doesn’t understand users, motivations, emotion, context, or trade-offs, and it doesn’t know when a quote contradicts a pattern.
It doesn’t know when a participant is hedging, bluffing, guessing, or telling you what they think you want to hear. And it absolutely does not have the judgment required to turn human behavior into decisions.
There’s a reason senior UXRs get twitchy any time someone suggests “letting AI handle synthesis.” You can’t outsource interpretation and meaning-making, the things your career is built on.
And please be cautious putting customer data into an LLM, especially a public-facing one. Even redacted text can be risky if the structure reveals anything contextual.
You can use AI as a thinking partner, but only with thought-safe content. You keep the real data on your machine, inside your company’s approved systems. AI never gets the raw material, so this section is not about “letting AI synthesize.”
This section is about using AI to support you while you do the actual synthesis:
spotting logic gaps
naming patterns you already see
checking for bias
identifying missing evidence
pressure-testing early insights
reframing your own wording
sharpening what you already know
Prompt 1: The Insight Clarity Prompt
When to use it
Every researcher has lived the moment where a finding feels almost right but a bit muddy. Use this right after you’ve drafted early insight statements based on your own notes, clip groups, or frameworks. This prompt helps you sharpen your own wording without giving AI any sensitive content. Feed it your summaries, not your data.
Prompt: Clarity, Sharpness, Bias Scan (No Raw Data)
“I’m drafting early insight statements for a study on [topic].
Here are my summaries of what I’ve observed across sessions so far:
[paste your own high-level summary bullets — no quotes, no identifiers].
Help me sharpen these:
1. Clarity check
Rewrite each insight so the action, behavior, and meaning are clearer.
2. Ambiguity scan
Point out where my summaries feel vague, hedged, or overly general.
3. Bias check
Flag any phrasing that suggests interpretation without clear support.
4. Strength assessment
For each insight, suggest what kind of additional evidence would make it stronger.
5. One-slide version
Create a crisp, executive-ready version of these insights.”
Follow-ups
Rewrite these insight statements for a PM who wants clear decision paths
Create a design-facing version focused on friction and behavior
Reduce each insight to a single, pointed sentence
Prompt 2: The Pattern Validation Prompt
When to use it
Use this when you’ve spotted a pattern but you’re not totally confident it holds across all participants or you need to understand whether your interpretation is leaning too far in one direction. Give AI your interpretation rather than raw data.
Prompt: Pattern Stress-Test
“I’m exploring early patterns from a study on [topic].
Here are my interpreted patterns so far (summarized in my own words):
[paste synthesized patterns only].
Help me pressure-test these patterns:
1. Alternative interpretations
Suggest 3-5 different ways these patterns could be interpreted based on neutral logic.
2. Strength score
Evaluate which patterns feel strong, medium, or weak based on how well they’re framed.
3. Evidence gaps
List what additional information I would need to confirm or challenge each pattern.
4. Rival hypotheses
Provide rival explanations that could compete with my interpretations.
5. Researcher blind spots
Suggest where I might be jumping ahead or simplifying too much.”
Follow-ups
Rewrite these patterns using decision-focused language
Suggest questions or probes for follow-up sessions to strengthen the patterns
Prompt 3: The Decision-Ready Framing Prompt
When to use it
Use this once you already know your insight is valid. You’re not asking AI to synthesize; you’re asking it to help you frame the insight for decision-makers. Most insights get ignored not because they’re wrong, but because they’re framed weakly.
Prompt: Insight → Decision → Recommendation Framing
“I’ve formed a validated insight for a study on [topic].
Here is my high-level summary of the insight (no raw data):
[paste your polished summary].
Help me frame this insight for decision-makers:
1. Headline insight
Rewrite the insight as a sharp, decision-focused headline.
2. ‘Why this matters’
Articulate the business or product impact tied to this insight.
3. Recommendation
Suggest a clear recommendation tied to a product decision.
4. Risk of inaction
Describe what might happen if the team ignores this insight.
5. Team-specific framing
Provide versions of this insight for:
product leadership
PMs
designers
engineering
marketing”
Follow-ups
Condense into 5 bullets for an exec email
Create a storyboard-style version with action steps
Reframe using more assertive language
Prompt 4: The Bias + Blind Spot Detector Prompt
When to use it
Use this when you want a second set of eyes on your own thinking. Every researcher has pet theories, favorite directions, or subconscious leanings. This prompt makes those tendencies visible without compromising user privacy.
Be transparent about your own assumptions. AI can only help you examine what you expose.
Prompt: Bias Scan + Cognitive Guardrails
“I’m synthesizing insights for a study on [topic].
Here are my own working assumptions and interpretations in summary form:
[paste your thinking, not your data].
Help me identify where my thinking might be drifting:
1. Researcher bias scan
Where might confirmation bias, pattern bias, recency bias, or personal preference be shaping my interpretation?
2. Competing viewpoints
Offer counter-arguments or alternative explanations for each interpretation.
3. Missing angles
Point out perspectives I haven’t considered — product, business, technical, emotional, contextual.
4. Evidence check
For each interpretation, suggest what type of validation I should look for in my own notes.
5. Guardrails
Suggest a set of checks I can use to protect rigor as I finalize synthesis.”
Follow-ups
Turn these guardrails into a 5-item checklist
Rewrite my interpretations using more neutral language
Highlight interpretations that feel overly certain
How to Use AI as a Strategic Writing Partner
You ran the sessions. You lived through the awkward pauses, the prototype glitches, the user who whispered every response like a state secret, the one who said “I love it!” while clearly trying to escape the call, and the one who misclicked six times and nearly made you cry.
Now you need to turn all that chaos into writing that actually moves the product forward.
This is where many UXRs get stuck because translating synthesis into actionable, decision-ready, team-specific communication is an entirely separate skill.
These are the four prompts researchers use the most to turn insight → action → influence.
Prompt 1: The Executive Insight Translation Prompt
When to use it
After you know what the insight means, but before you put it in front of leadership. Perfect for those “one-slide summary” moments, because executives don’t think in findings; they think in decisions, risk, timelines, and impact. This prompt forces the insight to speak their language.
Feed AI a clean, concise summary of the insight.
Prompt: Insight → Executive Decision Summary
“I’ve synthesized an insight from a study on [topic].
Here is my high-level summary in my own words:
[paste your polished insight summary].
Help me translate this into an executive-ready insight:
1. Headline
Rewrite the insight in a way that communicates the decision impact immediately.
2. Why it matters
Describe the business or product risk tied to this insight.
3. Recommendation
Draft a direct recommendation tied to a decision or next step.
4. Cost of ignoring
Describe one clear downside of not acting on this insight.
5. One-slide version
Create a version of this insight that fits neatly into one slide for an exec brief.”
Follow-ups
Rewrite the slide version for a 30-second verbal update
Create a lower-stakes version for async Slack communication
Highlight the metric most affected by this insight
Prompt 2: The Multi-Audience Reframing Prompt
When to use it
Use this when:
The PM wants something different from the designer.
Engineering needs a technical angle.
Leadership wants the business story.
You need the same insight framed in multiple languages.
Prompt: One Insight, Four Versions
“I have an insight from a study on [topic].
Here is my own summarized version:
[paste summary].
Help me frame this insight for four audiences:
1. Product leadership
Focus on business impact, risk, and resourcing decisions.
2. PMs
Focus on behavior, motivations, and roadmap clarity.
3. Designers
Focus on friction, usability patterns, and user expectations.
4. Engineering
Focus on feasibility signals, failure points, and potential system impacts.
For each audience, write a 4-5 bullet version of this insight.”
Follow-ups
Write a direct version of this insight appropriate for a roadmap deck
Highlight which version is most suited for influencing near-term priorities
Prompt 3: The Recommendation Builder Prompt
When to use it
Once your insight is clear but your recommendation still feels too soft, too vague, or too polite. A well-formed recommendation can move a team; a weak one gets nodded at and forgotten.
Prompt: Insight → Clear Recommendation + Path Forward
“I have an insight from a study on [topic].
Here is the distilled meaning in my own words:
[paste insight summary].
Here is my draft recommendation:
[paste your attempt].
Help me strengthen this recommendation:
1. Rewrite for clarity
Make the recommendation direct, specific, and actionable.
2. Define the behavior change
What will change for users or the product if the team follows this recommendation?
3. Define the decision
State the decision that needs to be made to support this recommendation.
4. Outline next steps
List 3-5 practical next steps the team can take starting Monday.
5. Identify risks and blockers
Flag any risks, trade-offs, or dependencies tied to this recommendation.”
Follow-ups
Rewrite the recommendation for a PM who prefers confident, brief instructions
Rewrite it with a stronger sense of urgency
Draft a Slack message that communicates the recommendation without sounding confrontational
Prompt 4: The ‘So What?’ Sharpening Prompt
When to use it
When your insight feels solid, but the story doesn’t yet land. This is perfect for the moment when you think: “I know what this means, but it’s not quite hitting.”
Prompt: Strengthen the Meaning + Make It Action-Driving
“I have an insight from a study on [topic].
Here is what I believe it means for the product:
[paste your interpretation].
Help me sharpen the ‘so what’:
1. Clarify the core message
Distill the meaning into one pointed sentence.
2. Highlight the downstream impact
Explain how this insight shapes user behavior, conversion, trust, or retention.
3. Tie it to a business question
Describe which business outcome or goal this insight informs.
4. Suggest a shift
What shift in strategy, design, messaging, or prioritization does this insight suggest?
5. Identify the next critical question
What question should the team ask next based on this insight?”
Follow-ups
Rewrite the ‘so what’ for a team that tends to ignore nuance
Turn this into a sentence I can use in a readout headline
Frame it in a way that supports prioritization discussions
How to Build Prompts That Don’t Fall Apart
Once you start using AI more seriously, you begin to realize you’re no longer writing “prompts” but building systems: thinking structures you can adapt across projects, teams, and stages of the research cycle.
This is where AI becomes a consistent partner instead of an occasional tool, not by giving it more instructions, but by giving it the right shape of instruction.
This section is for building your own prompt library, internal method, and strategic toolkit.
Prompt 1: The Custom Prompt Builder
When to use it
Use this any time you’re about to do something new, complex, or high-stakes and you need the right structure before you start prompting. This is ideal for:
writing research plans
reframing stakeholder requests
designing surveys or tests
scoping discovery work
drafting proposals
prepping for workshops
giving feedback to designers or PMs
Prompt: Build Me a Custom Prompt Template
“I’m working on [describe activity, e.g., ‘a research kickoff plan,’ ‘a usability test plan,’ ‘a discovery study brief’].
I need AI to support me as a thought partner.
Help me build a custom prompt template I can reuse for this activity.
1. Identify the expert persona
Recommend 2-3 expert roles AI should act like for this task and explain why.
2. Create structured sub-questions
Draft 3-5 sub-questions the AI should always ask or answer within this kind of task.
3. Define output formatting
Suggest the clearest output structure (bullets, table, layered summary, short paragraphs).
4. Tone options
Give me 2 tone profiles depending on the audience (exec, PM, design team).
5. Assemble the template
Put everything together into a single, coherent prompt template I can reuse.”
Follow-ups
Shorten the template for quick Slack use
Create a version tailored for high-stakes executive communication
Suggest a troubleshooting section for when the output is weak
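If you end up reusing the same template across projects, it can be worth keeping it as an actual fill-in-the-blanks artifact rather than retyping it. Here’s a rough illustration in Python (the field names and wording are my own invention, not part of any framework, and you’d swap in whatever template the Custom Prompt Builder gives you):

```python
from string import Template

# A hypothetical reusable prompt template for research kickoff work.
# Each $field is a blank you fill in per project.
KICKOFF_TEMPLATE = Template(
    "Act as $persona.\n"
    "I'm working on $activity.\n"
    "Before answering, ask me 3-5 clarifying sub-questions.\n"
    "Format the output as $output_format, in a tone suited to $audience."
)

# Fill the blanks for one specific project and print the finished prompt.
prompt = KICKOFF_TEMPLATE.substitute(
    persona="a senior UX researcher and research ops lead",
    activity="a usability test plan for a new dashboard",
    output_format="short bullets under clear headings",
    audience="a product manager",
)
print(prompt)
```

The point isn’t the code; it’s the habit of treating your best prompts as named, versioned templates instead of one-off messages you reconstruct from memory.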
Prompt 2: The Prompt Troubleshooting Prompt
When to use it
Whenever the AI gives you an output that feels:
vague
generic
repetitive
missing context
missing nuance
too polite
too long
too short
too “blog-posty”
or just… wrong
Paste both your original prompt and the disappointing output.
Prompt: Diagnose the Weakness + Rewrite a Strong Version
“Here’s the prompt I used:
[paste original prompt]
Here’s the output I got:
[paste output — ensure it contains no user data].
Help me fix this:
1. Diagnose the problem
Identify what was missing from my prompt.
List the gaps in context, framing, role-setting, and structure.
2. Rewrite the prompt
Write a stronger version using more context, clearer framing, and better structure.
3. Suggest improvements
Give me 3-5 ways I could iterate further if the next output still feels weak.
4. Output checklist
Create a short checklist I can use to evaluate future prompts before sending them.”
Follow-ups
Write a shorter version of the strengthened prompt for quick use
Show me how the new prompt aligns with the FAST model
Rewrite the prompt in a more assertive tone
Prompt 3: The Reusable Structure Extractor
When to use it
Use this when you’ve written something good (a clean research plan, a strong summary, a beautifully structured message) and you want to turn that structure into a reusable template.
Prompt: Extract the Structure So I Can Reuse It
“I wrote something I want to turn into a reusable template.
Here is the structure or outline of what I wrote (summaries only, no sensitive information):
[paste your structure or high-level summary].
Help me extract the underlying framework:
1. Identify the structural components
List the core sections or steps in the pattern.
2. Identify the logic flow
Explain how the sections relate to one another.
3. Create a generalized version
Rewrite this as a reusable template anyone could apply.
4. Add prompts
Suggest questions or cues someone should consider when using this template.
5. Output the final version
Assemble everything into a clean, reusable format.”
Follow-ups
Create a compact version suitable for a workshop handout
Turn this into a Notion template
Add a checklist for quality control
Prompt 4: The AI-as-Second-Brain Prompt
When to use it
Use this when your head is full: you’re juggling constraints, insights, risks, politics, and timelines, and you need AI to help you organize and structure your thinking. It’s great for:
refining an argument
preparing for a stakeholder conversation
planning a workshop
reviewing your own reasoning
shaping a narrative for a readout
Prompt: Organize My Thinking + Reveal Blind Spots
“I’m working through a complex research or strategy problem.
Here are my high-level notes and thoughts (summarized and safe):
[paste notes].
Help me organize and strengthen my thinking:
1. Identify themes
Group my thoughts into themes or categories.
2. Reveal gaps
Point out where my reasoning feels incomplete or inconsistent.
3. Surface blind spots
Highlight angles, perspectives, or risks I haven’t considered.
4. Strengthen the logic
Rewrite my thought structure in a more coherent, strategic flow.
5. Produce a clear narrative
Draft a concise narrative I can use for stakeholder communication.”
Follow-ups
Rewrite this narrative for an executive audience
Suggest where political tension might appear
Create a version suitable for a readout slide
Guardrails
By this point in the workflow, the excitement of what AI can do often starts bumping against the reality of what it shouldn’t do. A lot of us researchers know this feeling well: the moment when a tool seems capable of producing a faster version of something, but your gut tells you the shortcut risks breaking the thing you’re trying to improve.
Synthesis and decision-making sit right at that tension point. Research is built on context, judgment, nuance, trade-offs, ethics, and responsibility. AI has none of those things. It doesn’t understand people, pressure, risk, harm, incentives, or messy real-world motivations. It knows patterns, not consequences.
This section is about protecting yourself, your users, and your work.
The goal is to help you use it with the right boundaries so you never end up outsourcing something that requires your judgment. Here are the four guardrails that I always put into place when using AI.
1. AI should never touch raw user data (with one exception)
Research data is sensitive by design. It contains real moments from real people: workplace details they didn’t plan to share, emotional reactions they didn’t rehearse, and traces of identity woven into how they speak, navigate, hesitate, or describe their lives. That includes transcripts, recordings, chat logs, screenshots, open-text survey responses, and anything that reflects someone’s personal or professional context.
Most LLMs aren’t built to carry that responsibility.
Unless your company is using a private, isolated, enterprise-grade LLM that has been vetted, contractually approved, and explicitly cleared for sensitive data, nothing identifiable should go anywhere near an AI tool. The bar for “approved” should be high, not assumed. And even when your company does have a private model, we still need to make participants aware if their data might be processed by AI.
If you wouldn’t record them without telling them, you shouldn’t feed their words into an AI system without telling them either. Participants deserve to understand where their data may go and how it may be used downstream.
In practice, this means:
if you’re using a public AI tool, never paste raw data
if you’re using a private company-owned LLM, check whether your participant agreement covers AI-based processing
if it doesn’t, update it or get explicit permission
if you’re unsure, treat it as a no
The safest everyday approach is to use your own rewritten summaries, not the user’s words. If you want a built-in guardrail in your workflow, you can use something like:
Rewrite this prompt template so it forces me to summarize user data myself and prevents me from pasting raw participant quotes or sensitive content unless I’m working inside a private, company-approved LLM and have participant permission.
This keeps you anchored, even on the days when you’re juggling deadlines, shifting prototypes, and a PM who thinks “just quickly throw this into ChatGPT” is harmless.
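If you want a mechanical backstop on top of that habit, even a tiny script can scan a summary for obviously risky patterns before anything leaves your machine. This is only an illustrative sketch (the pattern names and regexes here are hypothetical and deliberately simple; a quick scan like this is nowhere near a real compliance or anonymization check):

```python
import re

# Illustrative patterns for things that should never reach a public AI tool:
# email addresses, phone-like digit runs, and long verbatim quotes.
RISK_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone-like number": re.compile(r"\b(?:\d[\s-]?){9,}\d\b"),
    "verbatim quote": re.compile(r"[\"“][^\"”]{20,}[\"”]"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any risky patterns found in the text."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(text)]

summary = 'P3 (jane.doe@example.com) said: "I never trust the export button because it wiped my report once."'
print(flag_sensitive(summary))  # flags the email and the long quote
```

A flagged summary doesn’t mean you can’t use AI; it means you rewrite that bit in your own words first, which is exactly the habit the guardrail is meant to protect.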
2. AI can support thinking, but it cannot decide
Even the best models struggle with ambiguity. They default to closure, pattern completion, and confident-sounding conclusions. None of that belongs in synthesis.
A model can help you challenge a thought, pressure-test an early interpretation, or reframe an insight. It cannot determine what a finding means. Meaning-making requires context, the kind you accumulate across years: noticing discomfort in a user’s voice, catching a contradiction between behavior and words, spotting tension in a stakeholder’s reaction, or recognizing when a problem is political rather than product-driven.
AI’s role here is advisory, not authoritative. It can help with:
exploring alternate interpretations
spotting vague or hedged wording
identifying weak logic
checking where your confidence might be too high
sharpening a narrative
But it cannot create a new insight from scratch, and it cannot weigh evidence the way you can. A good reinforcement prompt:
Given my summarized interpretation below, help me explore alternative explanations without generating new conclusions or treating the insight as fact.
3. AI cannot outrun the evidence
One of the biggest risks at this stage is accidental overconfidence. You’ve run a few sessions, early signals are emerging, and the team starts pushing for direction. That’s usually when someone asks, “So what are we seeing?” and the temptation to give a polished answer creeps in.
AI amplifies that temptation. If you give it half a pattern, it will happily complete it, not because the data supports it, but because that’s what language models do: they fill gaps. That’s not synthesis; that’s prediction. AI needs to work inside your evidence boundaries, not outside them. It should check your confidence, not inflate it.
A simple guardrail you can build into your workflow is:
I’m sharing early interpretations from a partially completed study. Do not expand beyond what I summarize. Show me where my confidence is too high, where evidence is thin, and what I should validate before moving on.
This keeps the model from running ahead of your insight.
4. AI cannot replace sense-making in complex or high-risk domains
There are areas where interpretation comes with real consequences. Products involving finances, legal processes, medical information, safety, compliance, or high-trust interactions require tighter rigor and deeper cross-functional collaboration. In these spaces, a single misinterpretation can create harm, erode trust, or expose the team to liability.
AI lacks the domain expertise, historical context, and ethical grounding to guide decisions in these environments. It can help you structure your reasoning or surface questions you haven’t considered, but it should never be trusted with directive recommendations in these contexts.
A guardrail prompt here is helpful:
I’m working in a sensitive or high-risk domain. Use my summarized interpretation to highlight missing constraints, ethical concerns, and areas where I should involve a domain expert. Do not propose product changes or interpret user evidence.
This keeps the model focused on supporting your thinking, not steering decisions.
AI Won’t Replace Our Craft
AI can make pieces of research move faster, but speed was never the heart of this job. The real work sits in the judgment calls you make every day: spotting the tension behind a stakeholder request, sensing when a prototype is steering users in the wrong direction, catching the moment when a participant’s body language contradicts their spoken answer, or noticing a pattern in the way people hesitate before clicking a button. None of that can be outsourced.
What AI can do is clear some of the mental load that steals your time without adding any real value. It can help you shape a messy request into a focused brief. It can help you sanity-check a hunch before you bring it into a team discussion. It can walk you through alternative explanations so you don’t anchor too hard, too early. It can help you sharpen your writing so your insights land with more force. It can free up the hours you lose rewriting the same explanations for the tenth time.
There’s a quiet shift that happens once you stop treating AI like a threat and start treating it like an extra brain you can lean on for structure, patterning, and reframing. The job gets lighter. You stop drowning in the small things. You start showing up in the parts of the work that actually move decisions. Your time goes where it matters: shaping outcomes, not rewriting documentation.
It reminds me of a moment early in my career, sitting in a conference room at 8pm, surrounded by sticky notes that had stopped making sense hours earlier. I remember wishing someone could sit next to me, not to do the work for me, but to ask the questions I was too tired to ask myself. That’s what AI gives you at its best, a steadier way of thinking when your brain is stretched thin.
None of this replaces your craft.
It creates room for your craft.
Use AI for structure, to test your reasoning, to sharpen your framing and your storytelling, but keep the meaning-making in your hands. That’s the part only a researcher can do and the part that shapes the decisions people feel six months from now, not just the slides they see next week.
If you hold that boundary, AI stops feeling like a risk and starts feeling like the thing that lets you work closer to the level you’ve always wanted to operate at.
I built an AI Prompt Library for user researchers who are tired of wasting time on useless outputs. These are the exact prompts I use when I want to:
Pressure test my research questions
Catch blind spots before they derail a study
Frame insights so they actually land with leadership
Prep for tough stakeholder conversations
Kickstart deeper thinking when I’m stuck
It’s a working toolkit I’ve refined in real research projects with teams under pressure to deliver fast, smart, credible work.
If you want to stop messing around with AI and actually use it to get better at research, you can purchase the library below (starting at £297 for over 60 detailed prompts):