You know the drill: you’re managing five research projects, you just got pinged to “take a quick look” at a half-baked survey, and you’ve got feedback from 3,000 beta users, most of it vague, repetitive, or weirdly contradictory.
And yet, you’re still expected to be strategic, decisive, and insightful.
This guide is not about AI replacing user researchers, but about how to build a custom GPT agent that acts like a sharp, humble, unflappable thought partner. One that:
Helps you clarify your thinking when your brain is fried
Challenges you with smart questions (not just passive answers)
Gets you out of blank-page paralysis
Spots potential blind spots or contradictions before a stakeholder does
Drafts starting points for tedious but necessary docs
Think of it as building your own research copilot: not to do the job for you, but to help you do it with more clarity, speed, and sanity.
This article focuses on creating a GPT through ChatGPT, because that is the tool I am most familiar with, but the same approach works with other LLMs as well.
Most researchers give up after a few half-hearted prompts. You ask something generic. It spits out something shallow. You move on. It’s not that AI can’t help you think better; it can, but only if you know how to push it.
I built an AI Prompt Library for user researchers who are tired of wasting time on useless outputs. These are the exact prompts I use when I want to:
Pressure test my research questions
Catch blind spots before they derail a study
Frame insights so they actually land with leadership
Prep for tough stakeholder conversations
Kickstart deeper thinking when I’m stuck
It’s a working toolkit I’ve refined in real research projects with teams under pressure to deliver fast, smart, credible work.
If you want to stop messing around with AI and actually use it to get better at research, you can purchase the library below (starting at £297 for over 60 detailed prompts):
What you’re actually building
You’re creating a custom GPT that works like an intern. It helps you move faster without cutting corners. It doesn’t replace your judgment or intuition, but it gives you a solid first draft, a second brain, a structured way to think through messy tasks, or an outside perspective you may have missed.
It’s not here to “do research.” It supports the parts of your job that slow you down, such as writing from scratch, cleaning up notes, explaining something for the third time, or organizing chaos into something usable.
Your GPT should:
Understand how research fits into product work
Ask thoughtful questions when it doesn’t have enough info
Stick to your preferred format (Docs, Markdown, Notion, etc.)
Stay grounded in evidence
Step 1: Get Clear on What Your GPT Is Actually For
Before you start building anything, pause.
Most people skip this step and jump straight to writing prompts. Then they get frustrated when their GPT gives weird, useless responses.
That’s like hiring a new team member and just telling them, “Help with research.” You’d never do that.
This step is where you define the job. If your GPT is going to help you, it needs clear direction, just like an intern would.
You’re designing a GPT-powered agent that works like a trusted assistant:
Helps you organize your thinking
Reflects things back when you’re stuck
Drafts first versions of the stuff you’d otherwise procrastinate on
Challenges you when your plan is too big, vague, or stakeholder-pleasing
Let’s walk through how to define its job.
1.1 Pick a Specific Role
Trying to build a GPT that “helps with research” is like hiring someone and saying “your job is everything.” It won’t work.
Instead, think about a recent project where you said, “I wish I didn’t have to figure this out alone.”
That’s your first role.
Here are 6 realistic, high-leverage roles to choose from. These are built for busy UXRs dealing with limited time, pushy teams, and messy briefs.
Role 1: Study Goals & Scope Coach
Job: Help turn a vague or bloated request into a tight, meaningful set of research goals and define what’s out of scope.
Use this when: A PM says “We just need to understand what people want,” and you need to create something feasible without turning it into a 3-month epic.
What it might help you with:
Turning broad goals into sharp questions
Spotting red flags in scope creep
Suggesting ways to align goals with actual product decisions
Asking you clarifying questions when you’re stuck
Role 2: Method Matchmaker
Job: Recommend a suitable method (or hybrid) based on your study goals, team constraints, and timeline. Includes both rigorous and lean options.
Use this when: You’re toggling between three options (card sort? 1:1 interviews? diary study?) and need a sounding board to land on something defensible.
What it might help you with:
Suggesting methods based on your real constraints
Highlighting trade-offs you’re not considering
Providing backup reasoning for your method when stakeholders push back
Recommending ways to combine methods into a phased or scrappy approach
Role 3: Success Metrics Assistant
Job: Translate vague product or business goals into measurable indicators of research impact (or success). Not just “insightful findings” but change that matters.
Use this when: Your team says “We’ll know the research worked if people get it,” and you’re left guessing what “get it” means.
What it might help you with:
Linking research goals to team or company metrics
Coming up with proxy indicators (when hard metrics aren’t possible)
Helping define what a “useful” outcome looks like ahead of the study
Stress-testing your impact assumptions
Role 4: Stakeholder Pushback Coach
Job: Help you prepare responses to difficult stakeholder feedback: “Why are we doing this?”, “We already know this,” “Can we skip the research?”
Use this when: You’re emotionally exhausted from defending the value of your work and want help crafting a calm, credible, non-defensive reply.
What it might help you with:
Drafting responses to common objections
Helping you clarify the real resistance (budget? timing? ego?)
Reframing research as a support tool, not a blocker
Giving you talking points in plain language, not theory
Role 5: Communication Drafting Assistant
Job: Write first drafts of time-consuming or lower-stakes copy: intro emails to participants, internal study updates, screener survey logic, etc.
Use this when: You’ve got a dozen tabs open, a pile of notes, and zero time to sound polished.
What it might help you with:
Writing readable stakeholder updates with different tones (PM, exec, designer)
Turning bullet points into a screener survey
Drafting opt-in emails or session invites
Creating follow-up messages when sessions change or people ghost
Role 6: Business Alignment Sparring Partner
Job: Help you map research questions to business priorities, identify where the value is likely to show up, and anticipate what stakeholders care about.
Use this when: You’re asked to “just explore,” but you know the team will want something that ties back to growth, retention, or efficiency.
What it might help you with:
Translating “user friction” into “potential revenue loss”
Connecting product discovery to business OKRs
Helping reframe user pain into decision-ready language
Pressure-testing how your research supports real-world tradeoffs
Please don’t choose all of them. Start with one. Think about where you currently lose the most time or feel most unsure. That’s your GPT’s starting role.
1.2 Define Its Behavior: What This GPT Should Be Like
You’re now writing the personality and guardrails for your assistant. This is called the “system message.” It’s the set of instructions that runs under the hood every time you use the GPT, and it’s where you shape it to act like your ideal assistant.
Here’s a basic fill-in-the-blank version:
System Message Template
You are a [tone or seniority level] research thought partner who helps with [specific task].
You are [3 traits: sharp, structured, not too verbose].
You always:
Ask clarifying questions before guessing
Speak plainly and directly
Offer options, not just single answers
Reflect what I’ve said to check understanding
You never:
Make assumptions without asking
Speak like a marketer
Try to sound clever
Generalize without specific reasoning
Let’s look at a real example:
Example: Method Matchmaker System Message
You are a senior research advisor who helps select the right method for a study.
You are practical, thoughtful, and straight-talking.
You always:
Ask about the project goals, timeline, and constraints
Offer 2-3 possible methods with pros/cons
Include a scrappy version and a gold-standard version
You never:
Default to user interviews without justification
Suggest things we can’t realistically run
Use phrases like “delight” or “unlock”
You’ll paste this into the GPT builder when we get to Step 2. This is the blueprint for how your agent behaves.
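If you’d rather run the same blueprint outside ChatGPT’s GPT builder (for example, against the API or another LLM), the text above drops straight into the system role. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the example user question are my own placeholders, not part of the builder setup.

```python
# Minimal sketch, assuming the `openai` Python SDK (v1+) is installed and
# OPENAI_API_KEY is set. The model name and user question are placeholders.
from openai import OpenAI

SYSTEM_MESSAGE = """\
You are a senior research advisor who helps select the right method for a study.
You are practical, thoughtful, and straight-talking.
You always:
- Ask about the project goals, timeline, and constraints
- Offer 2-3 possible methods with pros/cons
- Include a scrappy version and a gold-standard version
You never:
- Default to user interviews without justification
- Suggest things we can't realistically run
- Use phrases like "delight" or "unlock"
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whatever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": "Two-week timeline, one researcher, vague brief "
                                    "about onboarding drop-off. What methods fit?"},
    ],
)
print(response.choices[0].message.content)
```

Either way, the system message stays the same; only where you paste or send it changes.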
1.3 Provide Context Every Time You Use It
This is the biggest mistake most people make. They drop in a vague prompt like, “What’s the best method for this?”
…and then wonder why the answer is useless.
GPTs aren’t mind readers. They need context to act like a partner. Here’s what good context includes:
What the project is about (1-2 sentences)
Who the team is
What constraints you’re facing (time, people, budget)
What stage you’re in (planning, revising, defending)
What you want from the assistant (options? feedback? draft?)
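If you tend to skip one of these elements when you’re rushed, a tiny helper can make the habit stick. Here’s a minimal sketch in Python that assembles the five elements into a block you paste at the top of your chat; the field names and example values are illustrative assumptions, not a required format.

```python
# Minimal sketch: a helper that forces you to supply all five context elements
# before you ask the GPT anything. Field names and example values are
# illustrative, not a required format.
def build_context(project: str, team: str, constraints: str, stage: str, ask: str) -> str:
    return "\n".join([
        f"Project: {project}",
        f"Team: {team}",
        f"Constraints: {constraints}",
        f"Stage: {stage}",
        f"What I need from you: {ask}",
    ])

print(build_context(
    project="Exploring why trial users drop off before week two",
    team="One PM, two designers, and me as the only researcher",
    constraints="Two weeks, no recruiting budget",
    stage="Planning",
    ask="2-3 method options with trade-offs",
))
```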
Example: Good Setup Prompt