Defining AI fluency in user research roles
And how companies are starting to test you on it
I’ll be joining Outset on May 5th at 6pm GMT for a webinar on a problem I think more teams need to take seriously: poorly planned research doesn’t just give you bad data, it gives you confidence in bad data. We’ll talk about where bad inputs enter a study, how to catch them before they compound, and how to build better foundations for research that tells you the truth. We’ll also reframe GIGO (garbage in, garbage out) as QIQO (quality in, quality out), and look at how those guardrails can help you train stakeholders effectively and safely democratize research.
A few months ago, I asked a room full of researchers a simple question:
“Who here is using AI in their research practice?”
Every single hand went up.
I wasn’t surprised. AI is everywhere right now, and researchers are nothing if not curious, so I followed up with, “What are you using it for?” and got the following:
Cleaning up transcripts
Writing first drafts of screeners
Summarising my notes before synthesis
Tidying up my reports
Useful stuff, genuinely, but also not what I’d call AI fluency.
Almost every researcher in that room thought they were using AI well. And in a surface-level sense, they were. But the answers got much quieter when I started asking follow-up questions: How do you verify the output? What’s your approach when it gets it wrong? What’s your policy on participant data? Have you disclosed to stakeholders how AI was involved?
Nobody had a clear framework. But, hey, it’s not surprising: nobody has ever actually defined what AI fluency looks like for us specifically.
This is now showing up in your job description
Let me tell you what prompted me to actually write this down.
Companies are creating frameworks for assessing AI fluency in every single hire, across every function. They’re not just asking “do you use AI?” They’re assessing mindset, strategy, building, and accountability. They’re raising the minimum bar so that candidates who can’t demonstrate AI embedded in their core work don’t make it through.
PMs are being assessed on it. Engineers. Designers. Customer success. Marketing. Finance.
The frameworks they’re building for each of these roles are specific. The PM version talks about writing SQL, building dashboards, generating working prototypes. The engineering version talks about shipping AI-powered features. The marketing version talks about building automated content pipelines.
Nobody has built the researcher version. So researchers are either being assessed against generic criteria that don’t account for what our work actually involves, or worse, they’re walking into AI fluency conversations with nothing concrete to say.
If you’ve been in a job interview recently and someone has asked you “how do you use AI in your research practice?”, you’ve felt this. The question lands and you have a few seconds to figure out whether “I use it to clean up my transcripts” is going to cut it.
It isn’t. Not anymore.
AI fluency is no longer a nice-to-have that ambitious researchers can choose to develop. It’s becoming a baseline expectation, and companies are increasingly building tests to measure it.
So let’s build the framework for what it actually looks like for us.
Why research is different (and why the stakes are higher)
Before I get into the levels, I want to make one argument clearly, because I think it’s the most important thing in this entire piece: AI fluency for researchers is not the same as AI fluency for any other role, and the reason is accountability.
If a researcher uses AI badly, they risk presenting fabricated or distorted insights to a product team who will then use those insights to make real decisions. That’s a pretty big mistake, and it can cascade in ways that are genuinely hard to trace back to the source.
I’ve seen it happen. A researcher uses AI to synthesise interview transcripts, takes the themes at face value, builds a report around them. The themes look right, they’re plausibly written, but they don’t fully reflect what participants actually said. They reflect what the AI predicted the themes would be, shaped by its training data, which is not the same as your eight participants and their specific context.
This is the GIGO principle playing out in research: garbage in, garbage out. The terrifying version for researchers isn’t that you feed garbage in. It’s that you feed good data in, and the process quietly introduces distortion you can’t see.
That’s why accountability has to be the centerpiece of any AI fluency framework for us. It’s not the most exciting component, but it’s the one that separates researchers who are using AI well from researchers who are just using AI fast.
The four levels of UXR AI fluency
I’m going to describe each level in detail, across the full scope of research work. Read through all four before you try to self-assess, because it’s easy to over-identify with a higher level until you see the specifics written out.