The User Research Strategist

Stop stakeholders from ignoring you

Static user research deliverables are done

How to build dynamic, evidence-linked, competitor-aware research artifacts in Claude Cowork, without being a designer, begging engineering, or burning weeks

Nikki Anderson
May 07, 2026
∙ Paid

Free webinar with Qualtrics | Thursday May 14, 12 PM EST

Take one hour to learn when synthetic research makes your studies sharper, when it falls apart, and how to fold it into your workflow without sacrificing depth. I'll walk through real use cases and the RISE framework I built for interpreting synthetic results. This will be an honest conversation from someone actively trying out these tools, because we need to get ahead of them and learn how to use them with intention.

Grab your spot


I gave Claude two screenshots of an old journey map and a garbage prompt. He gave me back something I genuinely wish I’d had ten years ago.

Deliverables suck.

I’m going to start there because I think it’s the most honest thing I can say about the job. I’m a researcher. I do not have a design bone in my body, not even a pinky bone. I am good at words. I am very, very not good at visualisations. I will write you a very clear report. I will not make you a beautiful journey map. And yet somehow, half my job is making beautiful deliverables.

So when a few clients recently started asking for dynamic deliverables, my honest internal reaction was that I don’t even like static deliverables. What am I going to do with dynamic ones?

I sat with that feeling for a few days, getting more dread-y about it every time it came up. And then one evening I opened up Claude Cowork, which I’d been hanging out in for a while and finding genuinely fun, and decided to just play around. No plan, no proper prompts, just throwing things at the wall to see what stuck.

Two screenshots of a journey map I made years ago at a now-defunct travel company. A prompt so bad I'm embarrassed to share it. About an hour of messing around.

The thing that came out of that hour was not just “a better journey map.” It was an interactive, three-persona, scrubbable, non-linear timeline showing real backtrack loops, with every pain point linked to a Jira-style ticket and every gap marked as an opportunity. Sarah’s booking odyssey over 12 days, 7 sessions, and 5 backtrack loops. That sentence alone is more interesting to a stakeholder than my entire old journey map.

I made a mess of recording it (Substack Live broke, then Loom broke). Other things broke and I yelled at Claude (who is, in my head, a dude, I don’t know why). I burned through Lovable credits I was saving for actual work.

But what came out of that hour shifted how I think about deliverables. So I went back the next morning, and the morning after that, and ran the same loop again on personas, then service blueprints, then a research repository. I broke it and rebuilt it. I figured out the prompts that actually work and the ones that waste an evening.

The video was the messy proof-of-concept and this post is the version where I tell you exactly what to do, in what order, with which prompts, so you can build dynamic deliverables on a Tuesday night without burning your Lovable credits.

What Claude Cowork actually is

Cowork is Anthropic’s desktop product for working alongside Claude on files and projects rather than in a one-shot chat. The relevant difference for our purposes is that you can attach real artifacts (journey maps, transcripts, analytics exports), keep context across a working session, and have Claude build interactive outputs you can click around in, not just text replies.

If you’ve only used Claude.ai in the chat window, the best mental shift is that chat is for conversations while Cowork is for building things.

The promise of this guide

What I’m about to walk you through works for journey maps, personas, service blueprints, stakeholder maps, research repositories, opportunity solution trees, and most other research artifacts that have historically been static, image-based, and lightly engaged with by stakeholders.

The system has five phases. None of them are skippable. Phase 1 is the one most researchers want to skip and the one that decides whether the output is honest or hallucinated.


If you’ve ever spent three weeks on a journey map only to watch a stakeholder open the PDF, scroll once, and close it, you already know exactly the gap I’m trying to close here. Static deliverables are the format we inherited. Dynamic ones are the format we can now actually build, on our own, without begging a designer.

Below, paid subscribers get the full operating manual I now use for every dynamic deliverable I make:

  • The pre-work checklist (the five questions to answer before you ever open Cowork because most failed deliverables fail here, not at the prompting stage)

  • The reusable prompt template, with variable slots for journey maps, personas, blueprints, stakeholder maps, and research repositories (copy this and you have your first prompt for the next twelve months of work)

  • The four-move iteration loop (Load → Sprawl → Realism → Connect, what each move does, the prompts that trigger it, and the sequence that actually compounds)

  • The competitive intelligence layer (how to pull verified-source comparison data from places like Baymard, Nielsen Norman, and aggregated public reviews using Perplexity or Claude web search, the three-tier source model that keeps your deliverable defensible, and the integration prompt that overlays competitor context without hallucinating)

  • Four full worked examples, end-to-end (a behaviourally-segmented persona, a service blueprint with department ownership, a searchable research repository, and a journey map with a competitive intelligence overlay, each with the data you bring, the prompts you use, what Cowork builds, and the iterations you’ll want to run)

  • The polish-and-rollout playbook (when to stop iterating, how to handle off-brand output, how to demo dynamic deliverables to stakeholders without overwhelming them, and the 30/60/90 rollout that gets a team from “what is this” to “we can’t go back”)

  • The six failure modes I hit and exactly how to fix each one (hallucinated data, sprawl, broken filters, off-brand visuals, stakeholders who don’t know what to click, and hallucinated competitive context)

  • A short note on pricing this work for clients without accidentally torching your day rate (if you're a consultant)

If “make this dynamic” has been the thing you nod along to in client calls and quietly panic about afterwards, this is the guide I wish I’d had three years ago.

Exclusively for paid subscribers
