Free bonus article: Estimating potential improvements on product metrics
A guide to estimating product improvements from your insights
👋🏻 Hi, this is Nikki with a free, bonus article from the User Research Strategist. I share content that helps you move toward a more strategic role as a researcher, measure your ROI, and deliver impactful insights that drive business decisions.
If you want to see everything I post, subscribe below!
Hi there, curious human!
I’ve received quite a lot of questions on the topic of estimating potential product improvements based on insights. I talk a lot about how important it is to report insights in terms of outcomes like:
Fixing the navigation could reduce drop-off rates by 20%, which would boost conversions by 15%
We need to redesign the onboarding flow to reduce friction, starting with a simpler first step. This will lead to a 10% increase in user retention
Seems simple, right? Just put some interesting numbers in there and, huzzah, we have buy-in. But honestly, I didn’t start my career doing this, and it took me years to learn how to put numbers to my findings and insights.
However, when I started doing this, the change was transformative. People paid attention, listened, didn’t zone out during reports, and even prioritized my work. Stakeholders need to understand the value of your insights. It’s not enough to report findings; you need to show how implementing your recommendations will drive measurable outcomes.
By learning to estimate improvements, you become not just a researcher but a strategic partner in product development.
But how do we actually do that, especially if we’re more geared toward words than numbers? Here is my step-by-step process for estimating improvements and relating the numbers back to my research.
Step 1: Define the target metric
The first step is to identify the specific product metric your recommendation will impact. This ensures your research aligns with measurable outcomes.
How to define the metric:
Understand business goals: Meet with stakeholders to identify the metrics they prioritize. Common examples include:
Conversion rates: Percentage of users who complete a desired action (ex: sign-ups, purchases).
Retention rates: The proportion of users returning over a defined period.
Drop-off rates: Percentage of users abandoning a flow (ex: onboarding).
Engagement metrics: Time spent, number of sessions, or feature usage.
Align metrics with insights: Choose a metric directly related to the problem uncovered in your research. For example, if users abandon onboarding, focus on the onboarding completion rate.
Document the target metric: Write down the specific metric so you stay aligned during the estimation process. For example:
Finding: Users drop off at Step 3 of onboarding due to a confusing form
Metric: Onboarding completion rate
Step 2: Gather baseline data
To estimate an improvement, you need a clear understanding of the current state of the metric—this is your baseline.
How to collect baseline data:
Access analytics: Use tools like Google Analytics, Mixpanel, or Amplitude to gather current metrics. If you don’t have access, collaborate with your product or data teams and ask them how they measure these metrics.
Record key data points:
Current value: What is the current metric value? For example, a 60% onboarding completion rate.
Affected population: How many users does this metric represent? For example, 10,000 users go through onboarding monthly.
Segments: Are there specific groups (mobile users, international users) with different behaviors?
Visualize the data: Create a chart or table summarizing baseline data for easy reference.
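If your analytics team hands you a raw export rather than a ready-made dashboard, a few lines of scripting can pull this summary together. Here is a minimal Python sketch; the file name and columns (segment, completed_onboarding) are hypothetical placeholders, not a real schema.

```python
import pandas as pd

# Hypothetical export: one row per user who started onboarding.
# The file name and columns ("segment", "completed_onboarding") are placeholders.
events = pd.read_csv("onboarding_events.csv")

affected_population = len(events)                                      # how many users the metric covers
completion_rate = events["completed_onboarding"].mean()                # current metric value (0-1)
by_segment = events.groupby("segment")["completed_onboarding"].mean()  # e.g. mobile vs. desktop

print(f"Affected population: {affected_population}")
print(f"Baseline completion rate: {completion_rate:.0%}")
print(by_segment)
```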
Step 3: Craft a hypothesis
A hypothesis links your research insights to a measurable improvement. It includes three elements:
The problem: What is causing the issue?
The solution: What are you proposing to solve it?
The expected outcome: What measurable improvement will result?
How to write a hypothesis:
Identify the problem
Use your research findings to articulate the issue:
What’s happening? Users drop off at Step 3 of onboarding.
Why is it happening? The form has too many fields, causing friction.
Propose a solution
Define a specific, actionable recommendation tied to the problem. For example: Reduce the form fields from 10 to 5 to simplify onboarding.
Estimate the improvement
This is the most challenging part for many researchers. Follow these steps:
Use past data: If similar changes were made previously, analyze their impact. For example: Simplifying a checkout process in the past improved conversion rates by 8%.
Leverage industry benchmarks: Research typical improvements for similar changes in published UX benchmark studies and case studies. For example: UX benchmarks show that reducing form fields can improve completion rates by 5-15%.
Analyze the room for improvement: Look at the size of the problem. If 40% of users drop off at Step 3, aim to reduce drop-offs by 10-20%.
Be conservative: Use a cautious estimate for initial predictions (ex: 5-10%).
Example hypothesis:
Reducing form fields will increase onboarding completion rates by 10-15%, from 60% to 66-69%
Step 4: Model the potential impact
Once you have a hypothesis, calculate the impact of your proposed change on the metric. This involves applying the improvement percentage to the baseline data.
How to calculate the impact:
Identify the affected population: Determine the number of users the metric represents:
10,000 users go through onboarding monthly
Apply the improvement percentage
Use the formula:
Improved Metric = Baseline Metric * (1 + Improvement Percentage)
Example:
Baseline completion rate = 60%
Improvement estimate = 10%
60% * (1 + 0.10) = 66%
Quantify the results
Calculate the number of additional users completing the flow:
Additional Completions = Total Users * (Improved Rate − Baseline Rate)
Example:
10,000 users × (66% − 60%) = 600 additional completions.
Translate into business value
If possible, calculate the financial or business impact. If each completion generates $20 in revenue:
600 * 20 = $12,000 in additional monthly revenue
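If you’d rather script the math than work it out by hand (handy once you start testing different assumptions), here is a minimal sketch of the same model in Python. All the inputs are the illustrative figures from this example, not real data.

```python
# Minimal impact model using the illustrative figures above (not real data)
monthly_users = 10_000          # users entering onboarding each month
baseline_rate = 0.60            # current onboarding completion rate
improvement = 0.10              # estimated relative improvement (conservative end)
revenue_per_completion = 20     # assumed value of one completed onboarding, in dollars

improved_rate = baseline_rate * (1 + improvement)                          # 66%
additional_completions = monthly_users * (improved_rate - baseline_rate)   # 600
additional_revenue = additional_completions * revenue_per_completion       # $12,000

print(f"Improved completion rate: {improved_rate:.0%}")
print(f"Additional completions per month: {additional_completions:.0f}")
print(f"Additional monthly revenue: ${additional_revenue:,.0f}")
```

Swap in the 15% upper estimate and you get a 69% completion rate, 900 extra completions, and $18,000 a month, which is where the range in the template further down comes from.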
Step 5: Evaluate your hypothesis
Evaluating your estimate ensures your recommendation is grounded in reality. You can refine your predictions through:
Usability testing: Test your proposed changes with a small group of users in a controlled environment to observe how they interact with the new design or feature.
Create a prototype or mockup of the proposed solution.
Ask users to complete tasks using the new design and measure success rates, time on task, or satisfaction.
Compare results to the baseline behavior observed with the current design.
Wizard of Oz testing: Simulate the new experience without building the full solution. The user interacts with what appears to be a functional system, but parts are manually operated behind the scenes.
Set up a partially functional prototype where manual effort substitutes for backend functionality.
Observe how users engage with the simulated change and gather feedback on their behavior and satisfaction.
Split funnel analysis: Instead of comparing users in two distinct groups (as in A/B testing), analyze different stages of the user journey to identify where the proposed change would have the most significant impact.
Break down the user journey into smaller steps (ex: Step 1: Account creation, Step 2: Onboarding completion).
Identify where drop-offs occur and use the data to test small changes in targeted areas of the funnel.
Scenario modeling: Model the potential impact of your proposed solution by simulating changes in metrics based on user behavior patterns and historical data.
Use historical data to model “what if” scenarios for the change. For example: “If 20% of users who currently drop off at Step 3 continue instead, how would completion rates change?”
Compare modeled results to real-world observations after implementation.
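To make that “what if” concrete, here is a minimal sketch of the same scenario in Python, again using the made-up figures from this article (10,000 monthly users, 40% dropping off at Step 3). It simply asks: if some share of those drop-offs continued instead, what would the completion rate become?

```python
# Scenario model: what if X% of the users who currently drop off at Step 3 continue instead?
monthly_users = 10_000       # users entering onboarding each month (illustrative)
dropoff_rate = 0.40          # share of users currently abandoning at Step 3
baseline_completions = monthly_users * (1 - dropoff_rate)   # 6,000 (a 60% completion rate)

for recovered_share in (0.10, 0.20, 0.30):   # share of current drop-offs we might recover
    recovered_users = monthly_users * dropoff_rate * recovered_share
    new_rate = (baseline_completions + recovered_users) / monthly_users
    print(f"Recover {recovered_share:.0%} of drop-offs -> completion rate {new_rate:.0%}")
```

At the 20% recovery mentioned above, the modeled completion rate lands at 68%.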
Step 6: Communicate the findings
Presenting your estimates clearly and persuasively is key to gaining stakeholder buy-in.
Start with the problem: “40% of users drop off during onboarding due to a complex form.”
Propose the solution: “Reducing the form fields from 10 to 5 will simplify the process.”
Share the estimated impact: “This could increase the onboarding completion rate from 60% to 66%, adding 600 completions per month and $12,000 in monthly revenue.”
Impact estimation template
Step 1: Define the problem
What’s the issue?
Example: 40% of users drop off at Step 3 of onboarding due to a complex form.
What’s the impact?
Example: This results in a 60% onboarding completion rate and a loss of potential revenue.
Step 2: Proposed solution
What’s the change you’re recommending?
Example: Simplify Step 3 by reducing form fields from 10 to 5.
Why does this solve the problem?
Example: User feedback indicates that long forms are a primary pain point.
Step 3: Target metric
What metric does this impact?
Example: Onboarding completion rate.
Step 4: Baseline data
Current metric value:
Example: 60% completion rate.
Affected population:
Example: 10,000 users start onboarding each month.
Step 5: Hypothesis
Expected improvement range:
Example: Reducing form fields will increase completion rates by 10-15%.
Data source/benchmarks for the estimate:
Example: Historical data shows similar changes improved completion rates by 12%.
Example: UX industry benchmarks indicate a range of 5-15% for form simplification.
Step 6: Modeled impact
New metric value:
Example: 66-69% completion rate (10-15% improvement on 60%).
Additional users completing onboarding:
Example: 600-900 additional users monthly.
Business impact:
Example: If each completed onboarding generates $20 in revenue, this adds $12,000-$18,000 per month.
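If it helps to sanity-check the range while filling in the template, the Step 4 arithmetic extends naturally to a low and a high estimate. Here is a minimal sketch using the same illustrative figures.

```python
# Range version of the Step 4 model, using the illustrative template figures
monthly_users = 10_000
baseline_rate = 0.60
revenue_per_completion = 20

for improvement in (0.10, 0.15):   # conservative and optimistic estimates
    improved_rate = baseline_rate * (1 + improvement)
    extra_completions = monthly_users * (improved_rate - baseline_rate)
    extra_revenue = extra_completions * revenue_per_completion
    print(f"{improvement:.0%} improvement -> {improved_rate:.0%} completion, "
          f"{extra_completions:.0f} extra users, ${extra_revenue:,.0f}/month")
```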
Step 7: Evaluation plan
How will you evaluate the hypothesis?
Example: Conduct usability testing with a prototype.
Example: Run a cohort analysis to track behavior over time.
Step 8: Next steps
Develop a simplified prototype for testing.
Evaluate the hypothesis.
Share results and adjust the estimate as needed.
Estimating potential improvements transforms your research from observation to actionable strategy. It allows you to connect user needs with business goals, ensuring your insights drive measurable change. By presenting clear, data-backed outcomes, you build trust with stakeholders and increase the likelihood of implementation. This practice not only amplifies the impact of your work but also positions you as a strategic partner in product development. Over time, this approach strengthens your credibility and helps you deliver meaningful results for both users and the business.
Is there anything that’s worked super-well for you that I didn’t mention or that you totally agree with? Share in the comments 🙏
📚 Additional resources to explore
The Impact Membership: A space for user researchers who think bigger
You know your craft. You’ve run the studies, delivered the insights, and seen what happens when research is ignored. You’re ready to go beyond execution and start making real strategic impact, but let’s be honest: that’s not always easy.
That’s where the Impact Membership comes in.
This is not another free Slack group or a place to swap survey templates. It’s a curated community for mid-to-senior user researchers who want to:
Turn research into influence – Get insights to stick, shape product and business strategy, and gain real buy-in.
Break out of the research silo – Learn from peers facing the same challenges and work through them together.
Stay sharp and ahead of the curve – Dive deep into advanced research strategy, stakeholder management, and leadership.
Why join now?
You don’t have to figure this out alone – Every member is carefully selected, so you’re learning alongside people who truly get it.
Get real value, fast – No fluff, no generic advice—just focused conversations, expert-led sessions, and practical guidance you can use right away.
Make it work for you – Whether you want to participate actively or learn at your own pace, there’s no pressure—just a space designed for impact without overwhelm.
Membership fee: £627/year or £171/quarter
This isn’t just about keeping the lights on. Your membership funds exclusive research initiatives, high-caliber events, guest speakers, and a space that actually pushes the field forward.
Spots are limited because we keep this community tight-knit and high-value. If you’re ready to step up and drive meaningful change through research, we’d love to have you.
Stay curious,
Nikki