How to run a quantitative usability test
And use it to continuously prove your impact
Hi, I’m Nikki. I run Drop In Research, where I help teams stop launching “meh” and start shipping what customers really need. I write about the conversations that change a roadmap, the questions that shake loose real insight, and the moves that get leadership leaning in. Bring me to your team.
Paid subscribers get the power tools: the UXR Tools Bundle with a full year of four top platforms free, plus all my Substack content, and a bangin’ Slack community where you can ask questions 24/7. Become a paid subscriber if you want your work to create change people can feel.
“But it gives us numbers.”
“No thanks.”
That’s the first thing I said when faced with the suggestion of conducting a quantitative usability test.
I was more than happy with my conversation-filled qualitative usability tests. Asking people about their immediate reactions, how they perceived the screens, and their general thoughts on what was confusing and missing. It was a routine I quickly felt comfortable in and enjoyed.
But there’s always a but, isn’t there?
It came when I was sitting with my team, chatting about a flow that many of our users were having trouble with. I had triangulated data from previous research where the topic had come up, from customer support tickets, and from account managers.
I told my team that we had enough to understand most of the pain points, especially the important ones, and make changes. I was at a point in my career and at the organization where I was privileged enough to be able to say no to user research requests.
Luckily (at the time, it felt unfortunate, but for my career, it was a good thing), my manager was sitting in the meeting. He’s one of the most fantastic managers I’ve ever had (Hi, John 👋🏻), and he asked me a smoldering question:
“How will we know if the changes we make improve the usability?”
Of course, John already knew the answer to his question, but he directed it to me. I knew what he was going for, but I was terrified of the answer.
All I wanted to do was make the changes and then ask the users if the changes we made were helpful — I was even willing to sift through the customer support tickets for the next few months to see if complaints decreased. Anything to stay away from the numbers.
But my team was beaming. This was exactly what they wanted: a clear and straightforward way to measure usability and progress. There was no backing out. It was finally time for me to conduct quantitative usability tests.
And I am so glad for that push because they have become an absolutely essential part of my user research toolkit and have also helped me become a well-rounded (and promoted!) user researcher.
What is quantitative usability testing?
Usability tests, as a whole, are about having participants attempt the most common and important tasks in a product or service. While you conduct the test, you, as the researcher, are looking for problems the participant runs into. You then take these problems to your team and, together, brainstorm ways to fix the usability issues — which are sometimes simple and other times complex.
With qualitative usability tests, you are talking to the participants and describing the different reactions, perceptions, or issues they encounter.
However, with a quantitative usability test, you can still describe the problem, but you also measure:
How many people encountered a problem
How many people were able to complete the tasks
The time it took them to complete tasks
How many errors participants ran into
What types of errors participants encountered
Participants’ perceptions of usability
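To make the metrics above concrete, here's a minimal sketch in Python of how you might compute completion rate, time on task, and error rate for a single task. The session records and their field names are made up for illustration, not from a real study:

```python
# Illustrative only: hypothetical per-participant records from a
# quantitative usability test of one task. Values are made up.
sessions = [
    {"completed": True,  "seconds": 48,  "errors": 0},
    {"completed": True,  "seconds": 95,  "errors": 2},
    {"completed": False, "seconds": 120, "errors": 3},
    {"completed": True,  "seconds": 60,  "errors": 1},
]

n = len(sessions)

# Completion rate: share of participants who finished the task.
completion_rate = sum(s["completed"] for s in sessions) / n

# Time on task: commonly reported for successful attempts only,
# since failed attempts end for different reasons.
successful_times = [s["seconds"] for s in sessions if s["completed"]]
mean_time = sum(successful_times) / len(successful_times)

# Error rate: average number of errors per participant.
mean_errors = sum(s["errors"] for s in sessions) / n

print(f"Completion rate: {completion_rate:.0%}")  # 75%
print(f"Mean time on task: {mean_time:.1f}s")     # 67.7s
print(f"Mean errors: {mean_errors:.2f}")          # 1.50
```

Even a tiny script like this gives you numbers you can compare before and after a redesign, which is the whole point of retesting.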
With quantitative usability testing, you can find out a lot of important information that helps you demonstrate the impact of your research. For instance, when I was working at a travel company, we conducted a quantitative usability test on our checkout flow.
We found that people were taking a long time to fill out information that ultimately wasn’t that relevant and, thus, dropping off and abandoning the flow for a competitor that was easier to use.
Based on these results, we made some significant changes and retested the flow after the improvements were made. We reduced the time it took to fill out information by 50% (which was faster than people could do on the competitive product as well), and we reduced abandonment by 35%. This meant that we increased revenue by £75,000 annually.
Big impact.
When it comes to measuring usability, we can break that down into three major areas:
Effectiveness: Whether a user can accurately complete tasks and the overarching goal

Efficiency: How much time and effort it takes the user to accurately complete tasks and the overarching goal

Satisfaction: How comfortable and satisfied a user is while completing the tasks and goal
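Of the three areas, satisfaction is the one that usually needs a questionnaire. A common choice is the System Usability Scale (SUS): ten statements rated 1 to 5, where odd-numbered items are positively worded and even-numbered items are negatively worded. Here's a sketch of the standard SUS scoring, with one made-up set of answers:

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5.

    Odd-numbered items (positively worded) contribute (score - 1);
    even-numbered items (negatively worded) contribute (5 - score).
    The summed contributions are multiplied by 2.5 to give 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: one participant's (made-up) answers.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

In practice you'd average SUS scores across participants and report that alongside your completion and time-on-task numbers.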
Below, I walk you through:
How exactly to run a quantitative usability test for your team (with examples from my work)
The most important metrics for you to know
How to write quantitative usability tasks and scenarios that give just enough detail to your participants
How to analyze the results for tangible benefit to your team
Exclusively for paid subscribers


