Guide to Self-Assessment for 1x1 Interviews
How to give yourself feedback on your 1x1 interviews
👋🏻 Hi, this is Nikki with a 🔒 subscriber-only 🔒 article from User Research Academy. In every article, I cover in-depth topics on how to conduct user research, grow in your career, and fall in love with the craft of user research again.
One of the cringiest things in the world is listening back to my old 1x1 research interviews. Not only can I not stand the sound of my own voice, but, wow, I made a lot of mistakes over and over and over (and over… you get it) again.
For a large part of my career, I was a solo user researcher and didn’t have a user research manager. I was typically managed by designers, product managers, data analysts, or marketers. In this situation, I really struggled to find ways to improve my craft. Yes, I could ask my peers and manager, but I was meant to be “the expert,” and there was no one at my organizations with more expertise than me.
(Sometimes that in and of itself was a very scary and overwhelming thought).
So, when I would look around, trying to figure out how I could improve the actual concrete craft, I was left feeling alone and confused. How was I supposed to improve my interviews when no one felt like they could give me feedback (or had time to, for that matter)?
I’ve heard this same point reiterated by many user researchers on their journeys to improvement: how do you improve when no one is there to give you advice?
Why Self-Assessment Is Important
Being aware of your skillsets and what you need to do to improve is honestly one of the cornerstones of personal and professional development. Have you ever tried to give feedback to someone who didn’t think they had to improve? Or have you ever continued to try the same approach repeatedly with crappy results?
If you’re nodding, I’ve been there too.
By being open to self-assessment, and taking the time to hone your skills, you are able to stay ahead of the (competitive) curve. I found it hugely beneficial when I could clearly articulate my strengths and weaknesses within job interviews or performance reviews — highlighting what I needed to improve didn’t mean I wasn’t a good researcher, but showed others that I had a plan for continuously progressing in my career.
Self-assessment is not just a practice; it’s a mindset that fosters continuous learning and improvement. It helps you stay competitive, gather higher quality insights, grow professionally, and enjoy benefits that make your work more efficient and satisfying. Embracing self-assessment equips you to be a better user researcher.
DIY Assessment
I quickly learned that, if I was going to get better, I had to find a way to DIY my feedback. As much as I just wanted to sit and hope someone would hand me the magic bullet of feedback, I had to face reality: as someone who enjoyed being a solo or first UXR, I wasn’t always going to have the luxury of a user research manager or peers to give me feedback.
Luckily, at one point in my career, I had a fantastic manager who turned to me after I had finished some of my first generative research projects and said:
“It’s time for you to listen back to your interviews.”
I laughed because, well, I could barely even listen to my ten-second voicemail greeting (back when that was a thing and I spent like three hours crafting a funny yet cool voicemail), so how was I meant to listen to hours of my own voice? Bleh.
But, because my manager was awesome, I did it (still begrudgingly, of course) and, I must admit, I learned a lot. However, I got out of the practice when I had to juggle more and more work across teams and the organization.
When I started thinking about DIY-ing my feedback, my old manager’s voice echoed in my head and I realized I had to go back to assessing my own interviews through listening to them.
And I did. But, while that had been helpful in the past for understanding a bit more about how I could improve, just listening to my sessions wasn’t cutting it. I needed structure and something to provide more actionable feedback.
I went back to the drawing board and did some research, eventually finding Steinar Kvale’s criteria for a good interviewer. I used this as a springboard to create a structure for assessing my interviews more effectively, in a way that let me gather actionable feedback.
The Interview Assessment Structure
To give myself more structure, I used Kvale’s criteria, along with my previous experience assessing myself and others, to create an interview assessment sheet with the overall aspects to focus on, as well as examples for each aspect:
Familiarity with the Topic
You have researched the domain you are about to enter, including industry trends, jargon, and potential competitors. If you are conducting usability tests, you also have a functional knowledge of the prototype or product you will be testing. Being well-prepared ensures you can engage deeply and knowledgeably with your participants.
Being familiar with the topic also means understanding the context in which your users operate. This allows you to ask more relevant questions and understand the nuances of their responses. It’s about seeing beyond the surface-level interactions and diving into the underlying needs and motivations of your users.
Example 1: While preparing for an interview about a new financial planning app, you delve into current financial planning trends, familiarize yourself with common jargon, and study competitor apps. This way, when participants mention specific financial strategies, you can follow along and ask insightful follow-up questions.
Example 2: If you are testing a new e-commerce website, you spend time exploring the site, understanding its features, and identifying potential issues. When a participant struggles with the checkout process, you can quickly identify if it’s a known issue or something new that needs addressing.
The Interview Was Structured
You start the conversation by explaining what the participant can expect and lay out the purpose of the discussion. The beginning of a research session can make or break the entire interview. If the participant feels you are robotic and reading from a script, they may have a hard time opening up to you. Conversely, if you don’t adequately explain what the research session is about, you leave the participant in the dark, which can feel very unnerving.
A well-structured interview ensures that all necessary topics are covered without making the participant feel rushed or ignored. It also helps in keeping the conversation focused and on track, which is particularly important when you have limited time with each participant.
Example 1: Before diving into questions, you explain, “Today, we’ll talk about your experiences with online shopping. There are no right or wrong answers, and your feedback will help us improve our service.”
Example 2: You structure the interview with clear sections, starting with warm-up questions to make the participant comfortable, then moving to more specific questions about their experiences, and ending with a wrap-up that allows for any additional thoughts or questions.
Everything Was Clear
The questions you ask are short and straightforward, which is especially crucial in usability testing. We want our questions to be as open-ended as possible. I use the TEDW method:
Tell me about…
Explain…
Describe…
Walk me through…
These open questions lead to stories and conversations, which give us much-needed context and richer insights than a continuous stream of yes/no questions would.
Clear questions prevent confusion and ensure that the participant understands what is being asked. This clarity is vital for gathering accurate and useful data. It also helps in maintaining a natural flow of conversation, which can lead to more genuine and insightful responses.
Example 1: Instead of asking, “Do you like using our app?” you ask, “Can you walk me through how you use our app on a typical day?”
Example 2: In a usability test for a new feature, instead of saying, “Does this feature work for you?” you say, “Describe how you would use this feature to complete a task.”
With usability testing, tasks need to be clear and directive. Creating small scenarios behind each task helps participants relate better and provide more natural responses.
Example: IKEA’s usability test improved significantly when they changed “find a bookcase” to “You have over 100 books strewn around your apartment. Find a way to organize them.” This shift led participants to use the site more naturally, providing more valuable insights.
Few or No Interruptions