How We Conducted 60+ User Interviews (+ What We Learned)

Published: September 25, 2024
Last updated: October 13, 2024
Read time: 4 min
Author: Jenna Pitkälä

Within 2 months, we interviewed over 60 Wudpecker users to hear about their experience with our product and gather their feedback. That’s about 1.5 interviews per working day!

I’m going to break down how we…

  • found users to interview
  • conducted the meetings
  • used the feedback to develop our product iteratively

… and what we learned along the way.

How We Found Users to Interview

The first question to tackle was: How will we get users to spend their precious time meeting with us?

We’d have to send eye-catching emails that made booking a meeting with us feel worth the time. Here’s what that meant in practice.

Flexible time slots

We approached scheduling with flexibility, allowing users to choose between 15- or 30-minute time slots, making it easy for them to commit to a meeting without feeling overwhelmed. Most of the time, good discussions lasted 30 to 45 minutes. Usually, everyone was happy to extend the meeting because their schedules allowed it and the conversation was engaging.

Gift card incentives instead of multiple follow-ups

We could either go with a slow but budget-friendly approach or an expensive but fast one. Our priority was speed, so we offered a $50 Amazon gift card as an incentive.

We wanted to save time. And time we did save.

The result: 90% of meetings were booked after a single email. There was barely any need to badger our busy users with follow-ups.

4% of all delivered emails got us a meeting with a user, while the average email click-through rate (CTR) is around 2.6-3%.

On average, for every 25 emails delivered, we got 1 meeting booked.
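To make the funnel math concrete, here is a tiny sketch of how the booking rate falls out of delivery and booking counts. The counts below are made up for illustration; only the resulting ratios match what we saw.

```python
# Toy numbers for illustration only -- not our actual campaign counts.
emails_delivered = 1500
meetings_booked = 60

booking_rate = meetings_booked / emails_delivered
print(f"Booking rate: {booking_rate:.1%}")                           # 4.0%
print(f"Emails per meeting: {emails_delivered // meetings_booked}")  # 25
```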

As we started sending emails, we also formed an understanding of the ratio between emails sent and meetings booked. We got into a nice rhythm of when and how many emails to send to which cohorts.

screenshot from SendGrid
Based on this SendGrid data, we can see that most engagement for this particular email happened on the same day it was sent, and all engagement stopped about a week later.

Tools We Used:

  • PostHog for fetching contacts
  • Cal.com for letting users book a meeting with us straight from a calendar
  • SendGrid for sending emails to the contacts (a rough sketch of this step follows below)
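For a rough idea of what the sending step looked like, here is a minimal sketch using SendGrid’s official Python client. The addresses, subject line, copy, and Cal.com link are placeholders for illustration, not our actual campaign details.

```python
# Minimal sketch of sending one outreach email with SendGrid's Python client.
# Addresses, copy, and the Cal.com link are placeholders, not our real campaign.
import os

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

message = Mail(
    from_email="team@example.com",
    to_emails="user@example.com",
    subject="15 minutes of your time for a $50 Amazon gift card?",
    html_content=(
        "Hi! We'd love to hear how Wudpecker has been working for you. "
        'Pick a slot that suits you: <a href="https://cal.com/your-team/user-chat">'
        "book a 15- or 30-minute call</a>."
    ),
)

sg = SendGridAPIClient(os.environ["SENDGRID_API_KEY"])
response = sg.send(message)
print(response.status_code)  # 202 means SendGrid accepted the email for delivery
```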

Which Users Should We Reach Out to?

Sending the same email to literally all users at the same time wouldn’t have been very efficient. Instead, we took a more targeted approach: emailing one smaller group at a time to sound less spammy and to test the waters.

Here are a few examples of our different cohorts.

(1) Recently Churned Users

We first sent emails to people who had stopped using our product completely, but only recently, so everything would still be fresh in their memory: how they had intended to use Wudpecker, and why they ultimately stopped. This kind of feedback would give us more perspective while developing our product further.

Why: We thought we shouldn’t stay in a positive bubble. We had to hear from the people who were not satisfied with our product to better see the gaps in our thinking. While some people might churn because our product simply wasn’t what they ended up needing, others might have become regular users if they hadn’t hit some roadblock. We’re interested in finding and solving those roadblocks.

(2) Power Users

Next, we reached out to power users: the ones who use Wudpecker regularly and tend to explore most of its features. These users know the product inside and out, so they’re in the best position to tell us what’s working well and where we might need to improve. By gathering their feedback, we can figure out which features are hitting the mark and which could use some tweaks or new additions.

Why: Power users usually represent the ideal use case of the product, so their feedback helps us understand what keeps them engaged. They’re also the ones most likely to uncover hidden pain points or areas for optimization that casual users might not even notice. Their insights help us make sure we’re improving the right features without alienating the users who rely on them the most.

(3) Users Who Paused But Came Back

We also focused on users who paused using Wudpecker for a while but then decided to return. This group is interesting because they went through a phase of disengagement but still found value in the product that brought them back. Their feedback gives us a better sense of why people leave and, more importantly, what brings them back, helping us build better strategies for retention and product development.

Why: These users have seen both sides of the journey—what made them stop using the product and what convinced them to give it another go. Understanding that can help us identify any friction points that might be causing others to leave for good, while also showing us what parts of the product are delivering real value. This group of people might’ve even tried out competitors in the meantime. Through them, we can get a clearer picture of how Wudpecker stacks up in the market and what makes people choose us over the alternatives.
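To make the cohort idea concrete, here is a minimal sketch of how users could be bucketed into these three groups from basic usage data. The field names, thresholds, and example users are assumptions for illustration, not our actual PostHog setup.

```python
# Minimal sketch of bucketing users into outreach cohorts from usage data.
# Field names and thresholds are illustrative assumptions, not our PostHog queries.
from datetime import date

TODAY = date(2024, 9, 1)

def assign_cohort(user: dict) -> str | None:
    """Classify a user by recency and volume of their recorded calls."""
    days_since_last_call = (TODAY - user["last_call"]).days
    if 14 <= days_since_last_call <= 60:
        return "recently_churned"      # stopped recently, memory still fresh
    if days_since_last_call <= 7 and user["calls_last_30_days"] >= 12:
        return "power_user"            # active and heavy usage
    if days_since_last_call <= 7 and user["had_30_day_gap"]:
        return "paused_but_returned"   # left for a while, then came back
    return None                        # not part of this outreach round

users = [
    {"email": "a@example.com", "last_call": date(2024, 8, 1),  "calls_last_30_days": 0,  "had_30_day_gap": False},
    {"email": "b@example.com", "last_call": date(2024, 8, 30), "calls_last_30_days": 20, "had_30_day_gap": False},
    {"email": "c@example.com", "last_call": date(2024, 8, 29), "calls_last_30_days": 3,  "had_30_day_gap": True},
]

for u in users:
    print(u["email"], assign_cohort(u))
```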

⚜️ Lessons Learned ⚜️

We found that different cohorts of users offer different perspectives, so it’s important to choose who you’re reaching out to.

Recently churned users help highlight roadblocks, power users show what’s working well, and users who left and returned give insight into what makes your product competitive. Understanding who to interview and why is just as important as the questions you ask.

How We Ran the Meetings

Overall, we went through 2 phases when conducting the user interviews in question.

First, we wanted to understand the broad strokes of how people use our tool, Wudpecker (which is an AI notetaker that we also used to record these sessions).

Then, we explored more specific user experiences on a deeper level.

Generally, all our meetings followed this basic structure:

  1. General level – figuring out the user's context and what kind of feedback we can expect to get
  2. Deeper insights – asking more detailed questions to fully understand needs, pain points and so on
  3. Validation – getting perspective on our product decisions

However, depending on the phase, the specific questions we asked varied.

photo of Hai and Jenna conducting a user interview

Phase 1: Scoping Out How Our Tool Is Being Used

In this phase, we focused on getting a broad overview of how users were interacting with Wudpecker (an AI notetaker) across different contexts.

Example Questions:

  1. General level: What kind of meetings do you use Wudpecker for?
  2. Deeper insights: In those meeting notes, what insights are most valuable to you? Can you give an example?
  3. Validation: Are the summaries we provide detailed and accurate enough? Have you had to edit them?

This phase was crucial for establishing a baseline understanding of Wudpecker’s usage. It helped us identify common patterns across different types of users and their pain points.

We knew we had reached a saturation point when the feedback from users became repetitive and predictable. At this point, we had gathered enough data about the typical ways people were using Wudpecker and their feedback.

Now we could move on to a different approach of interviewing.

Phase 2: Improvising and Digging Deeper into Individual Use Cases

Once we had the general landscape mapped out, the second phase of our interviews focused on uncovering deeper insights by allowing for more improvisation and flexibility.

Instead of sticking to a rigid script, we tailored each conversation to the user's specific context and the feature we wanted to explore.

We could pinpoint specific issues with features and onboarding processes that we hadn’t fully understood in the first phase.

How This Phase Was Different

  • Greater flexibility: Instead of rigidly sticking to a preset list of questions, we allowed the conversation to flow naturally based on the user's responses. This approach helped us uncover more detailed insights into specific use cases.
  • Going beyond surface-level questions: Previously, we didn’t have time to get into the reasons a user might, for example, copy parts of their Wudpecker notes into a CRM tool. Now we did, and we tried to understand the reasons behind such behaviors: What does the other tool offer that we don’t? Is it reasonable to imagine Wudpecker one day providing that same feature and eliminating the need for the other tool?

Example Questions

  1. General level: What were your first impressions of the new Collections feature?
  2. Deeper insights: Can you walk us through exactly how you tried to use the feature and why?
  3. Validation: Did the onboarding process for this feature feel clear or confusing? How could it be improved?

⚜️ Lessons Learned ⚜️

(1) Change Strategy When You’re Not Learning Anything New

Initially, we focused on understanding general usage patterns, but once we had enough high-level insights, we dug deeper into specific user experiences and pain points. This shift ensured we weren’t just gathering surface-level information but also more detailed, actionable insights.

(2) Concrete Examples Are Invaluable

Encouraging users to share their screens or walk you through their exact process with examples provides valuable context for understanding their interaction with the product.

Some users may be hesitant to share too much detail, so it’s important to reassure them that they don’t need to disclose anything confidential. If they’re still uncomfortable, it’s fine to leave it at that—no need to push further. However, it’s always worth asking, as these real-world examples are the most effective way to understand what the user is trying to communicate.

(3) Expect the Unexpected

Conversations don’t always go as planned, and that’s perfectly fine. While it’s important to let the conversation flow naturally, it’s equally crucial to know when to gently guide it back on track. Flexibility is key—sometimes tangents reveal the most important information, but other times it’s more effective to refocus on your main questions. Keep your core objectives in mind, or you risk getting sidetracked by unexpected detours.

How We Improve Our Features Based on Feedback

Improving a product and its features isn’t a one-time task—it’s an ongoing process of listening, iterating, and refining.

At Wudpecker, we use a simple, repeatable cycle to continuously improve our features based on user feedback.

This process involves three key stages: Plan & Develop, where we define and build the next steps; Test & Gather Feedback, where we release updates and collect insights from users; and Refine, where we analyze feedback and adjust our vision for the feature.

Each cycle brings us closer to aligning the product with what users truly need.

a visualization of the iterative process and its 3 steps

Feature Example: Collections

For context, Wudpecker is an AI notetaker that records work meetings, automatically generates meeting minutes with AI, and saves the notes in your account.

Once the calls start piling up, it makes sense to be able to organize them into folders, or as we call them, Collections.

Here’s a simplified version of how our iterative process started out with Collections.

First Cycle: The Basics

1) Plan & Develop

We had heard from some users that they copy and paste their Wudpecker notes into a separate tool in order to organize them into folders. In addition, we personally had the need to find our calls in a more systematic way.

We would develop a basic version of Collections (with the ability to create a new collection, add calls to one, and so on).

a gif showing how the first version of collections looked
What the first version of Collections looked like on Wudpecker

2) Test & Gather Feedback

In the following user interviews, we heard that the Collections feature sounded great in principle, but that it was too much work to manually add every single call to its respective folder. We received practical ideas, such as grouping calls based on the participants’ email domain.

We also identified the need for people to have more personalized meeting summaries. Not just for each user, but for each collection. This should also be automated.

So, we had a couple of different new additions to Collections.

3) Refine

Based on the feedback, we were going to add Settings for Collections. The settings would allow the user to…

  • customize the summaries for all calls in each collection based on their own instructions
  • automate organizing calls into each collection based on certain requirements (a rough sketch of such a rule follows below)
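To illustrate the kind of automation rule we had in mind, here is a minimal sketch of matching a call to collections by participant email domains. The data shapes and rule format are assumptions for illustration, not Wudpecker’s actual implementation.

```python
# Minimal sketch of auto-assigning calls to collections by participant email domain.
# Data shapes and rule format are illustrative assumptions, not Wudpecker's implementation.

collection_rules = {
    "Acme Weekly Syncs": {"domains": {"acme.com"}},
    "Sales Calls": {"domains": {"bigcorp.io", "globex.com"}},
}

def match_collections(call: dict) -> list[str]:
    """Return the collections whose domain rules match any participant of the call."""
    participant_domains = {email.split("@")[-1].lower() for email in call["participants"]}
    return [
        name
        for name, rule in collection_rules.items()
        if participant_domains & rule["domains"]
    ]

call = {
    "title": "Q3 roadmap review",
    "participants": ["jane@acme.com", "hai@wudpecker.io"],
}
print(match_collections(call))  # ['Acme Weekly Syncs']
```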

Second Cycle: Settings for Each Collection

1) Plan & Develop

We created settings for collections with a different UI and new options: customizing summaries and automatically adding calls to the collection.

a gif showing the newer version of collections
Roughly how the second iteration of Collections looked

2) Test & Gather Feedback

Luckily, soon after the changes, we got a few users to share their screens and walk us through their thought process while navigating the new UI. What we soon learned was:

  • Some users hadn’t realized that Collection settings existed and that they could automate more processes
  • Others who had found the settings had misunderstood the instructions

3) Refine

We redid the copywriting of the settings and brainstormed ideas for onboarding users on new features in general.

After this, the third cycle would start. Then the fourth, the fifth, and so on.

We can't know how many cycles there will be in the end. This was a demonstration of what different stages of product development can look like in practice.

While the work is never truly "finished," this iterative approach ensures we’re always improving and adapting to meet our users' needs in meaningful ways.

⚜️ Lessons Learned ⚜️

No matter how hard we try, we can’t help but be limited by our own mental bubbles and assumptions about the products we’re building and how they will be received. We have to continuously seek outside perspectives.

Sometimes changes and details that seem trivial to us might make a big difference in how users realize what they can do with our product. When you remember to keep an open mind and work with an iterative process, small changes will pile up over time and have a huge impact.

Conclusion

At Wudpecker, improving our product is a continuous, iterative process driven by real user feedback. Over the course of this year, we’ve learned valuable lessons from interviewing more than 60 users and changing our interviewing strategy along the way.

The key takeaway? Communicate with your users, adapt to their feedback, and continuously improve. If you don’t, you’ll fall behind the competition, and the quality of your product will suffer. We have a real incentive to try to perfect our product since we personally use it every day. But even if you don’t use your own product daily, learning from your users is still going to be worth it.
