Designing a User Interviews Tool for Lyssna


My Impact

As the lead designer, I drove this project from concept to delivery by:

  • Conducting user research with over 20 participants to uncover pain points and unmet needs.

  • Creating a detailed customer experience map to identify opportunities for innovation.

  • Designing a vision prototype that aligned stakeholders and set a clear direction for the team.

  • Prioritising features into must-have, good-to-have, and nice-to-have categories, ensuring focus on user needs.

  • Collaborating with engineering to translate designs into actionable user stories and shape the MVP.

  • Iterating on the MVP based on user feedback, refining the product to maximise customer satisfaction.


Background

UsabilityHub (since rebranded as Lyssna) is an Australian SaaS platform specialising in user research and usability testing. Known for its unmoderated usability testing tools like five-second tests and prototype tests, the platform had built a reputation for simplicity and ease of use. The company’s vision was to grow into a comprehensive "Swiss army knife" for user research, covering 80% of product research needs. However, the platform lacked support for moderated studies, a critical piece of that vision.

Problem

Through user feedback and churn data, it became clear that offering face-to-face interviews within the tool would add significant value. Researchers frequently described interviews as their most-used method, and without moderated studies, users often turned to competitors. This led to our main objectives:

  • Build a moderated study feature to streamline the logistical headaches of conducting user interviews, from finding, screening, and scheduling participants to managing incentives.

  • Enhance the platform’s appeal and increase customer retention.

Discovery

Given the size and complexity of the project, we committed to a deep-dive discovery phase, spanning a full month. During this time, we interviewed over 20 people from diverse backgrounds and regions. We explored their research habits, challenges, and unmet needs, as well as what they appreciated about their current workflows.

From these conversations, we created a customer experience map to visualize how users typically moved through their research process. This map helped us uncover opportunities for improvement. Based on those insights, we defined jobs to be done and categorized them into three buckets:

  • Must have – Absolutely critical to users’ workflows. These had to be compelling enough to pull users away from their current workflow and try the tool for free.

  • Good to have – Meaningful enhancements that could serve as competitive advantages but weren’t essential at launch. These features could draw free users towards paid plans.

  • Nice to have – Optional features that could delight users or serve a niche, making them good candidates for paid features.

This categorisation gave us a clear, structured roadmap of priorities, letting us focus first on the features that mattered most to our users. It also proved invaluable later, when we decided which features and functionality belonged in the free plan versus the paid plans.
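
As a purely hypothetical sketch of how these buckets might translate into plan gating, the snippet below keeps the must-haves free to pull users in and gates the rest behind the paid plan. The feature names and the non-must-have bucket assignments are illustrative assumptions, not Lyssna’s actual packaging.

```typescript
// Hypothetical sketch: mapping the three priority buckets to plan gating.
// Feature names and non-must-have buckets are illustrative assumptions.

type Priority = "must-have" | "good-to-have" | "nice-to-have";
type Plan = "free" | "paid";

const FEATURE_PRIORITY: Record<string, Priority> = {
  participantInvites: "must-have",
  screeners: "must-have",
  calendarIntegration: "must-have",
  automatedCommunications: "must-have",
  participantManagement: "good-to-have", // assumed bucket
  automatedIncentives: "nice-to-have", // assumed bucket
};

// Must-haves stay free to pull users in; everything else gates the paid plan.
function isAvailable(feature: string, plan: Plan): boolean {
  return FEATURE_PRIORITY[feature] === "must-have" || plan === "paid";
}
```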

Ideation

With these insights in hand, we defined a set of guiding principles to steer the design process:

  1. Familiar Patterns: Align the workflow with how users already set up unmoderated tests, so they feel at home right away. Reusing existing UI components also let us build faster.

  2. Ridiculous Simplicity: We aimed to create an experience that would feel delightfully easy compared to users’ current tools. This would be critical for adoption, retention, and word of mouth.

  3. Focus on Core Needs: Prioritise the must-have features—invites, screeners, calendar integration, and automated communications. These formed the basis of what would be included in the free versus paid plans.

  4. Trusted Integrations: Integrate with well-trusted tools like Zoom, Google Meet, Microsoft Teams, and Google Calendar rather than forcing users into proprietary solutions (sketched below).

The goal was to make the new feature feel like an extension of their existing workflow, not an entirely new process.
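
To make the fourth principle concrete, here is a minimal sketch of how slot availability could be checked against a host’s Google Calendar via the Calendar v3 freeBusy endpoint, so the scheduler only offers times when the researcher is actually free. This is an illustrative sketch, not Lyssna’s implementation; it assumes an OAuth access token has been obtained elsewhere, and the helper names are ours.

```typescript
// Illustrative sketch only: check a host's availability with the Google
// Calendar v3 freeBusy endpoint before offering interview slots.
// Assumes an OAuth access token has already been obtained elsewhere.

interface BusyInterval {
  start: string; // RFC 3339 timestamp
  end: string;
}

async function fetchBusyIntervals(
  accessToken: string,
  calendarId: string,
  timeMin: string,
  timeMax: string,
): Promise<BusyInterval[]> {
  const res = await fetch("https://www.googleapis.com/calendar/v3/freeBusy", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ timeMin, timeMax, items: [{ id: calendarId }] }),
  });
  if (!res.ok) throw new Error(`freeBusy query failed: ${res.status}`);
  const data = await res.json();
  return data.calendars[calendarId].busy as BusyInterval[];
}

// A candidate slot can be offered only if it overlaps no busy interval.
function isSlotFree(slotStart: Date, slotEnd: Date, busy: BusyInterval[]): boolean {
  return busy.every(
    (b) =>
      slotEnd.getTime() <= Date.parse(b.start) ||
      slotStart.getTime() >= Date.parse(b.end),
  );
}
```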

Design process

Armed with clear priorities, we spent another month creating a vision prototype. This prototype mapped the entire flow for moderated interviews, from participant invitations to scheduling and reminders. Every detail was carefully considered, and we documented assumptions, questions, and decisions along the way. The prototype served as a shared language across teams, helping everyone align on what success looked like.

The prototype became our single source of truth, allowing us to vet ideas early. Once we were confident in the direction, we translated these concepts into user stories for engineering to implement.

MVP & validation

To validate our approach, we focused on delivering a Minimum Viable Product (MVP). The goal was to get the product into users’ hands quickly and gather real-world feedback. The MVP included:

  • Participant Invitations and Screeners

  • Calendar Integrations

  • Automated Reminders (sketched below)
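
As a rough illustration of how the automated reminders could work, the sketch below derives reminder send times from a booked session’s start time and enqueues them, skipping any that are already in the past. The offsets and the enqueueEmail helper are assumptions for illustration, not Lyssna’s actual schedule or API.

```typescript
// Illustrative sketch only: derive reminder send times from a booked
// session and enqueue them. Offsets and the job-queue helper are
// assumptions, not Lyssna's actual schedule or API.

interface Session {
  id: string;
  participantEmail: string;
  startsAt: Date;
}

// Hypothetical job-queue helper that delivers an email at `sendAt`.
declare function enqueueEmail(to: string, template: string, sendAt: Date): void;

const REMINDER_OFFSETS_MS = [
  24 * 60 * 60 * 1000, // 24 hours before the session
  60 * 60 * 1000, // 1 hour before the session
];

function scheduleReminders(session: Session): void {
  const now = Date.now();
  for (const offset of REMINDER_OFFSETS_MS) {
    const sendAt = new Date(session.startsAt.getTime() - offset);
    // Skip reminders already in the past, e.g. a booking made less
    // than an hour before the session starts.
    if (sendAt.getTime() > now) {
      enqueueEmail(session.participantEmail, "interview-reminder", sendAt);
    }
  }
}
```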

From day one, we saw strong interest in the new feature: more than half of the active user base created at least one ‘Interviews’ project, supporting our theory that such a feature was highly desirable. Feedback gathered through Intercom was overwhelmingly positive. One enterprise user said, “It’s ridiculously simple—almost too easy,” which was exactly the experience we aimed to create, and another reported conducting 20 user interviews in a single week, the fastest pace they had ever achieved on a project.

However, we also found that, in the quest to make the interface less intimidating and more friendly, some important controls were less visible than expected. This left users a little uncertain about their choices and actions, even though everything functioned correctly. To improve the experience, we quickly identified key usability changes, such as enhancing control visibility and adjusting default settings.

As feedback poured in from internal and external users, we continued iterating on the MVP and reprioritised our roadmap.

Outcome

The moderated interview feature significantly expanded UsabilityHub’s capabilities and delivered tangible benefits:

  • Increased Engagement: Adding ‘interview studies’ to Lyssna’s toolset measurably increased active user engagement, showing that the product was not only desirable but also resonated with our target audience.

  • Reinforced Sentiment: Users praised the feature for its simplicity, reinforcing UsabilityHub’s reputation as one of the easiest user research tools to use.

  • Streamlined Research: Administrative headaches, like scheduling and participant management, became much simpler for users and for us; our own product development benefited from the tool.

  • Foundation for Growth: The MVP’s success earned us permission from users to build on top of it with advanced features, like participant management, panel integration, automated incentives, and much more.

  • Increased Revenue: The integration of ‘Interviews’ with the ‘UserCrowd’ panel turned out to be an incredible success, significantly increasing revenue by attracting more users who were actively recruiting panelists for interviews.

  • Increased Retention: Customers now had a reason to stick around longer, as they could handle both unmoderated and moderated research in one platform, and the early signs were looking positive.
