In the early 90s, Kellogg’s cereals were synonymous with breakfast in several countries around the world, but the brand faced a rude jolt upon entering the Indian market. Why? Because it missed one important nuance: Indians predominantly have hot milk, with a sweetener, for breakfast, a habit cultivated in childhood and carried well into the late teens. But what happens when you add hot milk to cereal? It turns soggy and chewy, which is extremely unappetising. So Kellogg’s made its way out of the Indian home quicker than it had entered.

This is a classic example of why thorough usability testing is crucial for a product’s success: relevance, recall and relatability are key to any product.

In this blog, I will take you through the Usability Testing (UT) framework we use while designing a product, and how we ensure our results are accurate and free of respondent bias.

Understanding Usability Testing

Technically speaking, usability testing means putting a product in front of real users, typically before it is developed or launched, to get a clear understanding of how they interact with the interface. Depending on requirements, it can also happen at other stages of product development.

At Meesho, we believe in the mantra of User First, so it’s imperative that we intently listen to the user’s voice at all times. From designers to product managers to just about any stakeholder you can think of, we all directly interact with users to understand their challenges.

Generally, we conduct UTs by showing users visual screens, or by asking them to test our prototype via screen sharing or in-person interviews. That helps us gauge their intent throughout the journey. This kind of interaction is especially important for us — nearly 80% of Meesho’s consumers and sellers come from Tier 2+ towns, a vast cross-section of them new to e-commerce. By virtue of being the first-movers in this cohort, we discover unique challenges and, often, unexpected insights. Here’s an example.

While working on a project around price recommendations, we identified that suppliers were facing challenges in getting a read on how competitors’ pricing might be impacting their sales prospects. We were giving them recommendations, but were they really understanding them and using them to good effect? We decided to take a deeper look.

During initial research, we focused on sellers’ comprehension of content and design. Instead of asking them leading questions like “What do you comprehend from the Action tab?” or “Will you use it as reference to understand the competition?”, here’s what we did.

  • We asked sellers how they went about accepting price recommendations. For example: “Do you know at what price other sellers are selling this product?” This made them think holistically instead of focusing on just the Action tab.
  • On the ‘Compare Price’ screen, we wanted to check whether sellers fully comprehended the impact of accepting our recommendations. Because we had made them comfortable at the outset with a relaxed conversation, they shared their points of view without the fear of being judged. We learned that they wanted more context, such as competitors’ ratings and other parameters, before accepting price recommendations.

Based on these learnings, we bolstered the design with improved messaging and visual elements that highlighted the magnitude of the missed opportunity. Key updates in the redesign:

  • We highlighted the ‘Losing Orders’ pill in red to create a sense of urgency. Since the number of orders is a seller’s primary motivation, this gives them a clear idea of what they could gain by pricing their products right.
  • We added ‘ratings’ and ‘losing orders’ pills, along with a ‘trending up’ icon, to highlight numbers in the price comparison modal that appears after the user clicks the ‘Compare Price’ CTA.

Old Design


New Design


Names of sellers, products, pricing, etc. in the above visuals are fictitious and meant for representative purposes only

Our UTs gave us tremendous insight into how we could make the screen more informative and actionable for sellers. The final product delivered impressive results.

  • Users were able to spot the ‘Action’ column and comprehend the information easily. They no longer had to go back to the Meesho app or website to compare prices, which was their earlier behaviour.
  • Apart from product parameters, they were also able to see and compare ratings and number of orders, which really contextualised the information for them.

Conducting an effective usability test

  • Robust UT requires the respondent to be comfortable and feel at home. A good conversation starter I use is: “What was your last purchase from Meesho?” In the case of sellers, it could be: “When was the last time you used our seller dashboard?”
  • Keep it simple. Instead of confusing users with tech jargon, use visual cues such as the colour of the mic/arrow for screen sharing, or its positioning relative to a known icon. I sometimes share my own screen to explain how they can mute/unmute.
  • While explaining the agenda of the call, one common mistake is to start with statements like: “We are going to show you something new/interesting/exciting.” This creates a bias at the very outset, and users will subconsciously start looking for differences.

Eliminating biases

Ideally, you should let the respondent guide you through the overall journey and share their views, nudging them whenever required to articulate their thoughts. However, such sessions — where the respondent leads the conversation — are a rarity, and most interviews require you to be in the driver’s seat. Here’s how to lead the test:

  • Ask open-ended questions so that users can freely talk about their experience. Give them room to express themselves and space to voice out their opinions.
  • Make sure to control your reactions, especially when receiving negative feedback about your product or work. Users might stop sharing honest feedback if they sense you are getting defensive.
  • Instead of asking leading questions, give your users a task to perform. I prefer this approach while doing an A/B test for screens — it clearly points out which version works better.
  • In scenarios where the user asks “Should I click here?”, resist the temptation to say “Yes”. Instead, ask them: “What do you think would happen if you click here? Where would you click if you want to perform ‘X’ action?”

Usability testing is an evolving process. While each conversation will have its own set of challenges and speed bumps, it will eventually give you valuable experience and lessons that will come in handy in the future. Trust the process.


Author: Swarnima Shukla, with inputs from Rahul Srivastava
Blog cover: Rahul Prakash