Competitor Usability Testing
Competitor usability testing is the practice of observing members of our target market as they use a competitor's product or service, to gain insight into users' mindsets, common issues, and potential improvements for our own product. In some cases, this can reveal the need for an entirely new product. In other cases, it can show which parts of a competitor's product are unnecessary.
This is almost identical to — but not to be confused with — competitive usability testing, which tests an existing product against existing competitors to establish which product is “winning.”
This method can help answer questions and explore concepts such as:
- What is the minimum feature set to solve the problem?
- How important is design?
- Value proposition
- Key activities
- Generative product research
The process of conducting competitor usability tests is the same as for testing our own product; we simply apply the familiar methodology to a different end by testing a competitor's products or substitute goods.
Whereas usability testing is an evaluative method that verifies our own product functions well enough to deliver its value proposition, competitor usability testing is a generative method intended to produce ideas for a potential solution.
For example, to generate ideas for a better U.S. tax experience, we could conduct usability testing on tax preparation in Sweden or India. The results and observations would not tell us whether the U.S. tax experience is good (it is not), but they may give us ideas about improving the comprehensibility of the tax code, the tax submission process, or the tax rules themselves.
Any results should be treated as generative rather than evaluative. The ideas generated tend to be unstructured and piecemeal, so they must be carefully integrated into a viable solution.
Before building a solution, any ideas should be tested via alternative generative product research methods such as Solutions Interviews or Concierge Testing.
Be mindful of common biases when running these sessions:
- Hawthorne effect (the observer effect): Users may behave differently when attempting to complete a task because they are aware of being observed.
- Confirmation bias: Experimenters sometimes ask questions or frame use cases in a way that leads the user's response or action to confirm their preconceptions, hypotheses, or beliefs.