So my team’s task was: “How can we improve a certain feature in one of our products?”
We started out with a detailed walkthrough of the existing feature and discussed what was good and what we felt could be improved. This is where you discover the value of having many different stakeholders’ eyes on the task: when there are different approaches and opinions on how to simplify and improve, it adds great value along the way. That is what I’ve learned from the marathon part of the user-centered design process, and it’s the natural first step for gathering and sharing insights.
After lively discussions and presentations of a variety of exciting ideas and approaches, we began building a prototype. When we felt satisfied with our wireframe, we wanted to get a sense of how it would be received by our users and began preparing for user testing.
Back to our prototype. So now we have a pretty decent wireframe, and our next step is to pitch it to our stakeholders. Hopefully we get a go and can set up a team to do the design, handle the implementation, and provide our solution with tracking and hypotheses for A/B testing. We think, and have some preliminary evidence from our testing, that we have come up with a feature that adds more value for our customers.
But it’s not up to us as a company to decide what’s good and what’s not – we measure regularly and iterate by observing users’ actual usage of the application or service. I personally widened my understanding of collaborative user-centered work and have learned new things that I will bring into my daily work. And this is something that any team could arrange regularly as a fun and team-building event – combining business with pleasure.
Me together with an end-user – getting feedback on our ideas
In our case we had two users who felt appropriate for the task: they were in the right target group and had just started out with the product.
Seeing how users take on our solution is very interesting. Observing details in their body language and listening to their thoughts is very valuable. In our case, it felt like we were definitely on the right track – everything went exactly as it was supposed to.
But how do you find significance in user testing? That was one of my thoughts during the marathon. In short: how many user tests have to be done in order to get a reliable result? If we compare this with, for example, a customer survey, my guess is that it would need around 100 respondents to get a reasonably accurate result. So how can this be done in a more effective way? How do we combine traditional user testing with digital tools to drive improvements, A/B testing, performance, and so on?
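To put a rough number on the survey comparison: the standard margin-of-error formula for an estimated proportion shows why around 100 respondents is a common rule of thumb. A minimal sketch (the function name and the 95% confidence z-value are my own illustration, not from the text above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion
    estimated from n survey respondents. p=0.5 is the
    worst case (widest interval); z=1.96 is the 95% z-value."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(100), 3))  # 0.098 -> about +/-10 percentage points
print(round(margin_of_error(25), 3))   # 0.196 -> about +/-20 percentage points
```

So a 100-respondent survey pins an answer down to roughly plus or minus ten percentage points, which matches the intuition that you need on that order of respondents for a “reasonably accurate” survey result.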
But in discussions with UX design professionals, it turns out these are exaggerated expectations of significance for user testing at this level of work – we are not doing scientific research here. The most important goal is to validate our hypothesis and gather insights. The experience among the UX professionals is that you need a sample of 25–30 people to see proper tendencies, sometimes even fewer, depending on the testing method used.