“If there’s a conflict in opinion between the customer (or engineering or visual design) and UX over an interface element or workflow, how do you typically resolve the issue?” My boilerplate answer has always been “User testing”. I’m arguing that the shopping cart should have a task menu, and the client wants to see one button for each task? Do some testing and let the numbers do the deciding. Seems perfectly reasonable and inarguable as long as we all agree that the testing method is valid, right?
If you were Jakob Nielsen, you would say, “eh… sort of.”
In this little blurb from his site, Nielsen aggregates data from roughly 300 of his tests where both satisfaction and performance metrics were gathered. The overall results are a little surprising: although user satisfaction and user performance correlated as expected, the correlation was only about 70%.
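To see why a 70% correlation is weaker than it sounds: a Pearson coefficient of r = 0.7 means the shared variance is only r² ≈ 0.49, so roughly half of what one metric measures is not captured by the other. The sketch below uses hypothetical per-study scores (not Nielsen’s actual data) just to show what the coefficient computes.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-study scores: subjective satisfaction (1-7 scale)
# paired with objective performance (% of tasks completed).
satisfaction = [5.1, 6.2, 4.0, 5.8, 3.2, 6.5, 4.7, 5.5]
performance  = [62,  80,  55,  70,  48,  85,  40,  74]

r = pearson(satisfaction, performance)
# r is positive but well below 1, and r*r (shared variance) is
# smaller still -- the gap is the part of the story each metric misses.
print(round(r, 2), round(r * r, 2))
```

The point of the toy numbers is only the shape of the argument: even when the two measures track each other reasonably well, the unexplained remainder is large enough that dropping either one discards real information.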
I have often been guilty of being overly critical of qualitative research. I just could not understand the rationale behind gathering data that was not always verifiable, repeatable, or representative of the intended user base. I’ve always championed objective, mechanical metrics because I’ve been in too many pitches where the most persuasive speaker sells the least practical, least forward-thinking, least usable UI or UX solution.
But this article by Nielsen is a bit of a wake-up call. He does point out that the paradoxes are only weak ones; in other words, there were cases where satisfaction was high with a poorly performing interface, and well-performing interfaces that users found unsatisfying, but the divergence never went beyond a certain threshold.
Nielsen draws his own conclusions from these findings, but my takeaway is that subjective and objective analyses are not telling two sides of the same story; instead, they are telling two parts of a larger story, and to exclude one is to miss up to a third of what’s really happening in your design. I was wrong, and now I’m a convert.