Quantitative Data Does Not Tell You the Why

Many organizations consider quantitative data to be clear and conclusive. Making decisions based on quantitative data is considered a best practice that organizations either follow or aspire to follow. While in many cases it is better than guesses and hunches, it simply does not tell you the whole story. No matter how much quantitative data you are capturing, you still will not have a full enough picture to make better product decisions. Relying only on quantitative data can ruin the experience as quickly as not using any data at all.


Data is shaped by the UI

As explained in a previous article, quantitative data from analytics tools may not capture all the data that you think it does. Even if you are sure that your analytics tool is properly capturing all the interactions that matter to your organization, interpreting and understanding what that data means is another challenge that requires other kinds of research. You may see that 30% of people pressed a button, but you do not know why the other 70% did not. You may know that 50% of people used one feature, 43% used another, and 39% used yet another, but do you know whether the most-used feature is the one most people need, or simply the easiest one to find?

Years ago, Jensen Harris described this problem in the work his team did redesigning Microsoft Office, which resulted in the now famous Ribbon. He described how the previous design of MS Word, with all its flaws, was actually based on the data. He used the phrase “the data is shaped by the UI” to explain that what is made directly visible to people is what they will use. When his team analyzed the data, usage of features that required digging into panels always trailed features at the top of the menus, which in turn trailed features in the toolbars. If all you used was this quantitative usage data, the conclusion would be that nothing needed to change.

AB Testing is Fool’s Gold

Another type of research that positions itself as being definitive is AB testing. I am referring to the type of AB tests that you launch on your live site to gather statistically significant data for each version. You show one group of people one design, usually the current one, and another group of people a different design. You set a metric tied to the goal or purpose of the design and measure which version performs better. What could be more definitive? Well, again, you do not know why one worked better than the other. All you know is that one design seemed to perform better than the other, at least with regard to the goal that you set.
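To make those mechanics concrete, here is a minimal sketch of the kind of significance check an AB testing tool runs behind the scenes: a two-proportion z-test comparing the conversion rates of the control and the variant. The visitor and conversion counts below are made-up numbers purely for illustration, not figures from any real test.

    # Minimal sketch of an AB test significance check (two-proportion z-test).
    # All counts below are hypothetical, for illustration only.
    from math import sqrt
    from statistics import NormalDist

    visitors_a, conversions_a = 10_000, 310   # control (current design)
    visitors_b, conversions_b = 10_000, 362   # variant (new design)

    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b

    # Pooled rate under the null hypothesis that both designs convert equally.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))

    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")

A small p-value tells you the variant’s lift is unlikely to be random noise, and that is all it tells you: nothing about why visitors converted more, and nothing about the metrics you did not think to measure.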

Now maybe it doesn’t matter to you why one version works better than the other. However, there are major shortcomings to not knowing the why. If the winning design starts to perform poorly later on, what do you do? Most likely you run another AB test, which is just another guess. You think the new version is performing better than the last, when in reality the previous winner is now underperforming. You may think you are optimizing the experience, when in reality you could be turning off half your audience with sub-par designs.

Another problem, again, is that unless you are measuring everything, other factors may be negatively affected. Consider all the variables that could affect conversion and be affected by changes on a product details page: price, stock availability, specifications / details, quality and number of images, the number of options, up-selling, cross-selling, etc. Let’s say your design change includes moving the cross-selling products above the other options, specifications, and up-sells. Conversion rates improve for the cross-sells, but drop for particular items or a whole category of items.

Sure, the AB testing tool should tell you that conversion rate is up overall regardless of other factors (assuming you achieve statistical significance), but you do not know how the change may affect the perception of other parts of the experience. Your visitors may begin to think you do not carry many options or lack specifications, or they may decide it is tedious to browse past the cross-sells and stop coming to your site first. How that affects your conversion over a longer period will never be recognized. The AB test only measures a short-term change, not long-term changes in behavior.

Filling in the Picture

Having quantitative data is good, but as explained above it has many limitations. You need to use other types of research to fill in the picture. To answer the “why” you will need some qualitative data to go along with your quantitative data. There are a variety of qualitative observational methods, from interviews to contextual inquiries to a wide range of usability study methods. I have found a hybrid contextual inquiry / usability study to be a good way of capturing the kind of feedback that fills in many of the gaps.

A contextual inquiry is typically done around existing experiences. Normally this involves asking a person to show you how they currently accomplish a task or goal. It may involve a tool or application that you or your organization created, or it may involve a competitor’s offering. You will also be able to observe any other resources that a person uses while performing the activity; quite often people do things outside of the UI to complete an activity. The goal of a contextual inquiry is to learn how people perform a task or activity and to gain an understanding of why they are engaged in that task or activity.

A usability study is similar, but the tasks are generally more prescribed, particularly if you are having participants use a prototype that is not fully functional. Again, you may observe them using other resources and gain an understanding of what they are thinking while performing the task. The goal of a usability study is to evaluate whether people are able to successfully use an interface to accomplish a task or activity.

I’ve adapted the usability study to start with the more open interview of a contextual inquiry by having people describe their goals and activity. I ask them to show me how they currently do that activity with whatever resources they currently use to accomplish their goal. Then I adapt the first usability task as much as possible to match how they described the task themselves, only now I ask them to use the prototype or alternative design. Using this method helps me learn the language and terms they use, as well as understand their motivations and why they are trying to do something.

References:

Grading on the Curve: Why the UI part 8, Jensen Harris: Retrieved on 2022/11/25
https://docs.microsoft.com/en-us/archive/blogs/jensenh/grading-on-the-curve-why-the-ui-part-8

Usability Testing: Retrieved on 2022/11/25
https://www.usability.gov/how-to-and-tools/methods/usability-testing.html

Contextual Inquiry: Retrieved on 2022/11/25
https://en.wikipedia.org/wiki/Contextual_inquiry
