Project Details
This project was done as part of the lead-up to the launch of new streaming options on a website that featured media content that could be rented or purchased. I built a set of prototypes and mock-ups to explore various ways that browsing for media content, available for rent or purchase on disc or by streaming, could be organized. I then set up remote unmoderated user testing sessions and analyzed the results.
The Team
- 1 UX Researcher/Prototyper
- Stakeholders included: Sr Manager of Marketing, Director of UX, Sr Director of Product Management, Chief Marketing Officer
Activities and Deliverables
- Prototype
- Presentation of Research Results
Project Samples
Prototypes
Presentation
Objectives
I conducted this research project to help the team decide how we should organize and present the navigation for media content that was available for rent or purchase on disc (DVD/Blu-ray) and/or as an online stream.
There was already a recommendation from one of the senior marketing managers, and I solicited input from other members of the organization to come up with a few additional options. After setting up the research in a remote unmoderated testing tool, I consulted with the Director of UX and set it to gather responses from at least 100 participants.
The prototypes were built in HTML/CSS/JavaScript. I started with HTML5 Boilerplate as a base and then created a low-fidelity layout that would not distract from the navigation elements. In previous testing I had seen people immediately begin engaging with content on the page, so I avoided the colorful movie/TV titles that would normally appear; a rough sketch of that layout follows.
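The snippet below is a minimal sketch of that low-fidelity approach, not the original source: the navigation stays prominent while plain grey blocks stand in for title artwork, and the menu labels shown are hypothetical.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Browse prototype (low-fidelity sketch)</title>
  <style>
    /* Neutral greys so the navigation, not the content, draws attention */
    .placeholder { display: inline-block; width: 120px; height: 180px; margin: 8px; background: #ddd; }
    nav a { margin-right: 16px; }
  </style>
</head>
<body>
  <nav>
    <!-- The menu structure varied per prototype; these labels are illustrative -->
    <a href="#">Movies</a>
    <a href="#">TV</a>
    <a href="#">Rent</a>
    <a href="#">Buy</a>
    <a href="#">Streaming</a>
  </nav>
  <main>
    <!-- Plain blocks instead of movie/TV artwork, so titles don't pull attention from the navigation -->
    <div class="placeholder"></div>
    <div class="placeholder"></div>
    <div class="placeholder"></div>
  </main>
</body>
</html>
```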
I set up the test so that each menu would be compared to the control. There were two reasons for doing it this way. First, it would not be reasonable to have people perform tasks across five experiences. Second, I was interested in testing the protocol of remote unmoderated testing itself; in particular, I wanted to see how much, if at all, results might vary when the same test was repeated.
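To illustrate the design, the counterbalancing logic looks roughly like the JavaScript below. This is a sketch only: the actual 50/50 assignment was configured in the remote unmoderated testing tool, not in the prototype code, and the variant names are hypothetical.

```javascript
// Hypothetical candidate menus; each one is tested against the same control.
const candidateMenus = ['genre', 'format', 'alphabetical', 'combined'];

function assignSession(participantIndex) {
  // Rotate through the candidates so every menu gets compared to the control.
  const candidate = candidateMenus[participantIndex % candidateMenus.length];
  // 50/50 split on which design the participant sees first, to avoid order effects.
  const controlFirst = Math.random() < 0.5;
  return controlFirst ? ['control', candidate] : [candidate, 'control'];
}

console.log(assignSession(0)); // e.g. ['control', 'genre'] or ['genre', 'control']
```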
Lessons Learned
One of the surprises in the results was the amount of variation in people's success with the control. In theory, the control results should have been similar across the groups. This was clearly not the case. It is possible that the particular variant a group was paired with affected how people responded to the control, but I had set a 50/50 split for which design was presented first to avoid such order effects. It is also possible that the people within a particular group were simply less proficient. In any case, it underscores that the often-quoted claim that five users are enough for usability testing is not accurate. Despite this variance, we did see that the format-based menu performed poorly enough to be ruled out from further research.