We thought it was about time we found out for ourselves whether eye tracking is more than pretty heat maps. Can it provide any real value? Is it worth the cost? And how does it compare to standard user testing in terms of overhead, prep, planning, execution, and analysis?
Through our research, we found some real value in our eye tracking testing – both in the results and the process itself.
Retrospective vs concurrent think aloud
Retrospective Think Aloud (RTA) is a method in which users complete their test in silence; afterwards, we replay their eye gaze and ask them to recall their experience. Users think aloud after the test rather than during it (the latter is known as Concurrent Think Aloud).
We wanted to find out if RTA would allow users to behave more naturally, leading to better (and more realistic) test results, thus leading to better design decisions.
Wait, what are you doing to my eyes?
We also had some concerns over what effects the presence of an eye tracker might have on user behaviour. Would users be scared of it, put off by it, or simply in awe of seeing their own gaze replay on screen?
We tested the usability of four news media websites. Users were tasked with accessing football-related content.
But while we were evaluating how the sites performed against each other, we were more concerned with the eye tracking process.
What additional insights can eye tracking provide?
Through eye tracking, we could see that users found content (i.e. football results) quickly on the BBC site, but really struggled to find it on the Irish Times site.
It wasn’t the use of eye tracking that uncovered this issue (we’re pretty sure we would have come to the same conclusion through standard user testing). Instead, the eye tracking method brought two real benefits:
- eye tracking data allowed us to illustrate our findings in a new and compelling way
- RTA – the much bigger benefit
Staying quiet works
The RTA method allowed users to complete the tasks without any interruptions, which meant they completed them more quickly and more naturally than in a standard user test. And the post-test ‘think alouds’ were where we got the insight: the gaze replay acted as a great memory aid for the users and provided good context for their retrospective reasoning.
Because users could complete the test more quickly, we could run more tests, which makes a larger-scale study feasible.
The test design and preparation took a little longer than usual, simply because of the equipment. We suspect that once we’re used to it, the prep time will decrease.
Facilitating the tests felt very different from our normal method, in which we ask many questions as users complete tasks. During the eye tracking tests, by contrast, we simply took notes, which we later raised with the users as they watched their eye-tracking playback.
We quickly learned the value of the pause button (and speed control) during the RTA. The playback was often too fast for users to keep up with, so slowing down or pausing allowed us to explore certain moments in more detail.
While the tests themselves take less time, the post-test analysis may well prove to take longer if we decide to use any of the many analysis tools the software provides.
Our first impressions of eye tracking have been positive. We can see meaningful analysis coming from this type of testing. We are already conducting our next study and will share the results, so watch this space. And please share your own thoughts and experiences with eye tracking.