Easy as the Single Ease Question
Visiting a website or using an app involves a series of small tasks, such as navigating to the sports articles or checking email. Some tasks are easier than others, but we’ve all experienced frustrating tasks we couldn’t figure out how to complete, like changing the method for receiving lab results. Fortunately, there’s an easy way to accurately gauge user perception of a task: the Single Ease Question (SEQ).
In a usability test, the Single Ease Question is a seven-point Likert-scale question asked immediately after a user attempts a task. Page Laubheimer, a User Experience Specialist with Nielsen Norman Group, notes two benefits of the SEQ:
- It lets you compare tasks and identify which parts of your interface users perceive as most problematic.
- Because the task was just completed, it is fresh in the user’s mind, so the rating gives a clear indication of the user’s attitude toward the experience.
Below is an example of the SEQ:

Overall, this task was:
Very Difficult 1 2 3 4 5 6 7 Very Easy
The other common way to phrase an SEQ is, “How easy or difficult was this task?” According to Jeff Sauro, Ph.D., who first researched the Single Ease Question a decade ago, the results between the two phrasings are “indistinguishable.”
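To make the mechanics concrete, here is a minimal sketch of administering the SEQ after each task in a console-moderated session. The wording follows the example above; the helper and task names are hypothetical, and a real study would typically collect ratings through a survey tool instead.

```python
# Minimal sketch: ask the SEQ after each task in a console session.
# The task names and the ask_seq helper are hypothetical illustrations.

def ask_seq(task_name: str) -> int:
    """Ask the Single Ease Question and return a rating from 1 to 7."""
    print(f"\nTask: {task_name}")
    print("Overall, this task was:")
    print("1 = Very Difficult ... 7 = Very Easy")
    while True:
        answer = input("Your rating (1-7): ").strip()
        if answer.isdigit() and 1 <= int(answer) <= 7:
            return int(answer)
        print("Please enter a whole number from 1 to 7.")

if __name__ == "__main__":
    tasks = ["Find the sports articles", "Change how you receive lab results"]
    ratings = {task: ask_seq(task) for task in tasks}
    print(ratings)
```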
Sauro and Joseph Dumas, a User Experience Consultant, conducted a study comparing three one-question, post-task rating types; twenty-six participants attempted the same five tasks with two different software applications. They found that the Single Ease Question performed well:
“The popular Likert question [SEQ] was easy for participants to use, was highly correlated with other measures and, with a seven-point format, did not show a ceiling effect for these tasks and products. What’s more, it was easy for test administrators to set up and administer in electronic form” (Sauro & Dumas, 2009).
Sauro also notes that research has shown that “SEQs are reliable across prototype fidelity levels (from low to high).” Fidelity refers to the level of detail or “realism” of a prototype. In other words, the SEQ is versatile (or technology agnostic). It can be used for mobile devices, websites, software, and paper prototypes.
SEQ in Practice
Adam Wright of Brigham and Women’s Hospital and Harvard Medical School set out to develop a new tool for reviewing microbiology laboratory results. His group conducted a “scenario-based usability evaluation” comparing the new tool with the existing one and included an SEQ for each scenario. The higher the score, the easier the task was to complete; the new tool averaged 5.65, compared to 3.78 for the older tool (Wright et al., 2018).
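As a rough illustration of how such a comparison is scored, the sketch below averages one SEQ rating per scenario for each tool. The ratings are invented placeholders, not Wright et al.’s data.

```python
# Sketch: mean SEQ per tool across scenarios, as in a two-tool comparison.
# The ratings below are made-up placeholders, not the study's data.
from statistics import mean

ratings = {
    "new tool": [6, 5, 7, 5, 6],        # one SEQ rating per scenario
    "existing tool": [4, 3, 5, 4, 3],
}

for tool, scores in ratings.items():
    print(f"{tool}: mean SEQ = {mean(scores):.2f}")
```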
User Responses
The following are some quick points on user responses, based on Sauro’s research.
- Users rate a task as more difficult if it takes a long time to complete or if they fail to complete it.
- The average score for an SEQ is approximately 5.5.
- Users respond differently. Some rate everything a 6 or 7, and about fourteen percent rate a task as “very easy” even after they’ve failed to complete it. Thankfully, these differences average out across tasks and products, as the sketch after this list illustrates.
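Here is that sketch: it averages each task’s ratings across users, so individual rating styles wash out, and flags tasks that fall below Sauro’s approximate 5.5 average. The tasks and ratings are illustrative only.

```python
# Sketch: average each task's SEQ ratings across users and flag tasks that
# fall below the ~5.5 average Sauro reports. All data here is illustrative.
from statistics import mean

BENCHMARK = 5.5  # approximate average SEQ score across tasks and products

task_ratings = {
    "check email": [7, 6, 6, 7, 5],
    "change lab-result delivery": [3, 4, 2, 7, 3],  # one lenient rater
}

for task, scores in task_ratings.items():
    avg = mean(scores)
    status = "below benchmark -- investigate" if avg < BENCHMARK else "ok"
    print(f"{task}: {avg:.1f} ({status})")
```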
The Single Ease Question is a versatile, simple, and effective way to measure user attitudes. True to its name, the Single Ease Question is easy. It’s easy to answer. It’s easy to administer. And it’s easy to score. One final tip: following up the SEQ by asking users who give a rating of 5 or lower “Why?” can help diagnose problem areas, as the sketch below illustrates.
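One way to wire in that follow-up, assuming ratings have already been collected as sketched earlier (the helper name is hypothetical, not part of any standard API):

```python
# Sketch: ask "Why?" whenever a collected SEQ rating is 5 or lower.
# followup_if_needed is a hypothetical helper for illustration only.

def followup_if_needed(task_name: str, rating: int):
    """Return the participant's 'Why?' comment for low ratings, else None."""
    if rating <= 5:
        return input(f'You rated "{task_name}" a {rating}. Why? ')
    return None

if __name__ == "__main__":
    comment = followup_if_needed("change lab-result delivery", 4)
    print(comment)
```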
References
Laubheimer, P. (2018, February 11). Beyond the NPS: Measuring perceived usability with the SUS, NASA-TLX, and the single ease question after tasks and usability tests. Retrieved from https://www.nngroup.com/articles/measuring-perceived-usability/
Sauro, J. (2010, March 2). If you could only ask one question, use this one. Retrieved from https://measuringu.com/single-question/
Sauro, J. (2012, October 30). 10 things to know about the single ease question (SEQ). Retrieved from https://measuringu.com/seq10/
Sauro, J. (2018, October 30). Using task ease (SEQ) to predict completion rates and times. Retrieved from https://measuringu.com/tag/seq/
Sauro, J., & Dumas, J. S. (2009). Comparison of three one-question, post-task usability questionnaires. Retrieved from https://measuringu.com/wp-content/uploads/2017/07/Sauro_Dumas_CHI2009.pdf
Wright, A., Neri, P. M., Aaron, S., Hickman, T. T., Maloney, F. L., Solomon, D. A., . . . Zuccotti, G. (2018). Development and evaluation of a novel user interface for reviewing clinical microbiology results. Journal of the American Medical Informatics Association: JAMIA, 25(8), 1064–1068. doi:10.1093/jamia/ocy014