Measuring Efficiency during Usability Testing

Recently, most of my work has been developing enterprise software and web applications. Because I'm building applications that employees spend their whole workday using, productivity and efficiency matter. This information can be uncovered during usability testing.

The simplest way to capture effort during usability testing is to track the actions or steps necessary to complete a task, usually by counting page views or clicks. Whichever metric you choose should be meaningful and easy to count, either with an automated tool or from video playback.
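
If your testing setup logs events, these counts can be tallied with a short script. Here's a minimal sketch in Python, assuming a hypothetical event log of (user, task, event) records; the log format and field names are illustrative, not from any particular tool.

    from collections import Counter

    # Hypothetical event log: (user, task, event) records exported from an
    # analytics tool or transcribed from video playback.
    events = [
        ("user1", "create_invoice", "page_view"),
        ("user1", "create_invoice", "page_view"),
        ("user2", "create_invoice", "page_view"),
        ("user2", "create_invoice", "page_view"),
        ("user2", "create_invoice", "page_view"),
    ]

    # Count page views per (user, task) pair.
    views = Counter((user, task) for user, task, event in events
                    if event == "page_view")

    for (user, task), count in sorted(views.items()):
        print(f"{user} / {task}: {count} page views")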

There are two ways to examine the data: comparing one system to another, and comparing the users' average performance to the optimal performance.

Compare One App to Another

Use this approach to compare how many steps a task takes in the new application vs. the old one. You'll compare the "optimal paths" of both systems and see which requires fewer steps. This doesn't require usability test participants and can be done at any time. It can be helpful to present this information alongside a comparative time study, since it may become obvious that App A was faster than App B because it had fewer page views.

Compare the Users' Average Path to the Optimal Path

To do this, you'll compare the average click count or page views per task across all users in your usability study to the optimal path for the system. The optimal path should be the expected "best" path for the task.

More than simply reporting efficiency, comparing average performance to optimal performance can uncover usability issues. For example, is there a pattern of users deviating from the "optimal path" scenario in a specific spot? Was part of the process unaccounted for in the design, or could the application benefit from more informed design choices?
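
If you record each participant's path as an ordered list of screens, those deviation points can even be located programmatically. The sketch below is a hypothetical illustration, not part of the original method; the screen names and paths are made up.

    from collections import Counter

    # Hypothetical data: each path is the ordered list of screens a user visited.
    optimal_path = ["home", "search", "results", "detail", "confirm"]
    user_paths = [
        ["home", "search", "results", "detail", "confirm"],
        ["home", "search", "filters", "results", "detail", "confirm"],
        ["home", "search", "filters", "results", "detail", "confirm"],
    ]

    def first_deviation(path, optimal):
        """Return (index, screen) where a path first leaves the optimal path."""
        for i, (actual, expected) in enumerate(zip(path, optimal)):
            if actual != expected:
                return i, actual
        return None

    # Tally where users deviate; a cluster at one step suggests a design issue there.
    deviations = Counter(d for d in (first_deviation(p, optimal_path)
                                     for p in user_paths) if d is not None)
    print(deviations)  # Counter({(2, 'filters'): 2})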

Here's the process I use to calculate efficiency against the optimal path benchmark. 

  1. Count the clicks or page views for the optimal path.

  2. Count the clicks or page views for a task for each user.

  3. Exclude failed tasks.

  4. Take the average of the users' values (Excel: =AVERAGE or Data > Data Analysis* > Descriptive Statistics).

  5. Calculate the confidence interval of the users' values (Excel: Data > Data Analysis* > Descriptive Statistics).

  6. Compare to the optimal path benchmark and draw conclusions.

*Excel for Mac does not include the Data Analysis package. I use StatPlus instead. 
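
If you'd rather script steps 4 through 6 than run them in Excel, here's a minimal sketch in Python using SciPy. The click counts are made up for illustration; only the calculation mirrors the process above.

    from statistics import mean
    from scipy import stats

    # Hypothetical click counts per user for one task (failed tasks excluded).
    clicks = [12, 9, 15, 11, 14, 10, 13, 16]
    optimal = 8  # click count for the optimal path

    avg = mean(clicks)

    # 95% confidence interval using the t-distribution (suits small samples).
    ci_low, ci_high = stats.t.interval(0.95, df=len(clicks) - 1,
                                       loc=avg, scale=stats.sem(clicks))

    print(f"Users' average: {avg:.1f} clicks (95% CI: {ci_low:.1f} to {ci_high:.1f})")
    print(f"Optimal path:   {optimal} clicks")

If the optimal count falls below the lower bound of the confidence interval, you can be reasonably confident users are taking extra steps, and it's worth asking where and why.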

Reference

Measuring the User Experience by Tom Tullis and Bill Albert ©2008
