14 September 2022
Your efforts could be backfiring and making your site harder to use, and you’d never know. Measurements are also a great way to justify, to investors, partners and stakeholders, the money you spend on improving your website’s usability. By measuring how usable your site is and setting targets for improvement, you can calculate increases in efficiency, increases in purchases/opt-ins/registrations, and reductions in cost.
The most important thing to keep in mind when measuring usability is to make sure you have a baseline to compare against. If you don’t know how your site performed before you started making changes, it’s impossible to tell whether those changes are helping. You might be increasing the success of your site or product, but by how much? Are you making more money than you’re spending?
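As a rough illustration of that before/after comparison, here is a small Python sketch; the visitor counts, conversion value and cost figures are all invented for the example:

```python
# Hypothetical before/after comparison of a single metric (conversion rate).
# Every figure below is made up for illustration.

baseline_visitors, baseline_conversions = 10_000, 200   # measured before the changes
current_visitors, current_conversions = 10_000, 260     # measured after the changes

baseline_rate = baseline_conversions / baseline_visitors   # 2.0%
current_rate = current_conversions / current_visitors      # 2.6%

extra_conversions = (current_rate - baseline_rate) * current_visitors
value_per_conversion = 40.0    # assumed average value of one conversion
cost_of_changes = 1_500.0      # assumed cost of the usability work

estimated_gain = extra_conversions * value_per_conversion
print(f"Conversion rate: {baseline_rate:.1%} -> {current_rate:.1%}")
print(f"Estimated gain {estimated_gain:.2f} vs cost {cost_of_changes:.2f}")
```

The point is not the arithmetic, which is trivial, but that you can only do it at all if the baseline figures were recorded before you touched anything.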
You have a wide range of measurements to choose from. For instance, ISO 9241 (ergonomics of human-system interaction) covers many aspects of how people work with computer systems, and several of its parts relate directly to usability. One such part, ISO 9241-11, defines usability as:
“The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use”
This is a good general summary of what usability is. Another good way of describing it, especially in terms of interfaces, is ISO 9241-110. It specifies that the dialogue between the user and the system (essentially the interface: the controls that display information to the user and accept input from them) should:
Be suitable for the task
Be self-descriptive
Conform to user expectations
Be suitable for learning
Be controllable
Be tolerant of errors
Be suitable for individualisation
There are other areas of the standard that deal with usability, but these are good starting points.
There are many metrics you could consider when measuring usability; here are some ideas (a short sketch of how a few of them might be computed follows the list):
Number of set tasks completed successfully first time
Number of errors per task
Average time taken to complete each task
Number of incorrect pages visited while trying to complete a task
Average rating of the system in user surveys
Difference between the number of positive and negative descriptors users give when asked to describe how they found the system
Number of tasks that users set themselves and can complete successfully
Number of set tasks a user completes successfully on second and subsequent tests
Number of attempts taken to complete a task successfully
Number of elements a user can successfully identify without prior system knowledge
Number of ‘trunk tests’ completed successfully (Krug, S., 2000)
Number of ‘5-second tests’ completed successfully (Perfetti, C., 2005)
Number of times the system can successfully identify and resolve user errors
Number of times users can successfully complete a task after making a mistake
Number of times users can successfully identify and recover from an error message provided by the system
Number of times users can successfully identify features of the interface without using a manual.
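To make a few of these concrete, here is a minimal Python sketch of how first-time success rate, errors per task and average completion time might be calculated from recorded test sessions. The data structure and field names are invented for illustration, not taken from any particular tool:

```python
# Minimal sketch: computing a few usability metrics from test sessions.
# The records and field names are hypothetical.

from statistics import mean

sessions = [
    # one record per user per set task
    {"task": "checkout", "completed": True,  "first_try": True,  "errors": 0, "seconds": 95},
    {"task": "checkout", "completed": True,  "first_try": False, "errors": 2, "seconds": 210},
    {"task": "register", "completed": False, "first_try": False, "errors": 3, "seconds": 300},
    {"task": "register", "completed": True,  "first_try": True,  "errors": 1, "seconds": 120},
]

# Share of attempts completed successfully at the first try
first_time_success = sum(s["completed"] and s["first_try"] for s in sessions) / len(sessions)

# Average number of errors made per task attempt
errors_per_task = mean(s["errors"] for s in sessions)

# Average time taken, counting only attempts that were completed
avg_time_completed = mean(s["seconds"] for s in sessions if s["completed"])

print(f"First-time success rate: {first_time_success:.0%}")
print(f"Average errors per task: {errors_per_task:.1f}")
print(f"Average time to complete (successful attempts): {avg_time_completed:.0f}s")
```

Whichever metrics you pick, record the same fields in your baseline tests and in every later round so the numbers stay comparable over time.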