“If you don’t collect any metrics, you’re flying blind. If you collect and focus on too many, they may be obstructing your field of view.”
― Scott M. Graffius
If you are designing web applications or enterprise software, you might find the following UX metrics useful:
One of the most common metrics, it simply shows the percentage of tasks that users complete correctly.
To measure the success rate, each task needs a clearly defined goal, such as completing a registration form or publishing a post. So before collecting data, it is important to define what constitutes success.
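As a minimal sketch of this calculation, assuming binary pass/fail results per participant (the data and names below are illustrative, not from any real study):

```python
# Hypothetical usability-test results: participant -> whether the task
# was completed successfully. Values here are made up for illustration.
results = {
    "user_1": True,
    "user_2": True,
    "user_3": False,
    "user_4": True,
}

def success_rate(results):
    """Percentage of participants who completed the task successfully."""
    return 100 * sum(results.values()) / len(results)

print(f"Task success rate: {success_rate(results):.0f}%")  # 3 of 4 -> 75%
```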
This metric represents the time (in minutes and seconds) a user needs to complete a task successfully.
The most used variations of this measurement are:
• average task completion time
• average task failure time
• overall average task time (for both failure and success)
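The three variations above can be sketched as follows, assuming each attempt is recorded as a duration in seconds plus a success flag (the sample data is invented for illustration):

```python
# Each tuple is (seconds taken, task succeeded); values are illustrative.
attempts = [(95, True), (120, True), (210, False), (80, True), (300, False)]

def mean(values):
    """Arithmetic mean; 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

success_times = [t for t, ok in attempts if ok]
failure_times = [t for t, ok in attempts if not ok]

avg_completion = mean(success_times)          # average task completion time
avg_failure = mean(failure_times)             # average task failure time
avg_overall = mean([t for t, _ in attempts])  # overall average task time
```

Reporting the three numbers separately matters: averaging successes and failures together can hide the fact that failing users often give up either very quickly or very late.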
Feature usage is a set of metrics that track how users interact with a particular product feature. The most common feature usage measures are:
• total number of times people are using the feature
• number of unique users who are using the feature
• percentage of your total active users who are using the feature
• average number of times per day users are using the feature
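The four measures above can be computed from a simple event log. The sketch below assumes one log entry per feature use, recorded as a (user, day) pair; the data and the active-user count are illustrative assumptions:

```python
from collections import Counter

# Hypothetical event log: one entry per use of the feature.
events = [
    ("alice", "2024-05-01"), ("alice", "2024-05-01"),
    ("bob", "2024-05-01"), ("alice", "2024-05-02"),
    ("carol", "2024-05-02"),
]
total_active_users = 10  # assumed number of active users in the period

total_uses = len(events)                                 # total times used
unique_users = len({user for user, _ in events})         # distinct users
pct_of_active = 100 * unique_users / total_active_users  # share of actives
uses_per_day = Counter(day for _, day in events)
avg_uses_per_day = total_uses / len(uses_per_day)        # mean uses per day
```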
Tip: observe what users do before they engage with the feature and the context of their usage. Users might avoid a feature because it is hard to discover or access, not necessarily because they don't need it.
Present the results in a chart over a period of time so that trends become visible.
When observing users in a usability test, record every time an error occurs, even if the same error occurs multiple times for the same user.
Errors, unlike task completion metrics, can occur more than once per user and per task. This complicates the analysis, since you cannot easily compute a ratio as with other task metrics. For example, if a user committed 5 errors on a single task, simply dividing 5 by 1 does not give a meaningful rate.
One way to handle this is to treat errors as binary data:
• 1 when a user committed at least 1 error
• 0 when a user committed no errors.
This approach leaves out a part of the information, but for tasks and applications without too many errors, this may be sufficient.
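A small sketch of this binary encoding, assuming you have raw error counts per user for a given task (the counts below are made up):

```python
# Hypothetical raw error counts per user for one task.
errors_per_user = {"user_1": 5, "user_2": 0, "user_3": 1, "user_4": 0}

# Binary encoding: 1 if the user committed at least one error, else 0.
binary = {user: int(count > 0) for user, count in errors_per_user.items()}

# Share of users who committed at least one error on the task.
error_rate = 100 * sum(binary.values()) / len(binary)  # 2 of 4 -> 50%
```

Note how the encoding discards the difference between user_1 (5 errors) and user_3 (1 error); that is exactly the information loss the paragraph above accepts as a trade-off.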
This is a great metric for analyzing how product changes are experienced by users. For example, if the number of support tickets increases after a release, you can use this information to investigate the underlying issues.
Linking support tickets to features of your product can help you identify issues without any additional effort. Always keep an eye on the support backlog.
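Linking tickets to features can be as simple as tagging each ticket and counting per tag. A minimal sketch, assuming a hypothetical list of feature tags (the tags and counts are illustrative):

```python
from collections import Counter

# Hypothetical support tickets, each tagged with the feature it concerns.
tickets = ["search", "checkout", "search", "search", "profile", "checkout"]

by_feature = Counter(tickets)
# Features generating the most tickets are candidates for a UX review.
top = by_feature.most_common(2)  # [("search", 3), ("checkout", 2)]
```

Tracking these counts per release makes it easy to spot which feature a post-release spike in tickets is coming from.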