Usability Testing

Usability testing is the task-based assessment of a design direction. Here, participants execute predefined tasks that require them to interact with various elements and features of the design. Performance can be assessed in a variety of ways, depending on what you are trying to find out or measure.

1-on-1

Example: Student | Dynamic Geometry Tools

Moderated 1-on-1s are excellent when your experimental design requires a great deal of control and when what you are trying to measure is quantitative in nature (such as time on task). Here, each participant executes the same predefined tasks on their own. Results are then aggregated into an average that isn't skewed by participants influencing one another.
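
As a minimal sketch of what that aggregation might look like (the participant IDs and timings below are hypothetical, purely for illustration):

```python
from statistics import mean, stdev

# Hypothetical time-on-task results, in seconds, from five moderated
# 1-on-1 sessions; every participant attempted the same predefined task.
time_on_task = {
    "P1": 74.2,
    "P2": 88.0,
    "P3": 65.5,
    "P4": 91.3,
    "P5": 70.8,
}

times = list(time_on_task.values())

# Each participant worked alone, so no one influenced anyone else's
# timing and the mean is a fair summary of group performance.
print(f"Mean time on task: {mean(times):.1f}s")
print(f"Std deviation: {stdev(times):.1f}s (n={len(times)})")
```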

Group

Example: Students | Studio Collaboration

Example: Principals | Usage Reporting Dashboards

Moderated group usability tests should be used when:

  1. The product / feature you are testing is collaborative in nature and/or requires multiple end-users to complete a task.
  2. You’re not as concerned with quantifying individual task performance as you are with getting a grasp on the product / feature’s general learnability.

Number two is not as self-explanatory, so I’ll provide more context. I have found that when you sit 4-5 students or educators together at a table (each with their own device) and present them with loosely defined tasks to complete on their own, they naturally gravitate towards helping each other complete the tasks. The benefit is the highly qualitative conversations and interactions that occur between participants.

Remote

Example: Teacher | Classrooms Prototype

Remote, unmoderated usability tests allow participants to complete specified tasks on their own time, in a setting convenient to them. In other words, you designate the tasks and participants complete them wherever they happen to be. These tests allow data to be collected without a researcher needing to be present.

However, I have found that many of the online tools for remote usability testing fall short in two areas: getting participants up and running, and providing useful outputs/artifacts to the researcher. For example, if the steps just to get into the test are too complicated, participants will say f-it and drop off. Further, many online tools will try to sell you their ability to analyze the tests for you. Sorry, there is no one-size-fits-all analysis tool, as interfaces and the objectives they allow users to accomplish vary in complexity. User behavior on its own is extremely complex and, if you consider yourself a good researcher, you should be doing the analysis yourself. Of course, this requires the tool to provide you with a recording of each participant’s session and interactions. Seems like a no-brainer, but you’d be surprised how many tools out there do not provide this crucial artifact.