Collaborative Usability Tests

Challenge:

How do I conduct usability tests on products intended for the complex, multi-user, collaborative environment that is the modern-day, blended learning classroom?

Usability tests are task-based assessments of a design direction: participants execute predefined tasks that require them to interact with various elements and features of that design. Performance can be assessed in a variety of ways depending on what you are trying to find out or measure. In K-12 education, however, the way users interact with a design direction can get a little complex, as products are not only intended to be used by a single person working individually but also by 20+ people working in parallel or in conjunction to achieve a common objective. That is why I break usability tests down into three categories based on the nature of what is to be tested:

Individual
Group
Collaborative


Individual

Moderated

Example: Student | Dynamic Geometry Tools

Moderated 1-on-1s are excellent when your experimental design requires a great deal of control and when what you are trying to measure is quantitative in nature (such as time on task). Here, each participant executes the same predefined tasks on their own, and the results are used to compute an unbiased average across participants.
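As a rough sketch of what averaging a quantitative measure like time on task can look like, here is a minimal Python snippet that summarizes hypothetical per-participant timings. All participant IDs and numbers below are made up for illustration.

```python
# Hypothetical time-on-task results (in seconds) from moderated
# 1-on-1 sessions. IDs and values are illustrative only.
from statistics import mean, stdev

time_on_task = {
    "P1": 74.2,
    "P2": 61.8,
    "P3": 88.5,
    "P4": 69.1,
    "P5": 79.4,
}

# Average across participants, plus the spread, which is worth
# reporting alongside the mean so one slow or fast participant
# doesn't quietly dominate the story.
avg = mean(time_on_task.values())
spread = stdev(time_on_task.values())
print(f"Average time on task: {avg:.1f}s (SD {spread:.1f}s)")
```

Reporting the standard deviation next to the average is a small habit that keeps the "unbiased average" honest when the sample is only a handful of participants.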

Unmoderated (Remote)

Example: Teacher | Classrooms Prototype

Unmoderated, remote usability tests allow participants to complete specified tasks on their own time, in a setting convenient to them: you designate the tasks, and they complete them someplace else. These tests allow data to be collected without a researcher present, which makes them great for testing with educators, who are some of the most time-strapped people on the planet.

Unfortunately, I have found that many of the online tools for remote usability testing fall short in two areas: (1) getting participants up and running and (2) providing useful outputs and artifacts to the researcher. For example, if the steps just to get into the test are too complicated, participants will say f-it and drop off. Further, many online tools will try to sell you on their ability to analyze the tests for you. Sorry, but there is no one-size-fits-all analysis tool, as interfaces and the objectives they allow users to accomplish vary in complexity. User behavior on its own is extremely complex, and if you consider yourself a good researcher, you should be doing the analysis yourself. Of course, this requires the tool to provide you with a recording of each participant's session and interactions. It seems like a no-brainer, but you'd be surprised how many tools out there do not provide this crucial artifact.


Group

Example: Principals | Usage Reporting Dashboards

Group Usability Tests are great when you're not as concerned with quantifying individual task performance (e.g., time on task) but rather with getting a grasp on a design direction's general learnability. I have found that when you sit 4-5 students or educators together at a table (each with their own device) and present them with loosely defined tasks to complete on their own, they naturally gravitate toward helping each other complete the tasks. The benefit here is the highly qualitative conversations and interactions that occur between the participants.


Collaborative

Example: Students | Studio Collaboration

Collaborative Usability Tests (my favorite) are great when the design direction you are testing is geared toward collaboration and/or requires multiple end users to complete a task. In K-12 education, collaboration is one of the coveted 21st Century Skills, so having a product that does it right is important.