Establishing a UXR Culture and Infrastructure

Challenge:

How do I establish an infrastructure and subsequent culture for UXR in an organization where they currently do not exist? Further, how do I then continually evolve that infrastructure and culture once they are established?

Being a UX practitioner in EdTech comes with its challenges. Not only are you responsible for designing products used for the facilitation, demonstration, and assessment of learning in schools around the globe, but you more than likely work for an organization that doesn’t place a strong emphasis on UX culture in the first place. For more info on this topic, please see [Insert]. Suffice it to say, figuring out how to establish a UXR-focused infrastructure and culture where they are absent can be a daunting task. I mean, where do you start? Well, let me introduce you to my 5-point framework for building a UXR infrastructure and culture.


5-Point Framework


“UX Researcher’s Fallacy: My job is to learn about end-users. Truth: My job is to help my team learn about end-users.”

I came across this quote back in 2011, and it fundamentally changed my philosophy and subsequent approach to UXR. Instead of seeing myself as some kind of middleman translator or go-to guru on users’ needs, I began to see myself as an engineer responsible for constructing a system that brought the end-user closer to my team. A window, if you will, through which team members could observe and learn about their end-users. Further, for this system to be truly effective, it had to be structured in a way that was relatable – meaning it had to be simple enough that team members with no research experience could participate and build their own research skills.

Much like oil, gold, or diamonds, I see data as something that must be extracted and processed before it becomes anything useful. Therefore, to have useful data, it is important to focus on the overall infrastructure responsible for bringing said data to fruition. The framework breaks this infrastructure down into 5 areas of focus.

1. Collect

Garbage in, garbage out.

By focusing on the mechanisms used to collect data (i.e., methods and tools), one can ensure that the data entering the system is of the highest quality and collected in a way that makes processing easier.

2. Aggregate

Prepping for analysis.

By focusing on the mechanisms used to get data into a centralized location (e.g., spreadsheets, video grids), one can ensure that collected data can be analyzed more efficiently. Further, because some organizations have multiple teams dedicated to collecting data, it’s important to focus on how all those data streams get aggregated together.
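As a hypothetical illustration, here’s a minimal Python sketch of aggregating several teams’ data streams into one master file, assuming each team exports its observations as a CSV with matching columns (the file names here are made up):

```python
# A minimal sketch of the aggregation step: combine several teams'
# CSV exports into one master sheet, tagging each row with its source.
import pandas as pd

# Hypothetical per-team exports; assumes identical column layouts.
streams = ["usability_team.csv", "field_test_team.csv", "survey_team.csv"]

frames = [pd.read_csv(path).assign(source=path) for path in streams]
master = pd.concat(frames, ignore_index=True)
master.to_csv("aggregated_observations.csv", index=False)
```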

3. Analyze

Extracting meaning.

By focusing on the mechanisms – here, really techniques – used to extract meaning from aggregated data (e.g., qualitative data coding, task analysis), one can ensure that findings from research are transformed into a medium suitable for a number of different reporting formats.
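To make the coding idea concrete, here’s a toy Python sketch of qualitative data coding – tagging free-text observations with codes so they can be counted and reported. The codebook and keyword matching are stand-ins; real coding is a human judgment task:

```python
# A toy illustration of qualitative data coding: map free-text notes
# to codes from a (made-up) codebook so findings become countable.
CODEBOOK = {
    "navigation": ["menu", "back button", "lost their place"],
    "data_loss": ["disappeared", "deleted", "lost work"],
}

def code_note(note: str) -> list[str]:
    """Return every code whose keywords appear in the note."""
    note_lower = note.lower()
    return [code for code, terms in CODEBOOK.items()
            if any(term in note_lower for term in terms)]

print(code_note("Student's paragraph disappeared while typing"))
# -> ['data_loss']
```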

4. Report

Making findings digestible.

By focusing on the mechanisms – here, really formats – used to present findings (e.g., videos, slides, executive summaries), one can ensure that any reported information is consumable no matter the audience.

5. Track

Action and transparency.

By focusing on the mechanisms used to track where findings came from and whether they’ve been addressed, one can ensure that processed data result in tangible action (e.g., development, redesigns) that is transparent to the larger organization.
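As a sketch of what tracking could look like, here’s a minimal Python record that links a finding back to its source study and carries a status the wider organization can see. The field names and status values are assumptions, not a prescribed schema:

```python
# A minimal findings tracker: each record ties a finding to its source
# and a visible status, so action (or inaction) stays transparent.
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    summary: str
    source_study: str          # where the finding came from
    status: str = "open"       # e.g. open / in design / fixed / won't fix

backlog = [
    Finding("F-042",
            "Adding a page deletes collaborators' in-progress text",
            "poster-tool-field-test"),
]
```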

Analyzing Collaboration

Challenge:

How do I analyze field tests, usability tests, and observations where collaborative interactions occurred?

The K-12 classroom is a complex environment where a variety of scenarios consisting of different student-device configurations and human-computer interactions occur. One particular scenario that I find quite fascinating is what I refer to as 1:1 Small Group Collaborations. Here, groups of students within a class use their individual devices (1:1) to work collaboratively toward a shared objective.

What’s important to keep in mind in these collaborative scenarios is that the actions of one student can affect the experience of his or her co-collaborators. Further, when issues arise, the students involved – who may or may not be sitting on opposite sides of the room, or in the same room for that matter – most certainly will not be able to make the connection between cause and effect, much less know how to report it. Therefore, EdTech companies that provide collaborative tools must do their due diligence to field test their products, making sure their design directions adhere to users’ expectations for collaborative work and that individual actions do not result in unforeseen bugs that affect the larger group – because dealing with that is the last thing an educator needs.

In this post, I will provide an overview of how to analyze such collaborative interactions in order to identify usability issues and bugs. To do so, I will use a simplified example of a real-life study I conducted that focused on the classroom use of a digital poster creation tool. Before I get into the meat and potatoes, let’s set up a little bit of context.

Let’s say we have an educator who is introducing the digital poster creation tool in question to her 8th grade class. She intends to walk them through the initial setup of a collaborative document and then let the five or six small groups work autonomously.

In order to study the onboarding process and otherwise field test the tool, we use a multi-POV camera system to record the classroom as a whole as well as the individual device interactions of one of the small groups. To keep it simple for this example, this group consists of three students. So, to recap, we’re recording four POVs: the classroom and three student devices.

After collection, we aggregate the recorded files into four video grids for analysis. Here, each grid focuses on one of the four recorded POVs while playing the remaining three in sync for context. For more info on video grids, please see INSERT. Next, we set up a spreadsheet where we will document our observations from the video grids. This spreadsheet should consist of at least three columns: one for our observed [inter]actions (NOTE), one for the corresponding POV (P#), and one for the timestamp at which the [inter]action occurred.
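As a concrete sketch, here’s one way the coding spreadsheet could be seeded in Python – one row per observed [inter]action, with the rows below being hypothetical examples:

```python
# A minimal sketch of the coding spreadsheet: one row per observed
# [inter]action, with the note (NOTE), its POV (P#), and a timestamp.
import csv

rows = [
    # NOTE,                                                P#,   mm:ss
    ("Teacher instructs students to create new document", "RC", "00:45"),
    ("Student creates new document",                      "P1", "01:02"),
]

with open("observations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["NOTE", "P#", "timestamp"])
    writer.writerows(rows)
```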

We bring up the Room Camera (RC) video grid first. While watching this video grid, we’ll only focus on what the teacher says and does. It’s important not to get too caught up in what’s going on in the other grids – each will get its turn in the analysis. Here’s what we extract:

  • Teacher instructs students to create new document
  • Teacher instructs students to invite collaborators
  • Teacher instructs students to each add new page to document
  • Teacher instructs students to add paragraph component to their page
  • Teacher responds to students’ complaints that their work disappeared

In summation, it appears that the teacher was instructing the students on how to set up a collaborative document when she was interrupted by a possible issue experienced by the students.

To gain more insight we move on to the next video grid focusing on the first student participant, P1. Here we extract the following:

  • Student creates new document
  • Student invites collaborators
  • Student adds new page
  • Student adds paragraph component
  • System deletes paragraph component while student enters text

In summation, this student seems to be the one in charge of initiating the collaborative document – probably tapped by the teacher before the recordings began. However, while following the teacher’s instructions to add a paragraph, the student experienced a rather catastrophic issue – work just disappeared. In the actual study this example is based on, the student clicked around for several minutes looking for an “undo” button while muttering some choice phrases under their breath.

To hopefully gain more insight into the issue we review the next video grid, P2. Here we extract the following:

  • Student receives invite and opens document
  • Student adds new page
  • Student adds paragraph component
  • Student adds text to paragraph component

In summation, P2 received the invite from P1 and followed the teacher’s instructions but didn’t seem to experience the same issue that P1 did.

Alright, one more video grid to review. We extract the following from P3:

  • Student receives invite and opens document
  • Student adds new page
  • Student adds paragraph component
  • System deletes paragraph component while student enters text

In summation, P3 received the invite from P1 and followed the teacher’s instructions but unlike P2 this student experienced the same issue as P1. So what gives?

Here’s where our coding format and spreadsheet structure come into play.

Because we timestamped everything, we can sort all our recorded actions for RC, P1, P2, and P3 into chronological order for a step-by-step transcript – which puts everything into a more holistic perspective. Remember, these students are working on a collaborative document – meaning one student’s actions can affect the experience of his or her co-collaborators. What this analysis reveals (and revealed in the real-life study) is a show-stopper of a bug. P2 took a little longer to respond to the teacher’s instructions to add a new page – and did so while the other collaborators were in the middle of entering text into the document. The addition of the new page caused all works-in-progress to be lost.
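For illustration, here’s a minimal Python sketch of that chronology step, assuming the three-column CSV from earlier with mm:ss timestamps:

```python
# A sketch of the chronology step: merge every POV's rows and sort by
# timestamp to produce the step-by-step transcript described above.
import csv

def to_seconds(ts: str) -> int:
    """Convert an mm:ss timestamp to seconds for sorting."""
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + int(seconds)

with open("observations.csv") as f:
    observations = list(csv.DictReader(f))

transcript = sorted(observations, key=lambda row: to_seconds(row["timestamp"]))
for row in transcript:
    print(f'{row["timestamp"]}  {row["P#"]:>3}  {row["NOTE"]}')
```

Read top to bottom, a transcript like this makes the cause-and-effect chain visible in a way no single POV can: P2’s late page-add lands between the other students’ text entries and their “work disappeared” complaints.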