The Mobius Camera Hack

Challenge:

How do I document the many human-computer and human-human interactions that occur during a classroom session? How do I do this as non-invasively and cost-effectively as possible?

Post In Progress


Cut the warranty sticker. Note, this voids the warranty.

Remove the screws separating the top and bottom of the casing.


Remove the tension screw in the lens component. This will allow you to adjust the focus of the lens.


Establishing a UXR Culture and Infrastructure

Challenge:

How do I establish an infrastructure and subsequent culture for UXR in an organization where they currently do not exist? Further, how do I continually evolve that infrastructure and culture once they are established?

Being a UX practitioner in EdTech comes with its challenges. Not only are you responsible for designing products used for the facilitation, demonstration, and assessment of learning in schools around the globe, but you also more than likely work for an organization that doesn’t place a strong emphasis on UX culture in the first place. For more info on this topic, please see [Insert]. Suffice it to say, figuring out how to establish a UXR-focused infrastructure and culture where they are missing can be a daunting task. I mean, where do you start? Well, let me introduce you to my 5-point framework for building a UXR infrastructure and culture.


5-Point Framework


“UX Researcher’s Fallacy: My job is to learn about end-users. Truth: My job is to help my team learn about end-users.”

I came across this quote back in 2011 and it fundamentally changed my philosophy and subsequent approach to UXR. Instead of seeing myself as some kind of middleman translator or go-to guru on users’ needs, I began to see myself as an engineer responsible for constructing a system that brought the end-user closer to my team. A window, if you will, through which team members could observe and learn about their end-users. Further, for this system to be truly effective, it had to be structured in a way that was relatable – meaning, it had to be simple enough that team members with no research experience could participate and build their own research skills.


Much like oil, gold, or diamonds, I see data as something that must be extracted and processed before it becomes useful. Therefore, to have useful data, it is important to focus on the overall infrastructure responsible for bringing said data to fruition. The framework breaks this infrastructure down into 5 areas of focus.

1. Collect

Garbage in, garbage out.

By focusing on the mechanisms used to collect data (i.e. methods and tools), one can ensure that the data entering the system is of the highest quality and collected in a way that makes processing easier.

2. Aggregate

Prepping for analysis.

By focusing on the mechanisms used to get data into a centralized location (i.e. spreadsheets, video grids, etc.), one can ensure that collected data can be analyzed more efficiently. Further, because some organizations have multiple teams dedicated to collecting data, it’s important to focus on how all those data streams get aggregated together.

3. Analyze

Extracting meaning.

By focusing on the mechanisms (here, really techniques) used to extract meaning out of aggregated data (i.e. qualitative data coding, task analysis, etc.), one can ensure that research findings are transformed into a medium suitable for a number of different reporting formats.

4. Report

Making findings digestible.

By focusing on the mechanisms (here, really formats) used to present findings (i.e. videos, slides, executive summaries, etc.), one can ensure that any reported information is consumable no matter the audience.

5. Track

Action and transparency.

By focusing on the mechanisms used to track where findings came from and whether they’ve been addressed, one can ensure that processed data results in tangible action (i.e. development, redesigns, etc.) that is transparent to the larger organization.

The Index Card Hack

Challenge:

How do I get a classroom of 20+ students up and running with the materials I’m testing while also capturing each student’s demographic information, feedback, survey responses, etc., all within a 30-minute class period?

A design direction should always be tested in the environment in which it is intended to be used – and when it comes to educational technology, that environment is quite often a K-12 classroom. Now, I’m not sure when any of you were last in a K-12 classroom, but getting 20+ students on the same page with anything is like herding cats into a wet bag – all it takes is one student getting off track to bring the whole operation to a standstill. Suffice it to say, one of the challenges I faced when working in this industry was figuring out how to get a classroom of students up and running with the materials being tested while also collecting each student’s demographic info, written feedback, post-test survey responses, etc., all before the bell rings. Luckily, over the years I’ve learned how to do this quite efficiently using a method I’ve dubbed the Index Card Hack.


Schematic: The Index Card Hack

  1. Student types in short URL to access the demographics form
  2. Student inputs his/her unique ID# and fills out form
  3. Student submits demographics form and is redirected to the looping feedback form
  4. Student’s ID# is automatically filled into the ID field in feedback form
  5. Student’s ID# and demographic information create a new record in the demographics tab of the spreadsheet
  6. Student types in short URL to access the design direction (open in new tab)
  7. Student inputs feedback while interacting with the design direction
  8. Student submits feedback form and the form redirects to itself
  9. Student’s ID# is automatically filled into the ID field when the form redirects
  10. Student’s ID# and feedback create a new record in the feedback tab of the spreadsheet
  11. Student continues to submit feedback as needed while interacting with the design direction



My solution:

Hand each participant an index card with a unique ID# on the front and a list of URLs on the back. Here, each index card is kind of like a key the students use to access the study’s forms and design directions. However, running in the background is a system that leverages the data-passing functionality found in Wufoo (online forms) and Zapier (web app connection and task automation) to collect the data entered into those forms and automatically aggregate it into a centralized location – typically a Google Spreadsheet. Further, by having the students use the unique ID# as their personal identifier, all their submitted data (i.e. their demographic info, feedback, survey responses, etc.) gets tied together on the backend without the need for any personally identifying information.

This is how the Index Card Hack plays out in real life:

I walk into a classroom with a stack of index cards that each have a unique, two-character, alphanumeric ID# on the front and a list of URLs on the back. Here, the same list of URLs is on each card. Further, I will have used tinyurl.com’s custom alias feature to convert what would normally be lengthy URLs into ones that are easier for the students to read and type.
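
Side note: if you’re printing a lot of cards, generating unique IDs by hand gets old fast. Here’s a minimal sketch (in Python, with placeholder URLs – swap in your own tinyurl aliases) of one way to batch-generate unique two-character alphanumeric IDs for the fronts of the cards:

    # Sketch: batch-generate unique two-character alphanumeric IDs for the cards.
    # The URLs below are placeholders -- swap in your own tinyurl aliases.
    import random
    import string

    URLS = [
        "1. Demographics: https://tinyurl.com/your-demo-alias",
        "2. Design direction: https://tinyurl.com/your-design-alias",
    ]

    def make_ids(count, length=2, seed=None):
        """Return `count` unique uppercase alphanumeric IDs of the given length."""
        rng = random.Random(seed)
        pool = string.ascii_uppercase + string.digits
        ids = set()
        while len(ids) < count:
            ids.add("".join(rng.choice(pool) for _ in range(length)))
        return sorted(ids)

    if __name__ == "__main__":
        for card_id in make_ids(30):  # ~30 cards comfortably covers a 20+ student class
            print(f"FRONT: {card_id}")
            for url in URLS:
                print(f"  BACK: {url}")
            print("-" * 40)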

I hand each student an index card and instruct them to enter the 1st URL [demographics] into their browser. This takes the students to a demographics form which asks for their ID#, age, gender, class period, and anything else I want to collect. No personally identifying info though – that’s the point of the unique ID#.

I instruct the students to fill out the form and to use the ID# on the front of their card for the field labeled ID# in the form.

I instruct the students to submit the form. This (1) sends their demographic info to a spreadsheet I set up during preparation to receive all submitted data and (2) redirects the students to a second form designed to collect open-ended feedback. Here, the students’ unique ID#s are already filled in at the top. At this point, I instruct the students to stop what they’re doing.

I instruct the students to open a new tab in their browser and to enter the 2nd URL [design direction]. This takes the students to the material with which they will be interacting. Note, if the material requires a password or some form of credentials to access, then that info will be on the index card as well.

I instruct the students to interact with the material in a manner based on the parameters of the study. This could be anything from completing specific tasks to free exploration to doing what the teacher tells them. I also instruct the students to use the feedback form in the first tab to submit any thoughts, feedback, issues, etc. they have while interacting with the material – and to submit each entry separately as they come up. Two things happen when a student submits the form: (1) the feedback and the student’s ID# get sent to the same linked spreadsheet and (2) the form resets itself but keeps the ID# filled in, allowing for rolling submissions.

Now, you’re probably thinking, “Whelp, this seems like a lot of work to set up…”. Yes, yes it is. However, I assure you the legwork pays off in spades as you’ll be able to collect exponentially more information in less time and with less effort. Plus, I’m going to walk you through how to do it – and once you get the hang of the basics, you’ll naturally see how to adjust this hack to work for your specific needs. We’ll start by setting up our forms.

Forms

There are many survey tools out there, each with its own pros and cons. As a researcher, it’s important to know the capabilities of all the tools in your toolbox and how to leverage them to suit your needs. Wufoo is a survey tool that I believe provides some of the best features for hacking together systems like the one discussed in this post. To keep it simple, we’ll focus on a system that captures student demographic information and then redirects to a looping, open-ended feedback form. The feedback form will allow students to input feedback about their experience with the design direction simply by switching between tabs. As they encounter something, they can just switch to the tab, type in the feedback, submit (which refreshes the form), and then return to interacting with the design direction.

In Wufoo, we’ll want to create two forms: one we’ll label 1. Demographics and another we’ll label 2. Feedback.

In the 1. Demographics form we’ll add four fields:

  1. *A single line text field labeled ID#
  2. A number field labeled AGE
  3. A dropdown field labeled GENDER
  4. A multiple choice (single select) field labeled CLASS

*MAKE SURE TO ADD THE ID# FIELD FIRST!!!!

In the 2. Feedback form we’ll add two fields:

  1. *A single line text field labeled ID#
  2. A paragraph text field labeled FEEDBACK

*MAKE SURE TO ADD THE ID# FIELD FIRST!!!!

Now, we’ll want to get the Permanent Link to the 2. Feedback form. It’s important to get the Permanent Link and NOT the Title Link. The Title Link is a link that uses the actual title of the form in the URL itself. Meaning, if you change the title of the form, the link you grabbed earlier will no longer work. Trust me, you don’t want to make this mistake when setting up a complex system – by using the Permanent Link you can set the system up and change the title all you want without breaking the system.

Go back into the Demographics form and open up Form Settings. Here, we’ll change the Confirmation Options to Redirect to Website. That website, of course, is the Feedback form link we just copied. We’ll paste it into the field, but we’re not done yet. Add the following to the end of the URL:

def/Field1=[entry:Field1]

This is parameter passing – one of Wufoo’s nifty features. See, if one Wufoo form redirects to another Wufoo form, you can set up a parameter pass to autofill a field in the second form with data entered into a field in the first form. The parameter pass above says to take the data in Field1 of Demographics (ID#) and pass it to Field1 in Feedback (also ID#). This is why it was important to add the ID# fields first when creating the two forms – by adding them first Wufoo identifies them as Field1.
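
To make the mechanics concrete, here’s a rough conceptual illustration (in Python) of what that parameter pass accomplishes. To be clear, Wufoo performs this substitution for you when it redirects – the code below just mimics the idea, and the permanent link is a placeholder:

    # Conceptual sketch only: Wufoo does this substitution server-side on redirect.
    # The permanent link below is a placeholder.
    FEEDBACK_PERMANENT_LINK = "https://example.wufoo.com/forms/zxxxxxxx/"
    REDIRECT_TEMPLATE = FEEDBACK_PERMANENT_LINK + "def/Field1=[entry:Field1]"

    def build_redirect(template: str, submitted_fields: dict) -> str:
        """Replace each [entry:FieldN] token with the value submitted in FieldN."""
        url = template
        for field_name, value in submitted_fields.items():
            url = url.replace(f"[entry:{field_name}]", value)
        return url

    # A student submits the Demographics form with ID# "K7" in Field1...
    print(build_redirect(REDIRECT_TEMPLATE, {"Field1": "K7"}))
    # ...and lands on the Feedback form with Field1 (the ID# field) prefilled:
    # https://example.wufoo.com/forms/zxxxxxxx/def/Field1=K7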

So why are we doing this? Well, two reasons:

  1. A better user experience for the participants – one less step
  2. We’ll be using the unique ID#s to tie each participant’s data together

Highlight the full permanent link including the parameter passing info you added, copy it, save the Demographics form and open up the Feedback form.

In the Feedback form, go to Form Settings, Confirmation Options, Redirect to Website, paste in the full permanent link with the parameter passing info, and save the form.

Now, test everything out by filling out the Demographics form like a student would. When you submit the Demographics form, you should be redirected to the Feedback form with the ID# field autofilled. Further, when you submit the Feedback form, it should refresh, but the ID# field should be autofilled with the same unique ID# you typed into the Demographics form. You now have a system to collect rolling feedback that doesn’t require the student to type in their ID# each time.

Now it’s time to set up our spreadsheet which will eventually receive all the responses submitted via the Demographics and Feedback forms.

Spreadsheet

Setting up our spreadsheet is relatively straightforward. We’ll need to create two tabs: DEMO and FEEDBACK (if you’d rather script this step, see the sketch after the column lists below). If you have a lot of spreadsheets in your account’s Drive, I suggest titling this spreadsheet with something that distinguishes it from the rest – you’ll need to pick it out of a lineup later on.

A word of advice: if you’re conducting a lot of studies – especially in parallel – hopefully you’ve come up with a naming/cataloguing system that allows you to keep track of a given study’s related files, folders, projects, etc. as they exist within the various tools you use.

In the DEMO tab create the following column titles:

  • TIMESTAMP
  • ID
  • AGE
  • GENDER
  • CLASS

In the FEEDBACK tab create the following column titles:

  • TIMESTAMP
  • ID
  • FEEDBACK
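
If you set these studies up often, you can script the tab and header creation instead of clicking through the Sheets UI. Below is a minimal sketch using the gspread Python library; it assumes you’ve configured a Google service account, and the spreadsheet title and email address are placeholders:

    # Optional: script the spreadsheet setup with gspread instead of creating the
    # tabs and headers by hand. Assumes a Google service-account JSON is configured.
    import gspread

    gc = gspread.service_account()                # reads your service-account credentials
    sh = gc.create("Index Card Hack - Study 01")  # placeholder title

    demo = sh.add_worksheet(title="DEMO", rows=200, cols=5)
    demo.append_row(["TIMESTAMP", "ID", "AGE", "GENDER", "CLASS"])

    feedback = sh.add_worksheet(title="FEEDBACK", rows=500, cols=3)
    feedback.append_row(["TIMESTAMP", "ID", "FEEDBACK"])

    # Remove the default "Sheet1" tab so only DEMO and FEEDBACK remain, then share
    # the spreadsheet with your own account so it shows up in your Drive.
    sh.del_worksheet(sh.sheet1)
    sh.share("you@example.com", perm_type="user", role="writer")  # placeholder email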

Alright, so at this point we’ve set up our collection mechanisms (i.e. Demographics and Feedback forms) and our corresponding receptacles (i.e. DEMO and FEEDBACK tabs in the spreadsheet). Now it’s time to set up our automation / aggregation mechanism.

Automation
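
My setup uses Zapier, so no code is required here: the zap simply watches each Wufoo form for new entries and appends them as rows to the matching tab of the spreadsheet. For the curious (or if you’d rather roll your own), here’s a rough sketch of the same plumbing using Wufoo’s WebHook integration and a small Python/Flask receiver. The field keys (Field1, Field2, DateCreated) are assumptions – verify them against the payload your own forms actually send:

    # Rough sketch of what the Zapier step does: receive each new Wufoo entry via
    # webhook and append it to the right tab of the spreadsheet. Field keys are
    # assumptions -- check them against your own forms' webhook payloads.
    import gspread
    from flask import Flask, request

    app = Flask(__name__)
    gc = gspread.service_account()
    sh = gc.open("Index Card Hack - Study 01")  # placeholder spreadsheet title

    @app.route("/hooks/demographics", methods=["POST"])
    def demographics_hook():
        entry = request.form
        sh.worksheet("DEMO").append_row([
            entry.get("DateCreated", ""),  # TIMESTAMP
            entry.get("Field1", ""),       # ID#
            entry.get("Field2", ""),       # AGE
            entry.get("Field3", ""),       # GENDER
            entry.get("Field4", ""),       # CLASS
        ])
        return "", 200

    @app.route("/hooks/feedback", methods=["POST"])
    def feedback_hook():
        entry = request.form
        sh.worksheet("FEEDBACK").append_row([
            entry.get("DateCreated", ""),  # TIMESTAMP
            entry.get("Field1", ""),       # ID#
            entry.get("Field2", ""),       # FEEDBACK
        ])
        return "", 200

    if __name__ == "__main__":
        app.run(port=5000)  # point each form's WebHook at the matching route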


Links


Index Cards


Analyzing Collaboration

Challenge:

How do I analyze field tests, usability tests, and observations where collaborative interactions occurred?

The K-12 classroom is a complex environment where a variety of scenarios consisting of different student-device configurations and human-computer interactions occur. One particular scenario that I find quite fascinating is what I refer to as 1:1 Small Group Collaborations. Here, groups of students within a class use their individual devices (1:1) to work collaboratively toward a shared objective.

What’s important to keep in mind in these collaborative scenarios is that the actions of one student can affect the experience of his or her co-collaborators. Further, when issues arise, the students involved – who may or may not be sitting on opposite sides of the room, or even in the same room – most certainly will not be able to make the connection between cause and effect, much less know how to report it. Therefore, EdTech companies that provide collaborative tools must do their due diligence and field test their products to make sure their design directions adhere to users’ expectations for collaborative work and that individual actions do not result in unforeseen bugs that affect the larger group – because dealing with that is the last thing an educator needs.

In this post, I will provide an overview of how to analyze such collaborative interactions in order to identify usability issues and bugs. To do so, I will use a simplified example of a real-life study I conducted that focused on the classroom use of a digital poster creation tool. Before I get into the meat and potatoes, let’s set up a little bit of context.

Let’s say we have an educator who is introducing the digital poster creation tool in question to her 8th grade class. She intends to walk them through the initial setup of a collaborative document and then let the five or six small groups work autonomously.

In order to study the onboarding process and otherwise field test the tool, we use a multi-POV camera system to record the classroom as a whole as well as the individual device interactions of one of the small groups. To keep it simple for this example, this group consists of three students. So, to recap, we’re recording four POVs: the classroom and three student devices.

After collection, we aggregate the recorded files into four video grids for analysis. Here, each grid focuses on one of the four recorded POVs while playing the remaining three in sync for context. For more info on video grids, please see INSERT. Next, we set up a spreadsheet where we will document our observations from the video grids. This spreadsheet should consist of at least three columns: one for our observed [inter]actions ( NOTE ), one for the corresponding POV ( P# ), and one for the timestamp at which the [inter]action occurred.

We bring up the Room Camera (RC) video grid first. While watching this video grid, we’ll focus only on what the teacher says and does. It’s important not to get too caught up in what’s going on in the other grids – each will get its turn in the analysis. Here’s what we extract:

  • Teacher instructs students to create new document
  • Teacher instructs students to invite collaborators
  • Teacher instructs students to each add new page to document
  • Teacher instructs students to add paragraph component to their page
  • Teacher responds to students’ complaints that their work disappeared

In summation, it appears that the teacher was instructing the students on how to set up a collaborative document when she was interrupted by a possible issue experienced by the students.

To gain more insight we move on to the next video grid focusing on the first student participant, P1. Here we extract the following:

  • Student creates new document
  • Student invites collaborators
  • Student adds new page
  • Student adds paragraph component
  • System deletes paragraph component while student enters text

In summation, this student seems to be the one in charge of initiating the collaborative document – probably tapped by the teacher before the recordings began. However, while following the teacher’s instructions to add a paragraph, the student experienced a rather catastrophic issue – work just disappeared. In the actual study this example is based on, the student clicked around for several minutes looking for an “undo” button while muttering some choice phrases under their breath.

To hopefully gain more insight into the issue we review the next video grid, P2. Here we extract the following:

  • Student receives invite and opens document
  • Student adds new page
  • Student adds paragraph component
  • Student adds text to paragraph component

In summation, P2 received the invite from P1 and followed the teacher’s instructions but didn’t seem to experience the same issue that P1 did.

Alright, one more video grid to review. We extract the following from P3:

  • Student receives invite and opens document
  • Student adds new page
  • Student adds paragraph component
  • System deletes paragraph component while student enters text

In summation, P3 received the invite from P1 and followed the teacher’s instructions but unlike P2 this student experienced the same issue as P1. So what gives?

Here’s where our coding format / spreadsheet structure come into play.

Because we timestamped everything, we can sort all our recorded actions for RC, P1, P2, and P3 into chronological order for a step-by-step transcript – which puts everything into a more holistic perspective. Remember, these students are working on a collaborative document – meaning one student’s actions can affect the experience of his or her co-collaborators. What this analysis reveals (and revealed in the real-life study) is a show-stopper of a bug. P2 took a little longer to respond to the teacher’s instructions to add a new page – and did so while the other collaborators were in the middle of entering text into the document. The addition of the new page caused all works-in-progress to be lost.
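
As a concrete illustration of that last step, here’s a small Python sketch that merges the per-POV note logs and sorts them by timestamp to produce the step-by-step transcript. The timestamps below are invented for the example – in practice they come straight from the coding spreadsheet:

    # Sketch: merge the per-POV observation logs and sort by timestamp to produce
    # a chronological transcript. Timestamps are invented for illustration.
    rows = [
        # (POV, timestamp "MM:SS", NOTE)
        ("RC", "01:10", "Teacher instructs students to add paragraph component to their page"),
        ("P1", "01:25", "Student adds paragraph component"),
        ("P2", "01:40", "Student adds new page"),  # the late page add
        ("P1", "01:42", "System deletes paragraph component while student enters text"),
        ("P3", "01:43", "System deletes paragraph component while student enters text"),
        ("RC", "02:05", "Teacher responds to students' complaints that their work disappeared"),
    ]

    def to_seconds(ts: str) -> int:
        minutes, seconds = ts.split(":")
        return int(minutes) * 60 + int(seconds)

    transcript = sorted(rows, key=lambda row: to_seconds(row[1]))

    for pov, ts, note in transcript:
        print(f"{ts}  [{pov:>2}]  {note}")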

Visual Communication

Challenge:

How do I communicate research findings, ideas, and abstract concepts in a manner that is easily digestible by my team and other stakeholders?

Personally, I am not a huge fan of bullet-point reports. Of course, they have their purpose and place in an organization – they’re useful as executive summaries for high-level management. However, I have found that when communicating with the creative folks on your team (i.e. the visual designers, information architects, and even developers), one must communicate in a language more suited to the way they think. Below are examples of video overviews I have created to communicate research findings as well as thoughts, suggestions, etc.

[Embedded video overviews]

Collaborative Usability Tests

Challenge:

How do I conduct usability tests on products intended for the complex, multi-user, collaborative environment that is the modern-day, blended learning classroom?

Usability tests are the task-based assessment of a design direction. Here, participants execute predefined tasks that require them to interact with various elements, features, etc. of said design direction. Performance can be assessed in a variety of ways depending on what it is you are trying to find out or measure. Now, in K-12 education, the way users are meant to interact with a design direction can get a little complex, as products are intended to be used not only by a single person working individually but also by 20+ people working in parallel or in conjunction to achieve a common objective. Which is why I break usability tests down into 3 categories based on the nature of what is to be tested:

  1. Individual
  2. Group
  3. Collaborative


Individual

Moderated

Example: Student | Dynamic Geometry Tools

Moderated 1-on-1s are excellent when your experimental design requires a great deal of control and when what you are trying to measure is quantitative in nature (such as time on task). Here, each participant executes the same predefined tasks on their own. Results are then used to formulate an unbiased average.

Unmoderated (Remote)

Example: Teacher | Classrooms Prototype

Unmoderated, remote usability tests allow participants to complete specified tasks on their own time in a setting convenient to them. Meaning, you designate the tasks and they complete them someplace else. These types of usability tests allow data to be collected without the need for a researcher to be present. These are great when you’re testing with educators as they are some of the most time-strapped people on the planet.

Unfortunately, I have found that many of the online tools for remote usability testing fall short in two areas: (1) getting participants up and running and (2) providing useful outputs/artifacts to the researcher. For example, if the steps just to get into the test are too complicated, participants will say f-it and drop off. Further, many online tools will try to sell you their ability to analyze the tests for you. Sorry, there is no one-size-fits-all analysis tool as interfaces and the objectives they allow users to accomplish vary in complexity. User behavior on its own is extremely complex and, if you consider yourself a good researcher, you should be doing the analysis yourself. Of course, this requires the tool to provide you with a recording of each participant’s session / interactions. Seems like a no-brainer but you’d be surprised how many tools out there do not provide this crucial artifact.


Group

Example: Principals | Usage Reporting Dashboards

Group Usability Tests are great when you’re not as concerned with quantifying individual task performance (i.e. time on task) but rather getting a grasp on a design direction’s general learnability. I have found that when you sit 4-5 students or educators together at a table (each with their own device) and present them with loosely defined tasks to complete on their own, they naturally gravitate towards helping each other complete the tasks. The benefit here being the highly qualitative conversations and interactions that occur between the participants.


Collaborative


Example: Students | Studio Collaboration

Collaborative Usability Tests (my favorite) are great when the design direction you are testing is geared for collaboration and/or requires multiple end-users to complete a task. In K-12 education, collaboration is one of the coveted 21st Century Skills – so having a product that does it right is important.

Field Tests & Observations

Observations are crucial to user research: they allow end-user behavior to be captured as it exists naturally, in real-world form. Without observations, our understanding of end-user behavior would be based entirely on speculation, hearsay, self-reporting, and other biased translations. However, capturing end-user behavior naturally can be tricky.

As researchers, we want to capture quality data without being too invasive or perturbing the environment we are observing. Ideally, we must be omnipresent flies on the wall. The traditional pencil-and-field-notes approach to observations (researcher observes over the end-user’s shoulder and scribbles notes) is burdensome as it inserts the researcher too closely into the equation and, more importantly, limits the amount of data that can be collected to what can be seen, written, and remembered from a single, biased perspective. Which is why I encourage researchers to come up with clever ways to (1) remove themselves from the equation and (2) capture higher-quality data from multiple, unbiased perspectives. This is especially important when what is being observed is a complex, multi-end-user environment such as a classroom.

Remember, you can only observe something once. If you capture it correctly, you can review and analyze as many times as you like.

Multi-POV

Example: 10th Grade Geometry Classroom | Math Techbook

Single-POV

Example: Educator Searching for Content / Media | Search