In 2016, I began experimenting with emotion processing technology and built some prototypes of "emotion games," with the intention of making data collection for ASD easier to manage. I worked out of the Georgia Tech Ubicomp Lab designing an emotional processing study and built the supporting technology.
Looking at a person and asking, "Are they happy?" is a luxury most of us take for granted. And even though researchers and engineers alike have advanced computer vision to the point where we can accurately track facial expressions, correlating these facial metrics with actual behavioral characteristics remains an incomplete practice. Other methods and apps have tried to fill in these gaps, but many fall short.
Duke University's app, Autism and Beyond, for example, mostly collects observational data of children responding to various stimuli. Other apps do assess behavioral metrics, but usually without considering changes in facial expression. While this is certainly a key to inspecting the intricacies of ASD or any emotional processing condition, Empa's goal is to look at distinct differences between primed and non-primed emotional behavior.
Ideation & Rapid Prototyping
Empa's prototyping process started with an extremely basic game: the user is shown an emoji and earns a point by replicating it with their own facial expression.
After some tinkering, I created a prototype of this basic game as an iOS app written in Swift and Objective-C. It uses the streaming Affectiva API to process my facial reactions in real time.
Initial working prototype
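The core scoring loop of that basic game can be sketched roughly as follows. This is an illustrative sketch only: the `EmotionFrame` and `TargetEmoji` types and the threshold value are hypothetical stand-ins, not the actual Empa code or the Affectiva SDK's real types (Affectiva-style detectors report per-emotion scores in the 0–100 range, which is the convention assumed here).

```swift
// Hypothetical sketch of the emoji-imitation scoring logic.
// EmotionFrame stands in for one frame of streaming emotion scores (0–100).
struct EmotionFrame {
    let joy: Double
    let sadness: Double
}

// The emoji the user is asked to imitate.
enum TargetEmoji {
    case happy // 😊
    case sad   // 😞
}

// Award a point when the expression matching the target emoji
// crosses a confidence threshold.
func matchesTarget(_ frame: EmotionFrame,
                   target: TargetEmoji,
                   threshold: Double = 70.0) -> Bool {
    switch target {
    case .happy: return frame.joy >= threshold
    case .sad:   return frame.sadness >= threshold
    }
}
```

In a real app, a function like this would be called once per frame delivered by the emotion-processing SDK, incrementing the score on the first frame that matches.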
Then, after presenting my work – and by presenting I mean tapping on Dr. Gregory Abowd's shoulder and showing him the app – he pointed me to Dr. Rosa Arriaga, who is currently advising my work and has guided me through this process.
Experimental & Interaction Design
"What is the relationship between the viewer and the viewed?"
Here, the cycle of consuming content and producing a reaction forms the basis of how we explore the relationship between media and social interactions.
We realized that the key to new insight into this issue was studying this relationship over time. What questions could we ask then? With time-series analysis, we don't just have a single overall response to analyze; e.g., "User A viewed Video B. They had reaction C."
Emotion over time
Within the context of ASD, we can easily track relative changes in emotion over time by simply measuring the slope of whichever facial expression is being measured at any given moment. In the graph below, we can see happiness being tracked over time for one user.
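The slope measurement described above can be sketched as a finite-difference calculation over timestamped samples. The `Sample` type here is an illustrative assumption, not the actual Empa data model:

```swift
// Sketch: approximate the slope of an emotion metric (e.g. happiness)
// between consecutive timestamped samples using finite differences.
struct Sample {
    let time: Double      // seconds since task start
    let happiness: Double // 0–100 score
}

// Returns one slope value (score units per second) per pair of
// consecutive samples.
func slopes(_ samples: [Sample]) -> [Double] {
    guard samples.count > 1 else { return [] }
    return zip(samples, samples.dropFirst()).map { a, b in
        (b.happiness - a.happiness) / (b.time - a.time)
    }
}
```

A sustained run of positive slopes would then correspond to a visible upward trend in the happiness graph.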
Science and discovery ultimately come back to being able to ask good questions, which is why the three "layers" of interaction in Empa manifest themselves as three variables and questions we want to analyze:
The what, the how, and the why.
On a basic level, we have touch and sight. Recording these interactions comes down to simple questions like:
"Which button did they press?"
These kinds of questions offer relatively little insight into behavior, but allow us to answer the what of user actions.
These questions ultimately lead us deeper into user actions, allowing us to analyze our users in two dimensions and measure the how of change:
"How did their choice patterns change?"
Finally, we can make our data three-dimensional. With three variables/interactions to look at and gather insight from, we can begin to answer the why of change.
"Why did their choice patterns change?"
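One way to picture these three layers is as fields on a single logged event. The record below is a hypothetical sketch, not Empa's actual schema: each judgment captures the what (the choice made), the how (when it was made, so choice patterns can be compared over time), and the why (the facial affect recorded alongside the choice):

```swift
// Illustrative record combining the three "layers" of interaction.
struct JudgmentEvent {
    let imageID: String      // which image was judged
    let rating: Double       // slider value, 😞 (0.0) … 😊 (1.0) — the what
    let timestamp: Double    // seconds since session start      — the how
    let facialAffect: Double // concurrent happiness score       — the why
}
```

Logging every interaction in this shape means the what, how, and why questions all become queries over the same event stream.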
Basic UX flows
Here are some sketches for basic treatment flows and the technical organization of the interface.
Users will be presented with an image that they have to judge by rating it on a slider from 😞 to 😊.
In order to observe differences in bias between users, we took our initial prototyping game interface (which had users imitate a given emoji on the screen), and combined it with our judgment interface.
Experimental Group 1: Users have to imitate a 😊 in order to continue
Experimental Group 2: Users have to imitate a 😞 in order to continue
Control Group: Users continue directly to judge images without priming.
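The three study arms above can be encoded directly in the app. This is a sketch under assumed names (the enum and the round-robin assignment are illustrative, not the study's actual randomization procedure):

```swift
// Sketch of the three study arms; names are illustrative.
enum StudyGroup: CaseIterable {
    case primedHappy // must imitate 😊 before judging
    case primedSad   // must imitate 😞 before judging
    case control     // judges images with no priming step

    var requiresPriming: Bool {
        self != .control
    }
}

// Simple round-robin assignment to keep group sizes balanced.
func assignGroup(participantIndex: Int) -> StudyGroup {
    StudyGroup.allCases[participantIndex % StudyGroup.allCases.count]
}
```

Branching the session flow on `requiresPriming` keeps the judgment interface identical across groups, so the priming step is the only variable.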
From these sketches and thoughts about "layering" interaction, we designed a study with three primary goals in mind:
Decentralize data collection and make a mobile-first, distributed study.
One of the primary problems with data collection in science is not only how unregulated and unstandardized it is, but also that collection is often limited to having test subjects come to a lab, where data is gathered with complicated, immobile equipment.
Empa lives inside of an app and can be taken anywhere at any time. This places Empa in a category of scientific tools especially suited to field data collection.
Identify differences between primed and non-primed emotional judgment behavior.
Correlate changes in facial affect in reference to emotional judgment tasks, rather than strictly emotional observation tasks.
The key here is that, underneath every emotional judgment task, we use artificial neural networks to analyze the user's facial expressions in real time. So, instead of simply observing these changes while the user watches a video, we get to see changes in behavior over the course of any judgment task and automatically reference the facial data recorded at corresponding times.
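The "automatically reference at corresponding times" step amounts to aligning two streams by timestamp. As a minimal sketch (the `AffectFrame` and `Task` types are illustrative assumptions, not Empa's actual types):

```swift
// Sketch: align facial-affect frames with a judgment task by timestamp,
// so each task can be analyzed against the expressions recorded while
// the user was making that judgment.
struct AffectFrame {
    let time: Double // seconds since session start
    let joy: Double  // 0–100 score
}

struct Task {
    let start: Double
    let end: Double
}

// Select the affect frames captured during a task's time window.
func framesDuring(_ task: Task, in frames: [AffectFrame]) -> [AffectFrame] {
    frames.filter { $0.time >= task.start && $0.time <= task.end }
}
```

Once frames are grouped per task this way, any per-task statistic (mean affect, slope, peak) falls out of a simple reduction over the selected frames.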
The funny thing about science is that data collection apps and methods are often remarkably terrible to use and unstandardized. It was one of my goals to make the data collection process not only smooth for the test subject, but also for the researcher in question (me).
I thus designed the task flows of the test subject and the researcher as pieces that fit into one another.
Current beta release:
Plans for the app are to improve its flexibility and to gather test groups with other emotional processing conditions, namely PTSD and anxiety, and perhaps even to see what effects different drugs have on the results of different experimental groups.