Emotional Communication through Connected Devices

Investigating the communication of emotion through tangible connected objects (the Internet of Things)

The aim: Show that the communication of emotions is a viable application for Internet of Things (IoT) devices. The challenge: Use a novel methodology and create an experimental prototype to draw conclusions about the remote communication of emotions. Deliverable: a 10,000-word dissertation that motivates, explains, analyses and discusses the use of IoT in the remote communication of emotions. [Note: this project achieved a grade of distinction (80%) from UCL]

In the film adaptation of Harry Potter and the Chamber of Secrets, Ron Weasley receives a ‘Howler’ from his mother after crashing the family car. The Howler manifests in the movie as an angry floating letter that yells its message to Ron in his mother’s voice. The Howler is coloured red, moves in an erratic, jerky motion, and eventually tears itself apart.

As the final part of my Master's course, I worked on an empirical research project that takes the idea of a Howler out of Hogwarts and investigates its viability in today's world: in scientific terms, the remote communication of emotions through IoT. The research consisted of four stages: 1) secondary research to motivate and inform the experimental design, 2) creating a prototype for participant interaction, 3) conducting the experiment, and 4) analysing and interpreting the results.

Pitch presentation. Used in a one-minute presentation to the MSc cohort.

Prototype - Surprise. Short clip of the experimental prototype communicating the emotion of surprise.

Secondary Research

I conducted a thorough literature review to understand the research space in IoT devices and their applications. The domain areas I delved into included:

1) Tangible Autonomous Interfaces (TAIs) - The main research area of the study; IoT objects that approach a life-like level of engagement with users.

2) Remote Communication through IoT - The main application of the technology I investigated; the prototyping of objects to facilitate engaging communication. 

3) Gestures as a way of communicating emotion - The methodology I used to test the viability of product movement as a medium for communication. This directly informed the experimental design of gesture elicitation, in which two studies are conducted: one to elicit participant communication (i.e. gestures) and one to interpret that communication as mediated by the IoT device (i.e. motion patterns).

Literature review. Sample of some of the research papers referenced in motivating the project.

 

Prototype Design

Requirements: The prototype needed to be capable of autonomous motion, and experiment participants needed to be able to both create gestures and interpret the prototype's motion.

Prototype choice - Why a Sphero: The Sphero is a commercially available toy, a small remote-controlled ball that can be programmed to perform autonomous actions. The device was also chosen because it includes onboard sensors for recording motion.

Delimiting input and output space: In theory, the Sphero can move anywhere. To ensure consistent and reliable measurement of movement, I limited the Sphero's motion space to a 35cm by 24cm board.

Physical limitations: Sphero movement is accomplished by motors shifting the asymmetrical weighting of the device. This means that, when idle, the Sphero handles like a half-filled water bottle. As a result, participants found it difficult to manipulate the device directly with their hands; the Sphero would not roll in a natural or predictable manner, creating unreliability in measurement. I solved this by 3D printing a holder for participants. The holder allows the device to roll naturally in a participant's hand, while the act of 'grabbing' the device prevents unwanted movement of the Sphero.

Recording and coding motion: The initial idea was to record participants' gestures using the Sphero's onboard sensors, then transform the recordings directly into code for playback. Various attempts were made to make this work, including live conversion of the data through JavaScript and proportional-integral-derivative (PID) controllers. However, it became evident that the weighting of the Sphero itself distorted the recorded movement when the motors were inactive. Ultimately, I took a Wizard-of-Oz approach: recording videos of participant gestures and manually re-coding the movement for subsequent 'playback'.
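The Wizard-of-Oz playback step can be sketched as below. This is an illustrative sketch, not the study's actual code: the pattern values are hypothetical, and `orb` is assumed to expose a sphero.js-style `roll(speed, heading)` command.

```javascript
// Hand-coded motion patterns transcribed from participant videos:
// each pattern is a list of {speed, heading, ms} commands.
// Values here are illustrative, not the study's designed patterns.
const patterns = {
  // jerky, fast, direction-reversing movement
  anger: [
    { speed: 255, heading: 0,   ms: 300 },
    { speed: 255, heading: 180, ms: 300 },
    { speed: 255, heading: 0,   ms: 300 },
  ],
  // slow, low-energy drift
  sadness: [
    { speed: 40, heading: 90, ms: 1500 },
  ],
};

// Total playback time of a pattern, useful for timing video recordings.
function totalDuration(pattern) {
  return pattern.reduce((sum, cmd) => sum + cmd.ms, 0);
}

// Play a pattern back on a connected device. `orb` is a hypothetical
// handle assumed to expose roll(speed, heading), as in the sphero.js SDK.
async function play(orb, pattern) {
  for (const cmd of pattern) {
    orb.roll(cmd.speed, cmd.heading);
    await new Promise((resolve) => setTimeout(resolve, cmd.ms));
  }
  orb.roll(0, 0); // stop at the end of the pattern
}
```

Keeping the patterns as plain data made it easy to tweak individual commands between pilot iterations without touching the playback logic.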

Cross-section of Sphero. Movement is accomplished by the motors shifting the base weight of the device.

Sphero holder. The holder invites participants to manipulate the device by 'grabbing' the device with one hand.

Sphero input path [left] vs sensor-recorded path [right]. Due to the discrepancy between the input and recorded data, I modified the initial plans for recording and coding motion.

 

Eliciting Gestures and Designing Sphero Movement

Before starting the experiment, I ran pilot tests with 6 participants. The pilot tests helped validate and refine the experiment and prototype design. Participant feedback over each iteration shortened the experiment from 30 minutes to 20 minutes, reduced participant confusion, and improved experimental reliability and validity.

17 participants created gestures for seven emotions - anger, disgust, fear, interest, joy, sadness, and surprise. 

Participants were video recorded for each emotion, and were invited to verbalise their thoughts (Think Aloud protocol) during elicitation. 

The video recordings of the gestures were analysed, with each gesture broken down into individual features. Consensus data on these features for each emotion was used to design the motion patterns for the Sphero to execute.
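The consensus step can be sketched as taking the modal value of each feature across participants, with ties flagged as "no consensus" (the case where multiple candidate patterns were designed). The feature names and example codings below are hypothetical illustrations, not the study's coding scheme.

```javascript
// Given one feature coding per participant for a single emotion,
// return the modal value of each feature, or null when the top
// values tie (no consensus).
function consensus(codings) {
  const result = {};
  for (const feature of Object.keys(codings[0])) {
    // Count how often each value of this feature was observed.
    const counts = {};
    for (const coding of codings) {
      const value = coding[feature];
      counts[value] = (counts[value] || 0) + 1;
    }
    // Sort values by frequency; a tie at the top means no consensus.
    const sorted = Object.entries(counts).sort((a, b) => b[1] - a[1]);
    result[feature] =
      sorted.length > 1 && sorted[0][1] === sorted[1][1] ? null : sorted[0][0];
  }
  return result;
}

// Hypothetical codings of three participants' "anger" gestures.
const anger = consensus([
  { movement: "zigzag", speed: "fast", area: "full board" },
  { movement: "zigzag", speed: "fast", area: "centre" },
  { movement: "circular", speed: "fast", area: "full board" },
]);
// anger → { movement: "zigzag", speed: "fast", area: "full board" }
```

A `null` feature value signals that more than one motion pattern should be proposed for that emotion, mirroring how the study handled missing consensus.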

Describing gestures for analysis and design. Each gesture from each participant was broken down into individual features, such as type of movement, speed and area covered.

Proposed motion patterns for the Sphero to execute. Note that when consensus could not be achieved, multiple patterns were designed for interpretation.

 

Interpreting Sphero Movement

15 participants interpreted video recordings of the Sphero executing each motion pattern autonomously. 

Participants had to choose which emotion the Sphero was expressing, and to indicate their confidence in each interpretation.

Communication success was mixed, with only Anger, Sadness, Interest and Joy interpreted correctly at a rate higher than chance (14%, i.e. one in seven emotions).
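One simple way to formalise "higher than chance" for a seven-emotion forced choice is an exact one-sided binomial test against the 1/7 guessing rate. This is an illustration of the idea, not a reproduction of the study's analysis, and the counts in the comments are hypothetical.

```javascript
const CHANCE = 1 / 7; // ~14%: one correct guess in seven emotions

function accuracy(correct, total) {
  return correct / total;
}

// n-choose-k, computed iteratively to avoid large factorials.
function choose(n, k) {
  let result = 1;
  for (let i = 1; i <= k; i++) result = (result * (n - k + i)) / i;
  return result;
}

// One-sided exact binomial test: probability of seeing at least
// `correct` right answers out of `total` if participants were guessing.
function pValueAboveChance(correct, total, p = CHANCE) {
  let pValue = 0;
  for (let k = correct; k <= total; k++) {
    pValue += choose(total, k) * Math.pow(p, k) * Math.pow(1 - p, total - k);
  }
  return pValue;
}

// e.g. 9 of 15 correct gives accuracy 0.6, far above the 1/7 chance
// rate, while 2 of 15 (~0.13) sits below it.
```

Raw accuracy above 14% is only suggestive on its own; the exact test accounts for how easily a small sample can beat chance by luck.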

The results suggested that remote communication of emotions through IoT devices is viable, but needs further study for reliable usage. 

Accuracy scores for motion pattern interpretation. Only the patterns for Anger, Sadness, Interest and Joy were correctly interpreted at a rate higher than chance.

 

Limitations and Personal Takeaways

The project helped pave the way for future research in emotional communication as mediated by IoT.

However, my inexperience in coding, particularly in JavaScript, proved to be a limitation. As a result, I could not conduct a more quantitative analysis of the sensor data from gesture elicitation, which would have enabled a more fine-grained and objective analysis of participant gestures.

Given more time, I would have invested more in designing and coding the prototype to overcome the asymmetrical weighting of the device.

Additionally, I could have conducted a deeper qualitative study of participants' think-aloud data during elicitation and interpretation, shedding more light on the mental models employed when communicating through the limited bandwidth of the Sphero's motion.