SIM started out as our capstone project at the HCDE Masters program. Fueled by our mutual passion and fascination with robots, Sahil, John, Dorothy and I wanted to take a human-centered approach to designing and building social robots.
The idea to build SIM came from our initial research and observation. Amazing technological strides are being made in the field of social robotics, but we found that interacting with robots still felt unnatural and forced. We wanted to take the human-centered approach that is so key to building digital experiences today and apply it to the world of social robotics.
Personally, this was an awesome opportunity for me to bring together the three things that I'm passionate about - Animation, UX Design, and Robotics.
Sep 2016 - Ongoing
Interaction Design, Prototyping, Visual Design, Character Design, Usability Testing, Supporting UX Research, Supporting Development
Using an iterative, data-driven approach, we built a prototype social robot with basic emotional intelligence. We developed an Intelligence Model that defines the robot's personality and its reactions when it interacts with users. Our tests revealed that people felt more connected to a robot with social intelligence: they personified it, tried talking to it like a pet, and expected more human responses. The project was very well received and won multiple awards at the University of Washington.
Our first inspiration was 'Project EMAR', a robotics project run by professors at the University of Washington to create a robot that can collect survey data in context from users. The idea of robots integrating into society was exciting, but the biggest obstacle we noticed was that people who were exposed to EMAR had preset expectations of how to interact with it - likely because of its humanoid form.
Other studies in Human-Robot Interaction also suggest that accounting for simple social aspects in the design of robots significantly improves our interactions with them. Robots that display social cues and nonverbal behaviors during collaborative tasks contribute to improved task performance, higher recall accuracy, error prevention, and faster task-completion times.
It was clear from our research that people had social expectations of interactive robots. If robots are to become a part of our everyday lives, it isn't enough for them to help us physically. They need to provide mental and emotional assistance as well. This thought was the spark behind our research question:
We had done the groundwork, were super inspired, and had our research question and hypotheses. Now we needed a way to test them. We built SIM - our prototype of a socially intelligent robot. SIM can engage in affective interchanges with people by understanding and expressing emotions. It is governed by 'The Social Interaction Model'.
Built on theories of human emotion from psychology, behavioral sciences and human-robot interaction, the SIM table is a map of situations, emotions, behaviors, and functions. The SIM table defines how SIM would react to an emotional stimulus in a given situation, and how that emotional response would affect the dynamics of that interaction.
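To make the idea concrete, a SIM-style table can be thought of as a lookup from a situation and emotional stimulus to an emotion and behavior. The situations, emotions, and behaviors below are illustrative placeholders, not the actual contents of our SIM table:

```python
# Hypothetical sketch of a SIM-style lookup table.
# (situation, stimulus) -> (emotion, behavior); all entries illustrative.
SIM_TABLE = {
    ("greeting", "user_approaches"): ("happy",   "tilt_head"),
    ("greeting", "user_ignores"):    ("sad",     "droop_head"),
    ("task",     "user_agrees"):     ("content", "nod"),
    ("task",     "user_disagrees"):  ("curious", "lean_forward"),
}

def react(situation: str, stimulus: str) -> tuple:
    """Return (emotion, behavior) for a stimulus, with a neutral fallback."""
    return SIM_TABLE.get((situation, stimulus), ("neutral", "idle"))

print(react("task", "user_disagrees"))  # -> ('curious', 'lean_forward')
```

Encoding the model as a flat table like this is what let us enumerate, ahead of time, every expression the robot would ever need to make.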
Knowing beforehand which emotions SIM needed to express made it easier to work out the physical design and technical details of our prototype. For the facial design, we wanted to keep things as simple as possible. I designed SIM's face to be minimal yet versatile, so that SIM's expressions are exaggerated and easy to read.
For the body, we used three motors - two for the head and one for the torso. This gave us enough flexibility to design expressive movements. We mounted an Android-powered phone as SIM's head, giving us a blank canvas to design and display the face on.
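One way to picture how three motors can yield expressive movement is as keyframed poses over three channels. This is a hypothetical sketch, not our prototype's code; the angles and gesture names are made up for illustration:

```python
# Illustrative sketch: composing gestures from three motor channels
# (head pan, head tilt, torso rotation). Angles are placeholder values.
from dataclasses import dataclass

@dataclass
class Pose:
    head_pan: float   # degrees
    head_tilt: float  # degrees
    torso: float      # degrees

# Each gesture is a sequence of keyframes a motor driver would step through.
GESTURES = {
    "nod":          [Pose(0, 20, 0), Pose(0, -10, 0), Pose(0, 0, 0)],
    "head_shake":   [Pose(-25, 0, 0), Pose(25, 0, 0), Pose(0, 0, 0)],
    "curious_lean": [Pose(15, 10, 20)],
}

def keyframes(gesture: str) -> list:
    """Return the pose sequence for a named gesture."""
    return GESTURES[gesture]
```

Even with only three degrees of freedom, layering head and torso motion like this is enough to make behaviors such as a nod or a curious lean read clearly.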
It was time to put SIM to the ultimate test! We designed a usability study focused on building trust between a user and the robot. We tested two different versions of SIM with each user - one with social intelligence and one without. We ran the test with 10 people, randomizing the order in which they interacted with the robots. We recruited users ranging from the technically savvy to those with very little technical experience, and spread our participants across the age spectrum and across genders. Our goal was to see if people trusted the socially intelligent model more than the other model.
We had users team up with SIM to make decisions on fairly ambiguous questions, like picking the dominant color in a picture. We repeated this task multiple times, and had SIM agree or disagree with the user on each round. We observed the interactions for trends in the number of times the users would change their answers based on SIM's suggestions, and how this changed over time and between versions.
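The trend we looked for can be summarized with a simple metric: of the rounds where SIM disagreed, how often did the user switch to SIM's answer? This is a sketch of that analysis under assumed field names, not our actual study code:

```python
# Hypothetical analysis sketch: answer-switching rate as a rough proxy
# for trust. Field names ('sim_disagreed', 'user_switched') are assumed.
def sway_rate(rounds):
    """Fraction of disagreement rounds where the user adopted SIM's answer."""
    disagreements = [r for r in rounds if r["sim_disagreed"]]
    if not disagreements:
        return 0.0
    switched = sum(r["user_switched"] for r in disagreements)
    return switched / len(disagreements)

trials = [
    {"sim_disagreed": True,  "user_switched": True},
    {"sim_disagreed": True,  "user_switched": False},
    {"sim_disagreed": False, "user_switched": False},
]
print(sway_rate(trials))  # -> 0.5
```

Comparing this rate between the two versions of SIM, and across early versus late rounds, is what let us talk about trust changing over time.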
We found that people personified the socially intelligent robot, using adjectives like fun, cute, friendly, happy, helpful, and thoughtful to describe it. We also found that people trusted the socially intelligent version a little more, and tried to engage in conversation and ask it for reasons behind its suggestions.
The study showed that social intelligence in robots is a step in the right direction. With this prototype and study, we tested only one aspect - trust. This is only the first step towards building intuitive and natural interactions between humans and robots. Given more time and resources, our goal is to raise SIM's fidelity by designing a shell and moving the facial expressions from pre-rendered video to a dynamically generated model. One of the major challenges we faced as a team exploring this space was the lack of targeted tools for designing and prototyping robots. We hope to create a platform that will encourage people to explore the social robotics space.
In the future we also hope to create modular kits for robotics enthusiasts to get started building social robots, and open-source our code so the robotics community can build on SIM's emotion engine. We're excited to see what doors SIM unlocks!