Sophie Kleber knows a thing or two about AI. The executive director of global product and innovation at Brooklyn-based digital agency Huge is a pioneer in designing with data. She creates future-forward user experiences, tackling design problems for clients ranging from IKEA and Comcast to Thomson Reuters, and continuously pushing the boundaries of ‘what’s possible’ to transform it into ‘what should be’.
Here, she explains how to design for emotional intelligence, lifting the lid on how empathy is becoming an important design element in artificial intelligence.
How do emotions and machine learning go together?
Sophie Kleber: Machine learning describes the process of a machine being able to learn and adjust its behaviour over time without being programmed. Emotions play a crucial role in human behaviour. Detecting, understanding and responding to emotion is one of the first things humans learn, and maybe the one thing we’ll never master. Emotional or affective computing might just be the puzzle piece that moves machines from being the best calculators to actually being intelligent.
What are the main challenges in getting computers to detect emotions?
SK: First, emotions are complicated. They are hardly ever pure in their expression, and they are often subconscious. Detecting these subconscious emotional currents and accurately categorising them is possible only with a combination of voice interpretation (accent, pitch, contour, tonality and timing of speech all give clues about a person’s emotional state) and the detection of facial micro-expressions. These are expressions that, even if not always detectable by a human conversation partner, strongly hint at how we really feel.
Second, it’s difficult to understand the context of emotions. Many experiments with emotional detection are currently conducted in a lab where the user’s undivided attention guarantees a correlation between trigger and emotional response. In reality, emotions linger or are delayed from the actual trigger, or they swell up unexpectedly from memory. Did I mention emotions were complicated?
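The multimodal combination Kleber describes can be sketched as a simple late-fusion step. This is an illustrative assumption on my part, not a description of any real product: the per-emotion scores, labels and weights below stand in for the outputs of hypothetical voice and facial-expression classifiers.

```python
# Minimal late-fusion sketch: combine emotion probabilities from a
# hypothetical voice model and a hypothetical facial-expression model.
# The weights and label set are illustrative assumptions, not a real system.

VOICE_WEIGHT = 0.4
FACE_WEIGHT = 0.6  # micro-expressions weighted a little higher here

def fuse_emotions(voice_scores, face_scores):
    """Weighted average of per-emotion probabilities from two modalities."""
    labels = set(voice_scores) | set(face_scores)
    fused = {
        label: VOICE_WEIGHT * voice_scores.get(label, 0.0)
             + FACE_WEIGHT * face_scores.get(label, 0.0)
        for label in labels
    }
    # Return the most likely emotion along with its fused score
    best = max(fused, key=fused.get)
    return best, fused[best]

voice = {"neutral": 0.5, "frustrated": 0.3, "happy": 0.2}
face = {"neutral": 0.2, "frustrated": 0.7, "happy": 0.1}
label, score = fuse_emotions(voice, face)
print(label, round(score, 2))  # frustration wins once both cues are combined
```

A real system would of course have to handle the contextual problems Kleber raises next: delayed or lingering emotions break the neat assumption that the two modality scores refer to the same trigger at the same moment.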
What are your favourite examples of emotion AI or conversational UIs?
SK: I’m always looking for examples that make the world a bit more like the way I want computers and technology to work. It should work for us, and support us, like Big Hero 6. I like any examples that first and foremost expose emotions, but still enable me to be in control and change them if I choose, like the Beyond Verbal technology or Affectiva’s technology.
I also like machines that monitor emotions to keep people safe, like car systems that detect emotions. A good example of an attempt at personality, which is the next frontier in AI design, is what happens when you ask Alexa to sing you a song.
What have you learned from building your own AI agent, Dakota, at Huge?
SK: We built Dakota to make our employees’ lives easier. We have a massive network of knowledge, and we operate with a self-management philosophy, so a chat UI was our attempt to let everyone access information and get assistance through the delivery mechanism most natural to us.
What we learned, aside from the fact that building true intelligence is hard, is that personality matters. Dakota is a cool, no-attitude helper that’s a bit hipster but highly competent.
What are the first steps in designing for emotional intelligence?
SK: We developed a framework for getting started with emotional AI. Essentially, you need to answer two questions first:
1. What is the user’s desire for an emotional interaction? Is it the right time and place, the right interaction and so on?
2. Does your company have permission to play in the emotional space? Can you credibly claim emotional intelligence for the user’s benefit, and what’s the cost of being wrong?
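Purely as a sketch, the two questions above could be encoded as a pre-flight check. The function and parameter names are my own illustration, not part of Huge's framework:

```python
def should_build_emotional_ai(user_wants_it: bool,
                              right_time_and_place: bool,
                              brand_has_permission: bool,
                              cost_of_error_acceptable: bool) -> bool:
    """Both framework questions must come back 'yes':
    1. the user desires an emotional interaction in this context, and
    2. the company can credibly play in the emotional space."""
    user_desire = user_wants_it and right_time_and_place
    permission = brand_has_permission and cost_of_error_acceptable
    return user_desire and permission

# A single 'no' anywhere means emotional AI is the wrong move
print(should_build_emotional_ai(True, True, True, False))  # False
```

The point of framing it this way is that the check is conjunctive: strong user desire does not compensate for a brand that lacks permission, and vice versa.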