
What If Our Machines Had Emotions?

Take part in our user survey: how do you feel about Alexa?

Published: 31 January 2019

BBC R&D have launched a user survey to find out how people feel about the concept of machines being able to express emotions. We hope the survey will help us understand how people in the UK are using voice technologies in their daily lives and the kinds of emotions that people experience while interacting with current voice assistants (e.g. Alexa, Google Assistant or Siri). We’re looking to gather responses from a broad range of voice device users and non-users.

(Update: The survey has now closed and we have removed the links to it from this post.)

Why Now?

In early 2018 it was reported that millions of people in the UK were already using smart speakers, with that figure forecast to rise to 12.6 million by 2019. With devices like Amazon Echo and Google Home featuring high on the gift list for many of us last Christmas, that figure may well be considerably higher. So, many of us are now living with voice-driven devices, both in our homes and built into our mobile phones, but why is this important to the BBC?

A lot of BBC content is well suited to consumption over smart speakers (BBC News, BBC Sport, BBC Weather, BBC Sounds), and work is underway to improve the delivery, discovery and navigation of this content on voice-driven devices. As these devices become more widespread, more of us are using them to consume content, and as a public service broadcaster, it’s our duty to ensure that we’re providing great interactive experiences and new forms of content that are optimised for the wide range of devices our audiences use.

Here at BBC R&D, we are always looking to the future, and since 2016 our Talking with Machines project has been exploring new forms of voice-interactive content. This work led to the development of Orator, a set of tools for writing and playing interactive stories on voice devices, now used and extended by the BBC Voice team for products such as the CBeebies Alexa skill. Orator was originally created for R&D’s work on The Inspection Chamber - ask Alexa to open it! Further work led to our recent release of the interactive drama The Unfortunates, which you can try for yourself on BBC Taster or by asking Alexa.

As the Talking with Machines work continues (exciting stuff happening – watch this space!), a few of us started thinking about the sorts of experiences that could be possible if voice devices were able to express their own emotional states. We’ve seen moves in this direction with Amazon’s Speech Synthesis Markup Language (SSML), which gives Alexa’s voice some expressive elements such as emphasis and intonation. However, 95% of communication is non-verbal: it’s body language, facial expressions and non-verbal vocalisations (e.g. hesitation, laughter and intakes of breath) that speak volumes in human communication. This leads us to wonder about non-verbal expression for emotional machines.
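To make the SSML idea concrete, here is a minimal sketch (in Python, our own illustration rather than anything from the Talking with Machines codebase) of an Alexa skill response that uses SSML emphasis and prosody tags; the spoken text is an invented example:

    # A minimal sketch of an Alexa skill response using SSML to add
    # expressive elements to the synthesised voice. The spoken text is
    # invented; the envelope follows the Alexa Skills Kit JSON format.
    import json

    ssml = (
        "<speak>"
        "I looked everywhere, but "
        '<emphasis level="strong">I could not find it.</emphasis> '
        '<prosody rate="slow" pitch="low">Sorry about that.</prosody>'
        "</speak>"
    )

    response = {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }

    print(json.dumps(response, indent=2))

Even with tags like these, the expressiveness is confined to the spoken words themselves - which is exactly the limitation that points us towards non-verbal channels.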

We know what you’re thinking - machines don’t have emotions, right?

Well what if they did…

Imagine a smart speaker that gets embarrassed when it can’t find things; feels tired after a busy day; is saddened by bad news; gets excited about visitors; or feels the cold.

Perhaps emotionally expressive devices could provide more useful cues for users - if so, this opens up a wide range of possible applications. It could offer a supportive form of interaction to help tackle isolation among the elderly, and raise awareness around looking after ourselves and living well (“Alexa’s getting tired… oh my - look at the time! I best be off to bed”). It could also ease the frustration we sometimes feel when using technology: if we can see that a machine is working hard to complete a task for us, or that it has failed to understand a command, we might find a little more patience and stop wanting to throw the device out of the window!

It could also be a useful indicator that a child has spent too much time with their tech - Apple addressed this with their Screen Time tool, but Janet Read, professor of child computer interaction at the University of Central Lancashire, suggests that if computers were to behave more like humans:

“Maybe the computer could have a hissy fit, or it could slow down, or stop interacting or be naughty. That kind of interaction could be more helpful to a child’s development because it reflects our own instincts and behaviours. If the computer decides that 20 minutes is enough, or that we seem too tired to play, it could just shut down – and, in doing so, help us to learn what the right time to switch off feels like.”

As voice devices get better at natural language processing and sentiment analysis, we’ll see new applications for smart speakers emerge that move beyond simple command and control (e.g. using our voice as a remote control for radio) towards intelligent systems that we can talk with more naturally, as in human conversation, and that respond and adapt to how we interact.
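As a rough illustration of the sentiment-analysis side of this, here is a short sketch using NLTK’s off-the-shelf VADER analyser (our choice of library for the example, not one named in this post); the utterances are invented:

    # Lexicon-based sentiment scoring of user utterances with NLTK's
    # VADER analyser. 'compound' is a normalised score in [-1, 1].
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
    analyser = SentimentIntensityAnalyzer()

    for utterance in [
        "Play something cheerful, today has been brilliant!",
        "Just give me the headlines, I'm exhausted.",
    ]:
        scores = analyser.polarity_scores(utterance)
        print(f"{scores['compound']:+.2f}  {utterance}")

A real system would need far more than a polarity score, of course, but even this crude signal hints at how a device might pick up on a user’s mood from what they say.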

At BBC R&D we believe that, along with a shift to more human-like interactions, voice devices and other systems will get to a point where they can accurately sense mood and emotion and respond accordingly. Imagine a voice assistant that could read how you were feeling and change its behaviour, tone of voice or functionality to better suit your mood, in the way humans do: what if Alexa could give fast, to-the-point answers when you’re in a rush; interact cheerfully and playfully when you’re feeling upbeat; and be soothing, restrained and low-key when you’re feeling tired or blue?
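Here is a toy sketch of what that mood-to-behaviour mapping might look like (all mood labels and styles are invented for illustration):

    # A toy mood-adaptive response policy: map a detected mood to a
    # response style. Labels and styles are invented placeholders.
    from dataclasses import dataclass

    @dataclass
    class ResponseStyle:
        verbosity: str  # "terse" | "normal" | "chatty"
        tone: str       # suggested tone of voice

    STYLES = {
        "rushed": ResponseStyle(verbosity="terse", tone="fast and to the point"),
        "upbeat": ResponseStyle(verbosity="chatty", tone="cheerful and playful"),
        "tired":  ResponseStyle(verbosity="normal", tone="soothing and low-key"),
    }

    def style_for(mood: str) -> ResponseStyle:
        # Fall back to a neutral style for moods we don't recognise.
        return STYLES.get(mood, ResponseStyle("normal", "neutral"))

    print(style_for("tired"))

The hard part, naturally, is detecting the mood reliably in the first place - which is one reason we want to hear from users before building anything.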

We think this is a really interesting concept and are excited about exploring this area with users. Your responses and insights will help inform and shape our work on Emotional Machines. The survey results will feed into a series of upcoming workshops where we’ll explore how machines might express different emotions through the modalities of gesture, sound, light and colour.
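As a hint of what the light-and-colour modality might involve, here is a small sketch mapping emotional states to a lamp colour and pulse rate (all of the emotions and values are invented placeholders, not workshop outputs):

    # A sketch of driving the light-and-colour modality from an
    # emotional state: each emotion maps to an RGB colour and a pulse
    # rate. All values are invented placeholders.
    EMOTION_LIGHTS = {
        "embarrassed": {"rgb": (255, 80, 80), "pulse_hz": 0.5},
        "tired":       {"rgb": (80, 80, 160), "pulse_hz": 0.2},
        "excited":     {"rgb": (255, 200, 0), "pulse_hz": 2.0},
    }

    def light_cue(emotion: str) -> dict:
        # Default to a steady white light for unknown emotions.
        return EMOTION_LIGHTS.get(emotion, {"rgb": (255, 255, 255), "pulse_hz": 0.0})

    print(light_cue("excited"))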

Following on from the workshops, we’ll be building some prototypes and taking them into the wild for user testing - so watch this space for updates!

  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the BBC. Our work focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
