BBC Research & Development

What we're doing

Spoken interfaces appear to be emerging as a class of device and service which manufacturers are committed to and people are actually using. Amazon’s Alexa is an obvious example, but other assistants such as Apple’s Siri are also gaining traction, and Google has announced its own competitor to Alexa and Siri.

These devices represent an opportunity for a kind of personal, connected radio which the BBC would be well placed to explore - we’re already a familiar voice in homes across the UK through our radio services, putting us in a unique position to explore the possibilities of engaging listeners in a two-way spoken conversation. Audiences gain by having well thought-out content on their devices which can inform, educate and entertain alongside the inevitable slew of commerce-driven applications.

Talking with Machines is a project which will explore this opportunity. We hope to learn enough to support other devices of this type, and to build a platform offering generic support for these kinds of device.

BBC R&D - The Unfortunates: Interacting with an Audio Story for Smart Speakers

Alongside this practical work, we’ll be experimenting with prototypes and sketches in hardware and software to explore the types of interaction and content forms that these devices allow. There’s also the intriguing possibility of developing a prototyping method based on humans roleplaying the part of the device, since interactions with these devices should resemble a natural human conversation. We have already done some work on a similar method for Radiodan, which we may look to build upon.
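As a rough illustration of that roleplay approach (sometimes called “Wizard of Oz” prototyping), the sketch below shows one way a session could be run and logged: a facilitator transcribes what the participant says, a researcher types the device’s reply, and each turn is recorded for later analysis. This is our own illustrative Python sketch, not Radiodan code or an existing tool; the file name and structure are assumptions.

```python
import json
import time


def run_session(log_path: str = "woz_session.jsonl") -> None:
    """Run a console Wizard-of-Oz session, logging each turn as JSON."""
    print("Wizard-of-Oz session - leave the participant line blank to finish.")
    with open(log_path, "a") as log:
        while True:
            heard = input("Participant said: ").strip()
            if not heard:
                break
            reply = input("Device (you) replies: ").strip()
            # Record the exchange with a timestamp for later analysis.
            log.write(json.dumps({
                "time": time.time(),
                "participant": heard,
                "device": reply,
            }) + "\n")
            print(f'DEVICE: "{reply}"')


if __name__ == "__main__":
    run_session()
```

Logging each exchange as a line of JSON keeps the transcripts easy to analyse afterwards when looking for interaction patterns.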

Talking with Machines has a few goals:

  • To develop a device-independent platform for supporting spoken interfaces (see the illustrative sketch after this list)
  • To build knowledge in R&D (and across the wider BBC) around spoken interfaces:
    • conceptual models, how to think about spoken applications
    • software development patterns
    • UX and interaction design patterns for spoken interfaces
    • what kinds of creative content work well for speech-based devices, and ideas around how to structure creative applications for this context
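To make the first of those goals a little more concrete, here is a minimal Python sketch of what a device-independent layer might look like: platform-specific adapters would translate requests from a particular device into a common intent object, and application code only ever deals with that common form. The names used here (Intent, SpeechResponse, SkillApp) are illustrative assumptions rather than an existing BBC API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Intent:
    """A device-independent representation of what the user asked for."""
    name: str                          # e.g. "PlayProgramme"
    slots: Dict[str, str] = field(default_factory=dict)


@dataclass
class SpeechResponse:
    """What the device should say back, however it renders speech."""
    text: str
    end_session: bool = True


class SkillApp:
    """Routes normalised intents to handlers, independent of the device."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[Intent], SpeechResponse]] = {}

    def intent(self, name: str):
        """Decorator registering a handler for a named intent."""
        def register(fn: Callable[[Intent], SpeechResponse]):
            self._handlers[name] = fn
            return fn
        return register

    def handle(self, intent: Intent) -> SpeechResponse:
        handler = self._handlers.get(intent.name)
        if handler is None:
            return SpeechResponse("Sorry, I didn't catch that.")
        return handler(intent)


app = SkillApp()


@app.intent("PlayProgramme")
def play_programme(intent: Intent) -> SpeechResponse:
    title = intent.slots.get("title", "the latest episode")
    return SpeechResponse(f"Playing {title}.", end_session=False)


# A per-device adapter would turn an incoming Alexa or Google request into
# an Intent before calling app.handle(); the handler code never changes.
print(app.handle(Intent("PlayProgramme", {"title": "The Unfortunates"})).text)
```

A real platform would need session state, audio playback directives and per-device response formatting on top of something like this, but the routing idea stays the same.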

BBC R&D - Prototyping for Voice: Design Considerations

Why it matters

There are links from this project to a few streams of work happening in R&D. As we start to understand the speech-to-text challenges and move towards building our own engine, there’s a lot of opportunity to work with our IRFS section’s Data team, who are working on similar projects. In a more general sense, there’s related work around speech and voice that we could use and push forward. We currently have a PhD intern who will be working on modelling the voices of BBC talent from large amounts of content, which could be interesting to play with in the context of spoken interfaces.

There are also potential overlaps with discovery work around finding BBC media and personalisation, choosing what to watch or listen to, and even structured stories (e.g. interrogating a news story). One of the stranger (but fun-sounding) suggestions we’ve had is a Socratic dialogue simulator!

The interactive radio aspects of these devices resemble work done in our North Lab on Perceptive Media and Squeezebox, and there’s a lot we can learn from that work.

There is a lot of interest in conversational UI and bots across the BBC, but this interest tends towards text-based, messenger-type interfaces. This project focuses on spoken interfaces, while learning from and contributing towards more general conversational UI work happening in the wider BBC.

The number of devices and platforms in the wild is expected to grow, and it’s not hard to imagine a future in which an entirely new voice-driven platform opens up, either on mobile or on dedicated hardware. The potential audience is large: anyone who has access to a device which allows for a spoken interface and can play audio.

Our goals

The short-term goal is to prototype services we could offer. We hope this stream of work will drive the development of a platform designed to provide support and applications for speech-driven devices in general. Once we have a good, solid prototype, we would like to develop standalone applications (or add capabilities to a core platform) based on the earlier exploratory work, and to add support for other speech-driven devices.

We're also hoping to develop a set of UX tools and techniques to help us think about and design voice UI.


BBC R&D - User Testing The Inspection Chamber

BBC R&D - The Unfortunates: Interacting with an Audio Story for Smart Speakers

BBC R&D - Singing with Machines

BBC R&D - Better Radio Experiences

This project is part of the Internet Research and Future Services section
