BBC Research & Development

Posted by Tristan Ferne

In the first part of this series I wrote about why we think we should explain how Artificial Intelligence and Machine Learning work, what they’re good and bad at, and what their effects can be. In this second part I will look at some different ways of explaining AI, some of the places in the world where we could intervene to explain it, and some projects we are running at the BBC in this space.

What can we explain?

Assuming we do want to explain AI better, what can we do? Which bits of the systems should we explain? We think there are a number of different levels of explaining we could do.

  • We can try to explain how AI or ML works in general, like the way this technology “learns” and relies on training data, or how it makes predictions based on new inputs. This would probably work best in an educational context.
  • We can explain how the AI in a particular system works. What is the dataset, what kind of ML technique does it use and how or what features are most salient to the results?
  • We can explain how an AI system came to a particular result, prediction or decision and what factors led to that. This could be embedded in a system to explain every decision it makes.
  • Or, minimally, we could ignore how it all works and just explain what the AI does and what its effects or results are.

And these are the 3 key questions we've used throughout our work:

  • What's important to explain about AI?
  • What's hard to explain about it?
  • Where can we do this explaining?


Six ways to explain AI (and some garden birds)

We've been thinking about different aspects of AI systems that could be explained and different approaches to explaining. Here are six that we think are important and that could be particularly effective.

  1. Creating mental models
  2. What it does and why
  3. Showing uncertainty
  4. Showing the features used
  5. Explaining the data
  6. Giving control

To illustrate these approaches, we built a prototype: “A Machine’s Guide to Birdwatching” is a Machine Learning-powered application that attempts to identify common UK garden birds in photos. We built this prototype to explore ways to explain AI, using a relatively uncontroversial topic, in the visual domain (which seems to be easier to explain) and with a problem that’s understandable to most people. It builds on work done by colleagues on identifying birds and other animals for the BBC’s Springwatch series. Ben has written much more about how the application works.


Our bird identification prototype was designed from the ground up to explain things. You can choose a picture to analyse or upload your own pictures. As well as attempting the identification, it lets you find out more about why it identified the bird as it did, and you can start to explore how it works and build up your mental model.


1. Build better mental models

We think that much of attempting to explain AI is about helping people make a mental model of the system to understand why it does what it does. Can we show how different inputs create different outputs, or let users play around with “what if” scenarios? Letting people explore how adjusting the system affects output should help them build a mental model, whether that’s changing the input data or some parameters or the core algorithms.
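One way to support this kind of “what if” exploration is to show, side by side, how the model’s predictions change when the input changes. The sketch below is illustrative rather than the prototype’s actual code: `classify` stands in for any function that returns label probabilities for an image.

```python
# A minimal sketch of "what if" exploration: compare the model's predictions
# for an original image and a user-modified version of it. `classify` is an
# assumed stand-in for any function returning a {label: probability} mapping;
# it is not the prototype's real code.

def compare_predictions(classify, original_image, modified_image, top_n=3):
    """Show how the top predictions shift when the input changes."""
    def rank(scores):
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    before = rank(classify(original_image))
    after = rank(classify(modified_image))

    print("Before:", ", ".join(f"{label} ({p:.0%})" for label, p in before))
    print("After: ", ", ".join(f"{label} ({p:.0%})" for label, p in after))
```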

In our prototype, users can upload their own photos of birds, see the AI’s predictions and learn more about how and why it came to those answers. You can even play with uploading things that aren’t birds…

The AI misidentifying Chris Packham

2. What it does and how it does it

We can try to explain some of the basics of machine learning and AI, like explaining how this technology is based on large amounts of data and learns by example during training. Or show how AI makes predictions and classifications, but only based on what it has learned from. We could also explain what the system uses AI to do. What are its limitations? Where doesn’t it work so well and what might go wrong? “Model cards”, for example, are designed to explain what AI technology is used, where it performs best or worst, or what its expected error rate is. These model cards are aimed at developers and interested professionals, but what might be the consumer equivalent?
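Model cards are essentially structured summaries. As a rough, hypothetical illustration of the kind of information they capture (the field names below are ours, not a standard format), the content might start from something like this:

```python
# A rough, illustrative sketch of the kind of information a model card
# records. The field names and example values are hypothetical; they are not
# taken from any published model card format.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelCard:
    name: str
    intended_use: str                    # what the system is meant to be used for
    training_data: str                   # where the data came from, how it was collected
    performance_notes: str               # where it performs best or worst
    known_limitations: List[str] = field(default_factory=list)
    expected_error_rate: Optional[float] = None   # e.g. error rate on a held-out test set

card = ModelCard(
    name="Garden bird identifier (illustrative)",
    intended_use="Identifying common UK garden birds in photos",
    training_data="Labelled photographs of common UK garden bird species",
    performance_notes="Works best on clear, close-up daylight photos",
    known_limitations=["Cannot recognise species outside its training set"],
)
```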

3. Show (un)certainty

Most current AI systems predict a result for a given input, based on previous examples they have learned from. We should show where there is uncertainty in the outputs, and/or indicate the accuracy. This could be a straightforward confidence label, or a ranking of possible predictions. In the bird ID app we show other high-scoring predictions to users to help them understand what’s going on.

Bird identification tool

In our prototype we used words to represent our confidence in the results; other systems commonly show confidence as a numerical percentage. But we also show several top possible predictions, not just the top result. This might show that the prediction was difficult for anybody (all these birds look similar to humans too) or particularly hard for the system (humans can easily see the difference, but the AI can’t distinguish between them). Showing the results might also help the user make a better guess at the correct answer from the possibilities shown, with the AI acting more as an assistive technology.
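As a rough sketch of the kind of mapping involved, turning raw scores into a verbal confidence label plus a short list of alternatives might look like the following. The thresholds and wording are illustrative; the post doesn’t give the exact ones used in the prototype.

```python
# A sketch of turning prediction scores into a verbal confidence label and a
# short list of alternative answers. The thresholds and phrasing here are
# illustrative only, not the prototype's actual values.

def describe_predictions(scores, top_n=3):
    """scores: {label: probability}. Returns a human-readable summary."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    best_label, best_score = ranked[0]

    if best_score > 0.9:
        confidence = "very confident"
    elif best_score > 0.7:
        confidence = "fairly confident"
    elif best_score > 0.4:
        confidence = "not very confident"
    else:
        confidence = "really unsure"

    alternatives = ", ".join(f"{label} ({p:.0%})" for label, p in ranked[1:])
    return f"I'm {confidence} this is a {best_label}. Other possibilities: {alternatives}."

print(describe_predictions({"robin": 0.82, "chaffinch": 0.11, "dunnock": 0.04, "wren": 0.03}))
# I'm fairly confident this is a robin. Other possibilities: chaffinch (11%), dunnock (4%).
```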

4. Understandable features

Can we find human-understandable features used by the AI, and can we find useful ways to show them? What are the features that are most important in the model and can we use this data to generate human-readable or understandable explanations? This technique is highly coupled to the problem domain: things that are visual are probably easier to show than more abstract problems.


Neural network analysis of the images

Focus and detail from the neural network

In our bird application we use some visualisation techniques to display what the AI focuses on and what detail it “sees” to make its prediction. In the example above you can see that it is focusing on the bird, not the fence post, and particularly on the beak and eye area. As a user you can then start to think about whether it is focusing on an identifying feature of the bird, or some extraneous detail.
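The post doesn’t specify which visualisation technique the prototype uses, but a simple gradient-based saliency map gives a flavour of the idea: it highlights the pixels the model’s prediction is most sensitive to. The sketch below assumes a PyTorch image classifier (`model`) and an already-preprocessed input tensor; it is one common approach, not necessarily the prototype’s.

```python
# A sketch of a gradient-based saliency map, one common way to show which
# pixels a prediction is most sensitive to. This is not necessarily the
# technique used in the bird prototype; `model` is assumed to be any PyTorch
# image classifier and `image_tensor` a preprocessed (1, C, H, W) input.

import torch

def saliency_map(model, image_tensor):
    model.eval()
    image_tensor = image_tensor.detach().clone().requires_grad_(True)

    scores = model(image_tensor)          # (1, num_classes) logits
    scores.max().backward()               # gradient of the top score w.r.t. the pixels

    # Largest absolute gradient across colour channels: bright values mark the
    # pixels the prediction is most sensitive to.
    saliency, _ = image_tensor.grad.abs().max(dim=1)
    return saliency.squeeze(0)            # (H, W) map, ready to overlay on the photo
```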

5. Explain the data

All AI is based on training data and we can try to explain more about the data used to give insight into the system. Where does the data come from and how was it collected? What are the features used for training? How much data is there? Might it be biased, and towards what?

Bubble visualisation of the dataset

Here we show an overview of the bird dataset and explain its make-up. The area of each bird image is proportional to the number of examples of that bird in the dataset. As you can see, our dataset appears to be biased towards ducks, starlings and sparrows! We used birds as an example so that we could start to show issues like bias and dataset composition on a relatively neutral subject.
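The per-class counts behind a view like this are easy to compute. The sketch below assumes the labels are available as a simple list of strings, one per training example, which is not necessarily how the prototype stores its data.

```python
# A sketch of summarising dataset composition: count the examples per class
# and flag anything heavily over- or under-represented. `labels` is assumed
# to be one label string per training example; the prototype's real data
# format is not described in the post.

from collections import Counter

def summarise_dataset(labels, imbalance_factor=2.0):
    counts = Counter(labels)
    mean = sum(counts.values()) / len(counts)

    for species, n in counts.most_common():
        flag = ""
        if n > imbalance_factor * mean:
            flag = "  <- over-represented"
        elif n < mean / imbalance_factor:
            flag = "  <- under-represented"
        print(f"{species:12s} {n:5d} examples ({n / len(labels):.1%}){flag}")

summarise_dataset(["duck"] * 40 + ["starling"] * 25 + ["sparrow"] * 20 + ["wren"] * 5)
```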

6. Give control…

Testing that we carried out on explaining TV programme recommendations showed that people appreciate being given some control over the AI: being able to tweak parameters, see and edit the personal data used by the AI, or provide feedback on the recommendations to tune the system. Being able to control the system should also help build better mental models.

Mock-up of a controllable recommendations interface
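As a sketch of the underlying idea (this is not the BBC’s recommendation system), giving control can be as simple as exposing the weights the recommender uses to combine its signals, and re-ranking whenever the user adjusts them, for example via sliders like those in the mock-up.

```python
# A sketch of one way to make recommendations controllable: expose the weights
# used to combine scoring signals and re-rank whenever the user changes them.
# The signal names and scores are hypothetical, not the BBC's recommender.

def recommend(programmes, weights, top_n=5):
    """programmes: list of dicts with per-signal scores in [0, 1]."""
    def score(programme):
        return sum(weight * programme[signal] for signal, weight in weights.items())
    return sorted(programmes, key=score, reverse=True)[:top_n]

programmes = [
    {"title": "Familiar wildlife series", "similar_to_history": 0.9, "popularity": 0.8, "novelty": 0.2},
    {"title": "New experimental drama",   "similar_to_history": 0.3, "popularity": 0.4, "novelty": 0.9},
]

# The user turns the "novelty" slider up and "popularity" down in the interface.
user_weights = {"similar_to_history": 0.4, "popularity": 0.1, "novelty": 0.5}

for programme in recommend(programmes, user_weights, top_n=2):
    print(programme["title"])
```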

Other resources we’ve found particularly useful in this work, with its focus on explaining AI to users, include several published sets of design guidelines.

Places to explain


A wider question we have been asking is where in the system (or the world) we should be explaining things, for the best possible understanding. By thinking of the world as a set of layers, we can think about how the whole might be changed.


Layers of explanation diagram

On the “inside” of this diagram are the types of things I’ve written about so far in this article…


1. Ways to inspect and understand what is happening within your AI system; these are aimed at engineers and more technical users of AI systems.

2. Explaining what’s going on in the interface to the AI. This can be hard, though: it arguably makes the application harder to use and adds friction. And that’s assuming there is a legible interface in the first place, given the invisibility of AI.

3. Explaining in the context around the AI; you could write a tutorial about the AI in your system or include explanations in the marketing material.


But beyond a particular application or service, there are other layers in society, culture and the world where we could explain AI.

  • We could explain AI better in our education system; whether that’s school, college or later in life.
  • We could aim for AI to be explained and represented more accurately when it is featured or mentioned in the media, whether that’s in the news or in popular culture like films.
  • The language used to talk about AI affects how it’s generally understood.


I think that, as we go outwards through these layers of abstraction, the methods of explaining become further removed from any particular instance of the technology and slower to take effect, but potentially more influential in the longer term.


What we’re doing at BBC R&D

We think it's particularly important to help younger people understand AI; they're going to be more affected by it over their lives than most of us. To that end we have been prototyping some games and videos that could be used in educational settings, and we’ve been doing outreach work with schools to see how well these work.

A humanoid robot in front of a blackboard of equations


Here is an example of AI in the media. You’ve probably seen images like this used to represent AI. But it’s not very accurate: most AI isn’t robots, and it certainly doesn’t write down equations and ponder them. Other frequently used images of AI contain glowing blue brains or gendered robots. We think we could do better than that. We’re currently working with partners towards making a better library of stock photos for AI: images that are more accurate, more representative and less clichéd. Hopefully, eventually, this sort of intervention might slowly seep into the world and change mental models and our understanding of AI in a lasting way.

AI-generated image of machine learning
VQ-GAN, an AI system for generating images, created this when prompted with “machine learning”

The BBC has also developed internal guidelines for AI explainability, focusing on how to make machine learning projects more inclusive, collaborative and interdisciplinary, and ensuring that all our colleagues can feed into the design of our AI-based systems. We have recently published these Machine Learning Engine Principles, which describe how to make responsible and ethical decisions when designing and building AI systems.

With our commitment to responsible AI and to explaining our use of AI in plain English, we have been exploring innovative ways of explaining AI. We want to raise awareness of the different ways AI can be explained, and kickstart this in different places around the BBC and elsewhere.

This post is part of the Internet Research and Future Services section
