
Posted by BBC Research and Development

If you use a smartphone or computer as part of your daily life, chances are that how you view the world is shaped by recommenders. Scroll through your feed on social media, browse for a meal delivery on your phone, or search for a product with your favourite online retailer - in the background, machine learning algorithms are deciding what to show you next. Recommenders increasingly determine what we see and don’t see in our online lives - so it’s important to understand why and how they’re making those decisions.

The Ada Lovelace Institute has released a report exploring the use and ethics of recommendation systems in public service media organisations. The BBC partnered with Ada on the research, as part of the commitment to transparency and accountability in our Machine Learning Engine Principles.

A number of BBC staff building and working with recommenders contributed to the report. Here, James Fletcher, BBC Lead, Responsible AI/ML, is joined by Product Group’s Anna McGovern (Editorial Lead for Recommendations) and Alessandro Piscopo (Lead Data Scientist, Datalab) to talk about how we ensure the BBC’s recommenders reflect our public service values.

A Reithian organisation for the digital age

The BBC publishes thousands of pieces of content every day. Add in 100 years of archive content, and we surely have something brilliant for everyone. But it shouldn’t be a burden for our users to find what they want. So while human curation still determines most of what you see in our products, if you use iPlayer, BBC Sounds, BBC News or Sport, we’re increasingly using recommenders to suggest an article, podcast or programme that’s most relevant for you.

But how we approach recommenders is different from commercial platforms.

The BBC has spent the last 100 years bringing public service values to what we do - from the mission to inform, educate and entertain bequeathed to us by the first Director General, Lord Reith, to our commitment to trust, impartiality, diversity and universality. And the current Director General, Tim Davie, has set us on a path to be “digital first … a Reithian organisation for the digital age”.

So we need to innovate, but in a BBC way. Clearly reach and engagement are important for us - we are a universal service and we want to inform, educate and entertain everyone in the UK. But we have an opportunity and an obligation to build our values into our recommenders. We need to strike a balance between what is relevant to you and what is important for you to see. It’s a difficult challenge, but it’s at the heart of the BBC mission.

Public service recommenders at the BBC - the story so far

We’ve been tackling this challenge in earnest for some time. Building on previous work in BBC R&D, our Datalab team started building recommenders in-house for our major digital products.

Rather than being in opposition to human curation, we see recommenders as editorial decision-making at scale. Instead of editorial colleagues directly choosing what is presented to users, algorithms choose from hundreds of thousands of pieces of content, aiming to combine relevance with upholding BBC values.
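
To give a flavour of what that combination can look like in code - a minimal sketch, where the names, signals and weighting are hypothetical rather than the scoring any BBC product actually uses - a ranker might blend a model’s relevance estimate with an editorially derived value signal:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    content_id: str
    relevance: float     # e.g. predicted from the user's history, in [0, 1]
    value_signal: float  # e.g. an editorially derived importance score, in [0, 1]

def blended_score(c: Candidate, value_weight: float = 0.3) -> float:
    """Trade off personal relevance against public service value."""
    return (1 - value_weight) * c.relevance + value_weight * c.value_signal

def recommend(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Rank a large candidate pool and keep the top k."""
    return sorted(candidates, key=blended_score, reverse=True)[:k]
```

Raising value_weight shifts the balance from “what is relevant to you” towards “what is important for you to see”.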

The Ada report recommends creating and empowering integrated editorial and development teams, and we agree - from the start, the foundation of our approach has been constant collaboration between data scientists and editorial colleagues.

First, we looked at how editorial colleagues select content for promotion across our services, to give us an initial guide to what good machine curation would look like.

We implemented business rules to codify some of the subtleties of editorial decision making, allowing us to start bringing editorial intelligence to automation. Examples include rules to avoid areas of legal risk like contempt of court and defamation, or rules prioritising explainer content in News recommenders to safeguard impartiality.
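
In simplified form, rules like these act as a filter-and-boost stage applied on top of model scores. The sketch below is purely illustrative - the field names and boost factor are invented:

```python
def apply_business_rules(items: list[dict]) -> list[dict]:
    """Apply codified editorial rules to model-scored candidates."""
    kept = []
    for item in items:
        # Rule: drop content flagged as a legal risk,
        # e.g. contempt of court or defamation concerns.
        if item.get("legal_risk"):
            continue
        # Rule: boost explainers in News to safeguard impartiality.
        if item.get("content_type") == "explainer":
            item = {**item, "score": item["score"] * 1.2}
        kept.append(item)
    return sorted(kept, key=lambda i: i["score"], reverse=True)
```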

Editorial colleagues are also central to our review and feedback process. The typical development of recommendation engines involves the selection of a range of metrics – most commonly measuring accuracy, but also including things like diversity or novelty – which are optimised in an offline setting, i.e. using previous user interaction data, rather than being tested with audiences.
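
For instance, an offline evaluation replays past interaction data and computes metrics along these lines (a simplified sketch - real definitions of accuracy and diversity metrics vary):

```python
def precision_at_k(recommended: list[str], interacted: set[str], k: int = 10) -> float:
    """Accuracy proxy: share of the top-k items the user went on to consume."""
    return len(set(recommended[:k]) & interacted) / k

def genre_diversity_at_k(recommended: list[str], genre: dict[str, str], k: int = 10) -> float:
    """Diversity proxy: share of distinct genres among the top-k items."""
    return len({genre[item] for item in recommended[:k]}) / k
```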

To incorporate wider editorial concerns, models produced by data scientists are then reviewed from an editorial perspective, using a bespoke set of tools we developed in-house. The model is improved based on that feedback, and the cycle repeats, ensuring our values are safeguarded.

We’ve also explored hybrid models combining the best of human curation and algorithmic recommendation. One example is “personalised sort”, whereby editorial teams choose a collection of content and the recommender sorts this according to a user’s history. Approaches like this help us begin to address issues around universality, excellence and diversity as highlighted in the Ada report.
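
In outline - a hypothetical sketch, with the similarity function left abstract - personalised sort re-orders, but never adds to or removes from, the editorially chosen collection:

```python
from typing import Callable

def personalised_sort(
    collection: list[str],                    # content chosen by editorial teams
    user_history: list[str],                  # items the user has engaged with
    similarity: Callable[[str, str], float],  # e.g. from item embeddings
) -> list[str]:
    """Re-order an editorial collection by affinity with the user's history."""
    def affinity(item: str) -> float:
        if not user_history:
            return 0.0  # no history: stable sort keeps the editorial order
        return max(similarity(item, seen) for seen in user_history)
    return sorted(collection, key=affinity, reverse=True)
```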

A screenshot of some recommendations on BBC iPlayer

What’s next?

While we’re proud of the work so far, we agree with the Ada report that more needs to be done. As we innovate towards becoming a digital first BBC, it’s likely we’ll use recommenders more widely. And recommenders are just one aspect of a deeper approach to personalisation. We’re not just thinking about recommending existing content, but also about how we can use things like flexible and data-driven content to personalise the content itself for our users.

To do this in a public service way, collaboration will continue to be key - not just between editorial and technical colleagues, but bringing in UX, audience researchers and others. And it’s not just about putting people in the same room - we’re thinking about how to support collaboration with everything from data literacy and data ethics training to building tools to help editorial colleagues understand and monitor recommenders better.

We’ll build on work being done on personal data stores in BBC R&D to give users better ownership and control of their data, giving them more control over personalisation and letting them tell us what we can and can’t do with it. We also plan to explore explainable AI approaches to improve the interactions of all stakeholders with our recommendation engines.

Another piece of R&D work has already grown into a deeper exploration of how we can measure public service value. This will enable us to better specify the goals and measure the impact of our recommenders, avoid unintended consequences, and improve what we offer to users. But it’s a difficult challenge, not least because, as the report notes, “public service values are fluid, can change over time and depend on context.”

One example we’re exploring currently is the demographic diversity of the content we recommend. It’s one of our core public purposes to “reflect, represent and serve” the diversity of the UK, so we need to be sure that our recommenders aren’t homogenising what we offer to users, that we’re being fair to other stakeholders like content-makers and marginalised groups, and that we’re providing value to society more broadly.
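
One way to monitor this - again a simplified sketch, with the attribute labels and reference distribution purely illustrative - is to compare how a demographic attribute is distributed across a recommended slate against a reference such as the overall catalogue:

```python
from collections import Counter

def representation_share(recommended: list[str], attribute: dict[str, str]) -> dict[str, float]:
    """Distribution of a demographic attribute across a slate of recommendations."""
    counts = Counter(attribute[item] for item in recommended)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def representation_gap(slate: dict[str, float], reference: dict[str, float]) -> float:
    """Largest absolute deviation from the reference - a crude homogenisation alarm."""
    groups = set(slate) | set(reference)
    return max(abs(slate.get(g, 0.0) - reference.get(g, 0.0)) for g in groups)
```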

At a more technical level, we’re looking to centralise and expand the business rules that power our recommenders - things like the appropriateness of showing certain topics next to each other. A centralised approach means that any changes or new rules can be quickly and consistently applied across all our recommenders.
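
As a hypothetical illustration of a centrally maintained rule - the topic pairing below is invented - a shared registry would let every recommender pick up the same adjacency constraints:

```python
# Central registry of topic pairs that shouldn't sit next to each other,
# maintained once and consumed by every recommender.
SENSITIVE_ADJACENCIES: set[frozenset[str]] = {
    frozenset({"air disasters", "airline holiday deals"}),  # invented example
}

def adjacency_ok(slate_topics: list[str]) -> bool:
    """Check consecutive items in a slate against the shared registry."""
    return all(
        frozenset({a, b}) not in SENSITIVE_ADJACENCIES
        for a, b in zip(slate_topics, slate_topics[1:])
    )
```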

We’ll balance central control with more contextual nuance, taking things like relevance, context and tone into account when deciding whether to include a piece of content in a list of recommendations. This will require better metadata, but it also incentivises editorial teams to create that metadata so they can influence how content appears in recommenders.

And as we do all this, we’ll continue to engage with the wider public service ecosystem - learning from and sharing best practice as it evolves.

Talking about values

And one final thing, since we’re talking about values - we want to get better at talking about values.

We need to engage in a dialogue with our users and understand what they value and what they want from personalisation. Editorial colleagues need to be braver and better at articulating and making explicit the values they want incorporated in our data-driven experiences.

Teams involved with recommenders need to be comfortable having conversations about trade-offs, because with multiple goals for our recommenders (measuring both consumption and values) we may not be able to optimise for everything all at once. And we need to continue to be transparent with our users about those decisions.

Ultimately we’ll offer a suite of recommendation strategies across all our products, so that we don’t just inform or educate or entertain. Reithian recommenders for a digital first BBC will do all three.