BBC

Research & Development

Posted by BBC Research and Development

It's traditional at the end of the year to look back at the last 12 months - and we are no exception!

See some of our big achievements from the last year in this round-up - and as we're always working on the development of media and technology, it will also give you a head start on where things are headed over the next year or two...


Live Next Generation Audio trial at Eurovision

The Eurovision Song Contest began life partly as an experiment in live international broadcasting. So it felt appropriate to continue this tradition by using the event to experiment with the latest audio delivery technology, and having it hosted in the UK made it an attractive event for some internal technical trials. We trialled the option of choosing between the BBC One and Radio 2 commentaries using the user interface on the TV or a smartphone, demonstrating how the content can adapt to different devices.

We've been working with the wider industry to improve audio experiences for our listeners through interaction and personalisation of the audio presentation. For example, you may wish to listen to the narration in a different language, or reduce the level of background music. The experience can also be improved by immersive audio, a step up from surround sound: the audio adapts to the available speakers on the output device and can envelop the listener in sound from all three dimensions. We also announced a production tool which lets audio producers and engineers create immersive and personalised Next Generation Audio content and experiences.
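Underlying all of this is object-based audio: the pieces of a mix are delivered separately and assembled on the device, so a listener's preferences can be applied at playback. A minimal sketch of that idea (the object names, gains and placeholder signals here are our own illustrations, not a broadcast format):

```python
import numpy as np

def render_mix(objects, preferences):
    """Mix named mono audio 'objects' into one output,
    applying per-object gains chosen by the listener."""
    length = max(len(track) for track in objects.values())
    mix = np.zeros(length)
    for name, track in objects.items():
        gain = preferences.get(name, 1.0)  # default: leave the object unchanged
        mix[:len(track)] += gain * track
    return mix

# One listener picks the Radio 2 commentary and turns down the crowd.
t = np.linspace(0, 1, 48000)
objects = {
    "commentary_radio2": 0.5 * np.sin(2 * np.pi * 220 * t),   # placeholder signals
    "commentary_bbc_one": 0.5 * np.sin(2 * np.pi * 330 * t),
    "crowd": 0.2 * np.random.randn(48000),
}
preferences = {"commentary_bbc_one": 0.0, "crowd": 0.4}  # mute one, duck the other
output = render_mix(objects, preferences)
```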

5G at the King’s Coronation

Over recent years, news crews have increasingly relied on mobile networks to get pictures from the heart of the action; they offer a great way to get to places that you just can't reach with a satellite truck or cable. This means there is computer hardware and kit available to broadcast from anywhere you can get a mobile signal. While this is fine most of the time, at big events the large mobile networks can get saturated with data very quickly as everyone tries to upload content to social media and journalists compete to send their pictures back to news channels.

BBC News approached BBC Research & Development following our successful trial of 5G Non-Public Networks (NPN) at the Commonwealth Games last year and asked if we could help solve this issue. The challenge was a big one - could we provide a private 5G network that was available for the days leading up to the event and during the Coronation itself? We wanted high uplink capacity over a large area which we could offer to news broadcasters from around the world. It led to the largest temporary private 5G network of its kind ever deployed.

The trial was recognised at the broadcast technology industry's showcase for innovation, which "[pushes] the boundaries of live and linear content creation and delivery."

UHD HDR


BBC R&D ensured that the BBC iPlayer UHD service provided the very best pictures of the Coronation of King Charles III. We began working with BBC Events late last year to provide UHD HDR production expertise for the live Coronation programmes, and worked with the BBC Studios Production team, manufacturers, and our outside broadcast providers to ensure that the coverage of this historic event was of the highest technical quality.

Engineering, Science & Technology Emmy® Award for Hybrid Log-Gamma (HLG)


BBC R&D's Andrew Cotton was named as the BBC representative, amongst the four key developers, in the Engineering, Science & Technology Emmy® Award for Recommendation ITU-R BT.2100.
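For the curious, the heart of HLG as specified in ITU-R BT.2100 is a single transfer function, compact enough to sketch - the constants below are those published in the Recommendation:

```python
import math

# HLG OETF constants from Recommendation ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e):
    """Map normalised scene light e (0..1) to the non-linear HLG signal.
    Square-root segment for the darker range, log segment for highlights."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```

The square-root segment is what keeps HLG broadly compatible with standard dynamic range displays, while the logarithmic segment makes room for highlights.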

Human values

We want to understand people's complex and nuanced needs, helped by our ‘Human Values and Digital Wellbeing’ research projects, and through our experimental approach to qualitative research. As we realise the limits of traditional user research methods, we have been exploring new ways to engage with participants, encouraging them to share their most authentic perspectives through deliberation and discussion. We see a shift in the types of knowledge we have been collecting, away from tactical observations towards more strategic patterns in behaviour.

We have used this approach to explore a range of topics, including attitudes towards the future of travel and views on its societal impact.

Responsible AI development

The media industry is not alone in facing challenges to innovating responsibly with artificial intelligence. Working out how to tackle the ethical and practical questions is the role of a new research programme - BRAID, or ‘Bridging Responsible AI Divides’ - which brings insights from the arts and humanities to bear on today’s rapid technical development. As a core partner, the BBC hosted the BRAID launch event, bringing together a diverse community of policymakers, artists, academics and industry representatives. So what did we learn?

Responsible innovation

Data and technology affect all aspects of the BBC, from our editorial operations and outputs to how audiences can discover and access our services and content. The BBC needs to be informed early about the editorial, ethical and social implications of uses of data and emerging technologies in order to innovate confidently and use these technologies in the public interest. We work with experts across the BBC, academia and industry to deliver timely research that helps the BBC identify, understand and respond to these challenges.

Flexible media

Imagine a world where the BBC makes content that’s perfect - just for you.

Content that’s tailored for your circumstances, preferences and devices. Programmes that understand your viewing habits, and flex to fit. Experiences that reflect the things you love, and offer extra information just when you might need it.

Speech-to-text

Improvements in machine learning have allowed us to train our own speech-to-text system, Volt. It has found a myriad of uses across the BBC, from searching the archive to improving social media shareability.
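Volt itself is an internal system, but the pattern it enables - transcribe once, search everywhere - is easy to sketch with an open-source model standing in (here the `whisper` package; the file names are hypothetical):

```python
import whisper  # open-source stand-in; Volt is the BBC's internal system

model = whisper.load_model("base")

def transcribe(path):
    """Return a plain-text transcript for one audio file."""
    return model.transcribe(path)["text"]

def search(archive, query):
    """Naive archive search: transcribe each file and keep those
    whose transcript mentions the query. A real system would index
    transcripts once rather than re-transcribing per query."""
    return [p for p in archive if query.lower() in transcribe(p).lower()]

# hits = search(["ep1.mp3", "ep2.mp3"], "coronation")
```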

Media provenance

AI-generated media affects our confidence in the authenticity of what we consume online - we need to be able to answer the question: "Is this genuine?" Answering it means knowing where an image came from and what has happened to it since it was created. We are working with partners from across industries to produce an open provenance standard, and the latest version includes tools to help identify the origin of AI-generated images.
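A simplified sketch of the verification idea - a conceptual illustration only, not the actual standard's manifest format or API:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_provenance(image_path, manifest_path, publisher_key_bytes):
    """Check (1) that the manifest was signed by the claimed publisher and
    (2) that the image bytes still match the hash recorded at signing time."""
    with open(manifest_path, "rb") as f:
        manifest = json.load(f)
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    key = Ed25519PublicKey.from_public_bytes(publisher_key_bytes)
    try:
        key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False  # manifest tampered with, or not from this publisher
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == manifest["claims"]["image_sha256"]
```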

Adaptive podcasting

What if you could make a podcast that knew a little bit about your user or their surroundings - what are their interests, are they listening at home or on the go? How might the date or time of day change the way the podcast sounds - what are the light levels of the room, is it a sunny or rainy day outside? What if the story could lengthen or shorten depending on how much time the user has to listen - are they on a long walk or a quick trip to the shops? Providing more personal audio experiences could help the BBC to better meet audience needs, but giving each listener programmes and content that feel as if they have been made specifically for them is not without its challenges.

As we've been exploring the concept of individualised and contextualised audio experiences, we have open-sourced our Adaptive Podcasting codebase and an accompanying learnings document.
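The open-sourced player has its own authoring format, but the flavour of the logic is easy to sketch (the segment names, conditions and context fields below are hypothetical):

```python
from datetime import datetime

# Candidate segments: (name, duration in seconds, condition on listener context)
SEGMENTS = [
    ("intro", 30, lambda ctx: True),
    ("morning_greeting", 10, lambda ctx: ctx["hour"] < 12),
    ("deep_dive", 600, lambda ctx: ctx["minutes_free"] >= 15),
    ("quick_summary", 120, lambda ctx: ctx["minutes_free"] < 15),
    ("outro", 20, lambda ctx: True),
]

def build_episode(ctx):
    """Assemble an episode from the segments whose conditions match
    this listener's context (time of day, time available, and so on)."""
    return [name for name, _, condition in SEGMENTS if condition(ctx)]

context = {"hour": datetime.now().hour, "minutes_free": 10}
print(build_episode(context))  # e.g. ['intro', 'quick_summary', 'outro']
```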

BBC News Labs is an incubator charged with driving innovation for BBC News. We explore how new tools and formats affect how news is found and reported, and share insights into our new ways of working.

Future technology trends

Late in 2022, we began to compile a list of technologies that we should be paying attention to, and to make recommendations to the wider BBC about their adoption. We interviewed twenty-two people from the fields of science, economics, education, technology, design, business leadership, research, activism, journalism, and many points in between. We spoke to people from both inside and outside the BBC, from around the world. Each of them has a unique view on the future, and our report teases out the common themes from the interviews and compiles their ideas about how things might come to be in the near future.

Low carbon graphics

BBC Research & Development's Blue Room monitors consumer technologies' impact on the BBC and its audiences. This includes evaluating modern televisions and their features, including energy consumption.

An initial question - 'how much energy do televisions use?' - led us on a journey to develop and implement a new idea we call 'Lower Carbon Graphics' (LCGfx), which we believe has already saved energy in homes across the UK. Many modern televisions include energy-saving features as standard, and we wanted to see if BBC content could take advantage of them to reduce energy consumption.

Innovation Labs

We are evolving a programme of Innovation Labs in a model we call the 'Labs Framework', a way of bringing people, ideas, and technology together. We believe the new model will deliver significant benefits to our audiences and help us adapt our world-beating services for a changing and competitive world in which audiences are fragmented and budgets are constrained.

Streaming latency

Viewers watching live television over the internet today typically see the action with a delay of 30 seconds or more. In future, it won’t have to be that way. We’re working on reducing the ‘latency’ of internet streaming to match that of broadcast by streamlining the encoding and distribution chain and using new techniques enabled by the MPEG DASH and CMAF standards. We’re putting together an end-to-end system for prototyping new low-latency approaches and testing their performance. We’re also working to understand the network conditions that viewers experience and to model how low-latency streaming will perform under those conditions.

This year we made some enhancements to our low-latency test streams, including adding live versions.
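The arithmetic behind the chunked CMAF approach is simple enough to sketch - the figures below are illustrative assumptions, not measurements of our chain:

```python
def glass_to_glass_latency(segment_s, chunk_s=None,
                           encode_s=2.0, network_s=1.0, buffered_units=3):
    """Rough latency model for segmented streaming.
    Without chunks, a segment can only be sent once fully encoded,
    and players typically buffer several whole segments."""
    if chunk_s is None:
        return encode_s + network_s + segment_s + buffered_units * segment_s
    # Chunked CMAF: each chunk is forwarded as soon as it is encoded,
    # so buffering is measured in chunks rather than whole segments.
    return encode_s + network_s + chunk_s + buffered_units * chunk_s

print(glass_to_glass_latency(segment_s=6))               # ~27s: the familiar gap
print(glass_to_glass_latency(segment_s=6, chunk_s=0.5))  # ~5s: closer to broadcast
```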

Public service internet

Over the past twenty years, television and radio have been complemented by the internet, which has become a vital platform for the delivery of public services - not least because it can be used for much more than delivering radio and television programmes to connected devices, important though that is. The two-way nature of the network creates a very different space from the one-way broadcasting model, and dynamic publishing tools like the World Wide Web allow the internet to host material that engages audiences in new ways.

Today, the scope for using the internet for public benefit rather than to serve purely commercial or government interests has grown. We are thinking about what this might mean and how the BBC could help create an internet that more easily supports the online ambitions of public service organisations of all types, around the world.

Artificial intelligence for production

How could computer vision and machine learning assist in the production of television? One area where we have experimented with these ideas is the Watches (Springwatch, Autumnwatch and Winterwatch) series of programmes produced by the BBC Studios Natural History Unit. Here, we have helped monitor the video and audio feeds coming from the many cameras that the production team place out in the wild.
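The production system uses machine learning, but the simplest illustration of automated feed monitoring is classic frame differencing - a crude, hypothetical stand-in for what we actually deploy:

```python
import cv2

def activity_in_feed(video_path, threshold=25, min_changed_pixels=500):
    """Flag frames where something moved, by differencing consecutive
    grey-scale frames of a wildlife camera feed."""
    capture = cv2.VideoCapture(video_path)
    events, previous, frame_index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        grey = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if previous is not None:
            delta = cv2.absdiff(previous, grey)
            _, mask = cv2.threshold(delta, threshold, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(mask) > min_changed_pixels:
                events.append(frame_index)  # likely animal activity here
        previous = grey
        frame_index += 1
    capture.release()
    return events
```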

This year we created Wing Watch, which takes some of the data we provide to the production team out to the audience, enhancing their experience of the wildlife camera feeds by adding data and highlights to the stream.

Improving picture quality using machine learning

The BBC has always pushed boundaries to achieve better video quality for both streaming and broadcasting - one example is the BBC’s contribution to the Ultra High Definition (UHD) standard. Many TVs now display broadcasts at 100Hz or more, yet broadcast content is generally recorded at a lower frame rate. New TVs therefore deploy frame interpolation algorithms to ensure that such content is played at the required frame rate.

One problem is that interpolated frames produce a lot of motion blur on some TVs, and this detracts from the programme. Traditionally, the interpolated frame is generated by computing the motion between frames and using this information to warp the input frames. This approach has worked very well in the past, but handling large motion, changes in brightness, and occlusions (where a pixel appears in one frame but not the other) is problematic. Artificial intelligence interpolation algorithms mitigate this problem, and the model we have developed handles challenging sequences well.
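To see why interpolation is hard, consider the simplest possible interpolator - a linear blend of the two neighbouring frames (an illustrative numpy sketch, not our model):

```python
import numpy as np

def blend_midframe(frame_a, frame_b):
    """Naive interpolation: average the two neighbouring frames.
    Fine for static scenes; for large motion every moving edge
    appears twice at half strength ('ghosting')."""
    return 0.5 * frame_a.astype(np.float32) + 0.5 * frame_b.astype(np.float32)

# A bright object jumps from column 10 to column 40 between frames:
a = np.zeros((1, 64)); a[0, 10] = 1.0
b = np.zeros((1, 64)); b[0, 40] = 1.0
mid = blend_midframe(a, b)
print(mid[0, 10], mid[0, 25], mid[0, 40])  # 0.5 0.0 0.5 - two ghosts, nothing in between
```

A motion-compensated interpolator would instead place the object at column 25; learned models go further, coping with the occlusions and brightness changes that defeat classical motion estimation.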

Viewing recommendations

Modern audiences, used to streaming platforms, expect content to be high quality but also relevant to their personal taste. Editorial teams juggle a variety of objectives in creating recommendations - for example, ensuring that BBC iPlayer content is diverse and aligns with BBC values while promoting specific shows to specific audiences. A lot of thought goes into this curation, but it can be difficult to anticipate what audiences end up seeing, because personalised recommendation systems, at least in part, reorganise the content. Now a tool developed in BBC R&D can visualise the trade-offs recommender systems make, where improving recommendations for one group of users might reduce performance for another.
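The kind of per-group comparison such a tool visualises can be sketched in a few lines - the data, group labels and metric here are synthetic illustrations:

```python
import numpy as np

def hit_rate(recommended, relevant):
    """Fraction of users for whom at least one relevant item was recommended."""
    hits = [len(set(recs) & set(rel)) > 0 for recs, rel in zip(recommended, relevant)]
    return float(np.mean(hits))

def per_group_performance(recommended, relevant, groups):
    """Score a recommender separately for each audience group, so a change
    that helps one group but hurts another becomes visible."""
    scores = {}
    for group in set(groups):
        idx = [i for i, g in enumerate(groups) if g == group]
        scores[group] = hit_rate([recommended[i] for i in idx],
                                 [relevant[i] for i in idx])
    return scores

# Two model variants may trade these per-group scores off against each other:
recs = [["drama1"], ["news2"], ["comedy3"], ["drama1"]]
watched = [["drama1"], ["drama1"], ["comedy3"], ["news2"]]
groups = ["under_35", "under_35", "over_35", "over_35"]
print(per_group_performance(recs, watched, groups))  # {'under_35': 0.5, 'over_35': 0.5}
```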