BBC Research & Development

Posted by Tristan Ferne on

As part of a new project we're researching new formats for online news. Here's the project brief that I originally wrote...

"This project will develop new re-usable online formats for Βι¶ΉΤΌΕΔ news. We will explore alternatives to current articles and online video reports for the new audiences, new technologies and new services in the world today. We believe, if successful, this work could generate new formats that drive growth and increase the distinctiveness of Βι¶ΉΤΌΕΔ news in an increasingly competitive and fragmented market."

"The predominant form of online news hasn't really changed since news first went online and is still pretty much the traditional 800-word newspaper article. There have been some innovations such as listicles and scrollytelling and some interesting experiments with embedded interactive data or atomising articles but no fundamental shifts in the format. Similarly, the TV or radio news report has changed little since moving online, beyond a change from horizontal to vertical rectangles and a continuing decrease in duration. It is still a linear watching experience."

"With increasing disaggregation of the news this may be a way to make our content more distinctive - either to make it more recognisable when consumed on other platforms, or to make more compelling reasons for people to visit bbc.co.uk."

To kick off the project I've been researching what new online formats for telling news stories exist. Ideally these should be things that aren't legacies from print or broadcast and that try to use the affordances of digital.

Things like this piece on what's going on with Russia and Trump, this one that animates as you scroll the article, this one from Bloomberg that pulls in business data alongside a story, this one which integrates a video analysis of a fracas in Washington, or this card-style experiment from BBC News.

I've tried to classify what we found into some clusters:

  • Video (short, vertical and with captions); pioneered by AJ+ and NowThis
  • Horizontal stories; swipeable cards like Snapchat Stories and its clones
  • Longform scrollytelling; from the original NY Times' Snowfall
  • Structured news; like the original Circa or the reusable cards at Vox.com
  • Live blogs
  • Bots and chat; from the chat-styled Qz app to the many attempts to deliver news to chat apps
  • Listicles; which Buzzfeed is famous for
  • Data journalism; or data visualisation really
  • Personalisation; though typically this is used to filter the choice of stories, rather than the story itself
  • Timelines; which I expected to be more common
  • VR and AR; read the post from my colleague Zillah for everything you need to know about VR in news
  • Newsletters and summaries; short daily briefings seem to be a trend right now

Plus we noted some other trends which aren't formats in themselves but are affecting the area: syndication and disaggregation (Facebook Instant Articles, Apple News etc.), various "reading mode" functions to get rid of the clutter on sites, and even computer-written stories.

We're collecting examples, and do send me any new things you might have seen.

We're also interviewing industry experts and teams within BBC News. This week we watched a talk from a futurist on the future of news, with a focus on AR. And we read the latest report on digital news consumption, where I noted this section…

“Despite greater exposure to online video news, we find that overall preferences have changed very little since we started tracking this issue four years ago. Across all markets over two-thirds (71%) say they mostly consume news in text, with 14% using text and video equally. This number has grown slightly in the United States but remains at under 10% in the UK and Nordic countries where more users get their online news direct from the provider. Importantly, there are no significant age differences; young people also overwhelmingly prefer text. Having said that, in focus groups and open responses, we do find that video is increasingly valued as part of a content mix, adding drama and context to important stories, to breaking news events such as the recent terror attacks in Paris, Nice, Manchester, and Brussels – as well as adding to the trustworthiness of content."

We are now starting to put all of this together: industry research, stakeholder needs, and audience and user research, to help us decide where to focus in this year-long project.

We're also recruiting: if you're a creative technologist, experienced prototyper or web developer and fancy working on the future of news then please apply!

Meanwhile, this is what happened with the rest of the team...

Tellybox

Libby has been looking through all the feedback we've had throughout the project and figuring out how to understand it, present it and act on it. In her 10% time, Libby got one of the TV interface prototypes - Something Something - working with smooth playback on a Raspberry Pi.

Talking With Machines

We're developing an interactive voice drama and Barbara went to a run-through of the script and the actual recording. Meanwhile Tom and Anthony have been working on the cross-platform story engine (we're still looking for a good name!); fixing bugs and writing unit tests. Andrew and Joanne have been finishing off the evaluation of the voice UI testing with children and with the team, based on their experience with these two projects.

A Higher Quality Radio Experience

Libby has been interviewing people about radio and audio to scope a new project. She had a very useful call with Lindsay Cornell, who suggested a definition for radio and commented on the difficulties of navigating the complex radio interfaces in new cars.

Editorial algorithms

Halfway through this sprint we said goodbye to Olivier after 6 years with IRFS (and the entire life of the Discovery team), as he moves to a new job as the Open Data Institute's new Head of Labs. As a result, we spent a lot of time absorbing his knowledge and wisdom, and meeting with users of our Editorial Algorithms tools to ensure a smooth handover, before giving him the traditional IRFS send-off of cakes followed by the pub. Happily, we received final sign-off on transferring the Content Analysis pipeline part of our Editorial Algorithms project to Design and Engineering's 24/7 support team on his last day. This was the culmination of months of work by Olivier and the rest of the team, so it was great that he was here to see it achieved.

Tim has made some updates to the Freebird documentation, as well as planning the next steps for the platform. Chris has been improving the Cue Verb classifier as part of the Quote Attribution pipeline, constructing features from the presence and location of quote marks in the text, leading to a small increase in performance. He's also been building experimental foreign-language Starfruit classifiers in Spanish, Arabic and Russian; performance was good in Arabic and Russian but poor in Spanish.
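
For a flavour of what that kind of feature engineering can look like, here's a minimal sketch in Python. It isn't the team's actual code; the feature names, token window and example sentence are all illustrative assumptions.

    # A minimal sketch, not the real pipeline: binary features describing the
    # presence and position of quote marks around a candidate cue verb.
    QUOTE_CHARS = '"“”‘’'

    def quote_mark_features(tokens, verb_index, window=3):
        """Features for quote marks near a candidate cue verb like 'said'."""
        features = {}
        for offset in range(-window, window + 1):
            i = verb_index + offset
            if 0 <= i < len(tokens):
                features[f"quote_at_{offset:+d}"] = any(ch in QUOTE_CHARS for ch in tokens[i])
        # Is the verb itself inside a quoted span? (Counting curly quotes only.)
        opened = sum(tok.count("“") for tok in tokens[:verb_index])
        closed = sum(tok.count("”") for tok in tokens[:verb_index])
        features["inside_quotes"] = opened > closed
        return features

    tokens = ["“It", "was", "chaos”", ",", "said", "the", "witness", "."]
    print(quote_mark_features(tokens, verb_index=4))

In a real pipeline, features like these would sit alongside lexical and positional features feeding whatever classifier is in use.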

Finally, Chris attended the SUMMA (Scalable Understanding of Multilingual MediA) project's User Day at BBC Monitoring in Caversham. SUMMA is developing a multilingual platform for news monitoring which supports speech recognition, machine translation and tools for automated knowledge base construction.

Public service internet

Tim and David have also been spending time talking to people across the BBC with interests aligned with our new public service internet stream of work. We have been encouraged by the number of people in the BBC with similar ideas and concerns. We've also inaugurated a 'Justice League' style cross-R&D working group on the Public Service Internet. This consists of a Slack channel, mailing list, shared wiki space and informal monthly meeting. Like the original Justice League of DC Comics fame, we are a loosely federated, non-hierarchical group of individuals with their own unique talents, powers and expertise, who co-operate in the service of the common good. This model of collaboration has worked really well for our object-based media projects, and we're looking forward to applying it to the challenge of making sure the work we do online aligns with our public service values.

Our last piece of work on PSI with Olivier was a 'Theory of Change' workshop which David and Tim have subsequently built on in order to identify several concrete projects that we hope will contribute to this effort - more on them very soon.

Media analysis

Jana has been working on generating training data for our face recognition system automatically, trying to resolve particularly difficult edge cases where certain celebrities share the same name. Meanwhile Ben has been evaluating the software on video sequences that we have had manually annotated.

Matt and Chrissy have been refactoring the process that generates the models for the speech-to-text system. This has meant pulling a lot of code out of Kaldi so it can be reused in the TensorFlow project. We have also set up our new GPU machine for training these neural networks.

And Denise is looking into an NLP process to identify how a person is referred to in spoken English, for example whether they are being referred to in the third person. This data can help us use programme transcripts alongside face or voice recognition models to identify people in programmes.
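
To illustrate the kind of distinction Denise is after (this is not her actual approach), here's a tiny rule-based sketch in Python. The categories, word lists and example line are assumptions; a real system would use a proper NLP pipeline with coreference resolution rather than word lists.

    # Illustrative only: classify each mention of a person in a transcript line
    # as a name or a first/second/third-person pronoun.
    FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}
    SECOND_PERSON = {"you", "your", "yours"}
    THIRD_PERSON = {"he", "him", "his", "she", "her", "hers", "they", "them", "their"}

    def classify_mentions(transcript_line, known_names):
        """Return (token, category) pairs for person references in one line."""
        mentions = []
        for token in transcript_line.replace(",", " ").replace(".", " ").split():
            lowered = token.lower()
            if token in known_names:
                mentions.append((token, "name"))
            elif lowered in FIRST_PERSON:
                mentions.append((token, "first_person"))
            elif lowered in SECOND_PERSON:
                mentions.append((token, "second_person"))
            elif lowered in THIRD_PERSON:
                mentions.append((token, "third_person"))
        return mentions

    line = "She told Huw that he would present the programme."
    print(classify_mentions(line, known_names={"Huw"}))
    # [('She', 'third_person'), ('Huw', 'name'), ('he', 'third_person')]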

And everything else...

Chris has prepared the agenda for the next conference call of the W3C Media and Entertainment Interest Group, and has been discussing our plans for the group with the other co-chairs. Sean and Chris have been working on some project ideas for prototyping with new and upcoming web technologies, and Chris has updated a list of the areas where the BBC is currently active at W3C. We're particularly interested in the WebVR specification.

Chris released a new version of the library with a new playSegment API method, contributed by Craig Harvie from the iPlayer Radio team.

We've been collating some of the ways we work, for an “innovation playbook”. Libby, Joanne and Andrew had a really interesting meeting working through the processes of a couple of projects. We're going for a case study approach, at Joanne's suggestion.

Libby attended an event in Bristol and reviewed a book on RDF validation.

As well as Olivier leaving, Zillah also left the team. She's not gone far though, just off to commission VR programmes for the BBC. And then she popped up again on our TV screens...

Zillah on TV

Things we've read this week

The Bot Studio at Quartz also wrote about their work

A demo that translates facial expressions captured on a webcam onto Angela Merkel

I loved this, made from Wikipedia

And Libby loves this, whatever it is