Research & Development

Posted by Olivier Thereaux on , last updated

Latest sprint notes from the IRFS team: storytelling, machine learning and packaging code.

(Image caption: Henry working on his "" look)

Editorial Algorithms

This sprint has mainly revolved around wrapping up a prototype we built with colleagues in Radio and Music and with the help of agency , based on our recent work on sentiment analysis.

We also finalised work on two new algorithms. One combines several of our text analysis tools to try to establish the "main protagonist" of a creative work (i.e. what the content is really about, as opposed to who or what happens to be mentioned in the text). The other, part of our exploration of multi-lingual text analysis, is a machine-learning-powered categoriser for articles in Farsi.
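As a rough illustration of the "main protagonist" idea (not the team's actual pipeline), the sketch below combines off-the-shelf named-entity recognition with a simple salience score. spaCy, the choice of entity labels and the position weighting are all assumptions made for the example.

```python
from collections import Counter

import spacy

# Assumes the small English spaCy model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")


def main_protagonist(text):
    """Return the entity the text is most 'about', using a simple salience score."""
    doc = nlp(text)
    scores = Counter()
    for ent in doc.ents:
        if ent.label_ not in {"PERSON", "ORG", "GPE"}:
            continue
        # Mentions near the start of the text are weighted more heavily,
        # on the (illustrative) assumption that the subject is introduced early.
        position_weight = 2.0 if ent.start_char < len(text) * 0.2 else 1.0
        scores[ent.text] += position_weight
    if not scores:
        return None
    return scores.most_common(1)[0][0]


print(main_protagonist(
    "Ada Lovelace wrote what is often called the first program. "
    "Charles Babbage designed the Analytical Engine she wrote it for."
))
```

A real system would also need coreference resolution and entity disambiguation so that "Ada", "Lovelace" and "she" count as the same protagonist; this sketch only shows the scoring step.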

Packaging Python applications as Debian files

We have been packaging a lot of our applications as Debian files over the last few years, and have found it makes them incredibly easy to share with our peers. Tom has been packaging his work on speech/music discrimination, which requires a lot of Python dependencies; he's found the  project to be very useful for sandboxing an installation.
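For context, here is a minimal sketch of one way to go from a Python project to a .deb. The project name, dependencies and entry point are placeholders, and the stdeb route shown in the comments is just one common option, not necessarily the tooling (or the unnamed sandboxing project) the team actually uses.

```python
# setup.py -- a minimal setuptools manifest for a hypothetical tool.
from setuptools import setup, find_packages

setup(
    name="speech-music-discriminator",   # hypothetical project name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "numpy",                          # illustrative dependencies only
        "scipy",
    ],
    entry_points={
        "console_scripts": [
            # hypothetical module:function providing the command-line tool
            "discriminate=discriminator.cli:main",
        ],
    },
)

# One way (of several) to turn this into a Debian package is the stdeb project:
#   pip install stdeb
#   python setup.py --command-packages=stdeb.command bdist_deb
# which leaves the built .deb under deb_dist/.
```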

Atomised news

We ran a small round of user testing of the latest prototype of Atomised News, bringing in 8 participants from the public.

Development continues: Ant worked on Continuous Integration, Barbara created stories in the CMS and joined Lauren in completing and reviewing the guidelines document for journalists on how to create atomised stories. Meanwhile, Thomas and Tom have started looking at the next iteration of this project and how to demonstrate more adaptive experiences.

Set top box

Some good discussions among the nascent team (Libby, Joanne, Henry). We identified the need for some qualitative research into how people currently use TVs, so we're scouting round for academic and other studies, and Joanne has contacted M&A about existing research.

Radiodan

With renewed interest in Radiodan from various sources, and with the help of AndrewN (our much-missed colleague and now open source contributor), we've been reorganising the repos to make things clearer.

Libby's been testing Andrew's new provisioning script and trying , a JavaScript GPIO library, with Radiodan; Henry's been building his own installation with a prototype in mind, while Libby's been making one for Beccy in Connected Studio.

VR and 360

Production continues apace on the Turning Forest installation for Tribeca, with Zillah coordinating efforts from VRTOV in Australia, the Audio Research team in the North Lab and event staff at the festival.

Zillah is also overseeing two 360 productions: one follows a fireman's story through a burning building, and the other is designed to capture landscapes in 360 audio as well as video. She's also helping out with a VR film being produced by Aardman in Connected Studio.

Henry has been researching the current state of VR & 360 production and starting to look at the application of theatrical techniques to VR, with a couple of good meetings with interested (and interesting) folks and a growing stack of research reading.

W3C and Web Standards

I spent a few days in Cambridge, MA (with a snow storm thrown in for good measure) at the W3C's Advisory Committee meeting, discussing the next big things for the web standards consortium. Security was, perhaps unsurprisingly, high on everyone's agenda.

Chris has been responding to questions about the proposed . 

Euromeme

Libby and Henry met with Vinoba and Chris Sizemore to talk about the common threads in our various pieces of work around synchronised video, including the , Knowledge and Learning's on Taster, and the BCS section's work on . 

Walls Have Eyes

Libby and Jasmine were asked to talk at the Design Museum last week at an . They were interviewed about the context and results of Walls Have Eyes and what we learned about privacy and personal data in doing so. The Designs of the Year Exhibition is coming to an end, so we need to decide what to do with Walls Have Eyes next.

In Other News

We became members of , which enables us to use standard training and testing data for our speech-to-text and speaker ID systems. Rob was posted a bunch of DVDs, which he is slowly moving onto a hard disk.

Henry spent a few days in Manchester, visiting colleagues in the North Lab and attending the FutureEverything conference. There was a lot of exciting and inspiring stuff from both: highlights include Alia's 360 video work, Chris Pike and team's audio projects, the conference sessions on AI & Intelligence and Community, and the subconference on Uncertainty. Plus some on around Manchester in coinciding events.

Links

  • from Delayed Gratification magazine
  • ICO's recent report on
  • , an opinionated take on where the web is going