Over the past few months we have been running our Perceptive Media prototype online.
Many of you have experienced our first Perceptive Media audio play, "Breaking Out", online, and we thank you for taking the time to do so.
Before we close it down and run through the many results, we wanted to ask once again: if you haven't filled in the feedback form yet, now is the time to do so.
Perceptive Media can be applied in many different ways, and because of this we have been talking about it at different conferences to very different audiences. (You can see a selection of the talks.)
From web developers to narrative writers, radio producers, and the publishing industry, all are interested in the concept and possibilities.
Our focus is still on making it work for broadcasting, but it's been fascinating thinking about how it could change publishing and other media industries.
The research questions remain the same, and we will be working on other aspects of the concept in the near future. But for now, you might enjoy the higher-level overview...
Hopefully enough to encourage you to listen, or to send it to friends and family using the sharing buttons on the site.
I am not a morning person. Waking up at 5.30am to get to the Ricoh Arena in time to set up our demos for Teen Tech was painful.
Despite feeling like zombies, we made it there only a little behind schedule and promptly began assembling our stall. There were five of us: me from R&D, Ulrich from BBC Academy, and James, John and Darren, three regional broadcast engineers based in Birmingham.
In case you're not familiar with it, Teen Tech is the brainchild of Maggie Philbin and Chris Dodson, and aims to inspire young people to consider careers in technology. It's a bit like a careers fair: a large room in which companies set up stalls demoing interesting technology, and the school children move from stall to stall, interacting with the demos and talking to the scientists and engineers behind them. Maggie led the day, conducting both the welcome session for the students and the debrief. She is a wonderful host and clearly incredibly passionate about Teen Tech, and its continuing success is a testament to this.
Maggie talks to the school students
Read the rest of this entry
This week Vicky's been talking with the team and thinking about the issues and opportunities associated with our theme of "Playful Internet of Things (IoT) Futures", to generate topics for discussion at the event in November.
If you're interested in taking part in the event, please get in touch with us saying why you're interested in the IoT and what you hope to get out of the event.
Attendees include LEGO; Ogilvy Labs; the Science Museum; SODA; BBC Worldwide; Hasbro; Symplio; Uniform; Goldsmiths; Dundee University; Ravensbourne; Manchester Met Uni and Liverpool John Moores University.
Read the rest of this entry
Hi, my name is Frank Melchior and I'm head of audio research at BBC R&D. I'm also responsible for the BBC Audio Research Partnership, and after one year of inspiring and fruitful collaboration I'd like to report on our first annual conference at MediaCityUK last September. Two days packed with keynotes, a poster session, brainstorming sessions, a panel discussion and flashlight presentations from the partners gave us the opportunity to exchange ideas and develop new collaborations within this unique new way of working together. The BBC Audio Research Partnership was launched in 2011, and the video below will give an introduction to the idea of the partnership and the anniversary event.
In order to see this content you need to have both JavaScript enabled and Flash installed. Visit BBC Webwise for full instructions. If you're reading via RSS, you'll need to visit the blog to access this content.
Read the rest of this entry
I'm not sure we've managed anything quite as exciting as impressing Dick Mills this week but there's still loads of interesting work going on. Here are some of the highlights.
On their second week working together, the VistaTV sprint team of Libby, Andrew W, Anthony O, Chris Newell and Dan put the finishing touches on their first working prototype.
Read the rest of this entry
It's been a long time since the last series of blogs on Orchestrated Media. Time for a catch-up. Firstly, we've stopped using the term orchestrated media, and instead talk about dual-screen and companion screen. Dual-screen reflects where things stand currently: the companion service can synchronise against the broadcast content using various technologies. See Steve's blog about that. The BBC's launch of dual screen for Antiques Roadshow is imminent.
Looking ahead, we see the next generation of services allowing a wider set of companion services, where the TV, the companion, and the web are inter-communicating, allowing a website or a companion app to both monitor and control the TV. This gives TV-awareness on websites, and web-awareness of TV services. Each of these three domains could be the launch point for companion screen services, and engage the other two domains as needed. "Companion screen" refers to this wider role for the companion device, compared to today.
Interaction layer APIs or something else?
We strongly believe that something else is needed for engaging companion experiences with the broadcaster's content, where the audience can socialise around the content and interact with it in a variety of ways.
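To make the idea a little more concrete, here is a minimal Python sketch of the kind of message flow such a companion screen service might involve: the TV periodically reports what it is playing and where it is in the programme, and the companion uses that to stay in sync and to send control requests back. This is only an illustrative sketch under our own assumptions; the message names and fields are hypothetical and are not the BBC's actual companion-screen API.

```python
# Illustrative sketch only: a toy model of TV <-> companion messaging.
# The message names and fields are hypothetical, not a real BBC API.
import json
import time
from dataclasses import dataclass

@dataclass
class TvState:
    content_id: str      # identifies the broadcast programme
    position: float      # playback position in seconds
    wallclock: float     # time at which the position was sampled

def tv_status_message(state: TvState) -> str:
    """TV -> companion: report what is playing and where we are in it."""
    return json.dumps({
        "type": "status",
        "contentId": state.content_id,
        "position": state.position,
        "wallclock": state.wallclock,
    })

def estimate_position(msg: str, now: float) -> float:
    """Companion side: extrapolate the TV's current position from the last
    status message, assuming playback continued at normal speed."""
    data = json.loads(msg)
    return data["position"] + (now - data["wallclock"])

def control_message(action: str, **params) -> str:
    """Companion -> TV: a control request, e.g. pause or switch content."""
    return json.dumps({"type": "control", "action": action, "params": params})

if __name__ == "__main__":
    state = TvState("antiques-roadshow-example", 120.0, time.time())
    msg = tv_status_message(state)
    time.sleep(0.5)
    print("Companion thinks the TV is at",
          round(estimate_position(msg, time.time()), 1), "seconds")
    print("Companion sends:", control_message("pause"))
```

In a real deployment these messages would travel over a discovery and messaging protocol on the home network, or via the web, rather than being passed around in a single process as above.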
Read the rest of this entry
The Little Sun installation created by BBC R&D in conjunction with Studio Olafur Eliasson recently came to a close at the Tate Modern, having entertained over 10,000 visitors. If you missed the exhibition, or would like to know more about it, we've produced a short film to fill you in. If you want to know more after watching it, you can also check my previous blog post.
Enjoy the film!
In order to see this content you need to have both JavaScript enabled and Flash installed. Visit BBC Webwise for full instructions. If you're reading via RSS, you'll need to visit the blog to access this content.
Hi, I'm John Boyer of the Distribution Core Technology team here in BBC R&D, and this year our group took the 'halfRF' digital high definition radio camera to IBC in Amsterdam to launch it to the industry. The technology was well received, but it's absolutely cutting-edge stuff, and getting it ready for a high-performance demonstration at a major show such as IBC is far from straightforward. In this post I'll explain why the project started, the state of progress at the start of the show planning process, and the amazing work our team achieved to get it into action last month in Amsterdam.
Members of the halfRF team demo the technology on the EBU stand at IBC
So, why are we doing this project at all? Radio spectrum is a valuable and finite commodity: there is simply only so much radio frequency space available. These days many more organisations and companies want a share of that spectrum. Once upon a time, TV and radio broadcasting and radio links for video and audio contribution were rare and had relatively uncontested use of the airwaves in key frequencies. Nowadays though, with the rapid growth of mobile digital services, that same spectrum is under high demand, and governments around the world are carefully managing the licensing of it for different applications. As the popularity of high quality radio links for programme making has increased, the move to High Definition (HD) video has also created a demand for yet more data, and hence more pressure still on the spectrum.
BBC R&D recognised that something needed to be done, and three years ago we created the Advanced RF for HD Radio Cameras project to look at RF (radio frequency) techniques that could be used to make radio cameras more spectrally efficient. Our aim has been to use techniques such as MIMO (multiple-input, multiple-output) to create a system that uses half the spectrum compared with current commercial systems, both standard definition (SD) and high definition (HD).
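As a rough back-of-the-envelope illustration of why MIMO helps, two independent spatial streams let a link carry roughly as much data in half the bandwidth as a single-antenna link does in the full channel. The sketch below is idealised Shannon-capacity arithmetic with made-up numbers, not figures from the halfRF system:

```python
# Rough illustration only: idealised Shannon-capacity comparison of a
# single-antenna link vs a 2x2 MIMO link in half the bandwidth.
# The channel width and SNR are hypothetical, not halfRF measurements.
import math

def siso_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of a single-input single-output link, in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def ideal_mimo_capacity(bandwidth_hz: float, snr_linear: float, streams: int) -> float:
    """Idealised MIMO capacity with independent spatial streams and the
    transmit power split evenly between them."""
    return streams * bandwidth_hz * math.log2(1 + snr_linear / streams)

snr = 10 ** (20 / 10)          # 20 dB signal-to-noise ratio
full_bw = 8e6                  # an 8 MHz channel (hypothetical)
half_bw = full_bw / 2          # the halved allocation

print(f"SISO in 8 MHz:     {siso_capacity(full_bw, snr) / 1e6:.1f} Mbit/s")
print(f"2x2 MIMO in 4 MHz: {ideal_mimo_capacity(half_bw, snr, 2) / 1e6:.1f} Mbit/s")
```

In practice real channels are correlated and coding and protocol overheads apply, which is part of what makes a production-quality half-spectrum radio camera so challenging to build.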
Read the rest of this entry
This blog post was written by Shauna Concannon to summarise the findings of the World Service tagging experiment, run by BBC R&D this summer.
Shauna Concannon recently completed a five-month academic internship with BBC R&D as part of her studies on an EPSRC-funded programme. During her time with R&D, Shauna contributed to our exploration of how to improve the metadata of large audio archives and how we might encourage users to help us do that.
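By way of illustration only, one simple way to fold user contributions into archive metadata is to keep a tag when several listeners independently agree on it. The sketch below is a hypothetical approach, not the method used in the World Service experiment:

```python
# Illustrative sketch only: one simple way to turn crowd-sourced tags into
# archive metadata, by keeping tags that several listeners agree on.
# This is a hypothetical approach, not the experiment's actual method.
from collections import Counter

def aggregate_tags(submissions: list[list[str]], min_agreement: int = 2) -> list[str]:
    """Keep a tag for a programme only if at least `min_agreement`
    different listeners suggested it (after simple normalisation)."""
    counts = Counter(tag.strip().lower() for tags in submissions for tag in tags)
    return sorted(tag for tag, n in counts.items() if n >= min_agreement)

listeners = [
    ["Nigeria", "elections", "interview"],
    ["nigeria", "politics"],
    ["elections", "nigeria"],
]
print(aggregate_tags(listeners))   # ['elections', 'nigeria']
```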
Read the rest of this entry
Welcome to weeknotes #125! As a couple of projects come to a natural break, we've had time this week to do a few things a bit out of the ordinary.
Libby and Chris Newell have been following up their VistaTV workshop by taking some of the ideas the team generated and working up a few quick prototypes. Dan, Andrew W and Ant were drafted in to help and, following a productive couple of days, have created a variation of the classic that reflects the popularity of a programme. The team have now started to experiment with some of the other ideas from the workshop.
Read the rest of this entry
We moved offices this week, away from our old home, to shiny and modern West London. This disrupted us for around 20 minutes, and then we got on with researching and inventing the future (with a bit of dissemination and meet-the-colleagues on the side).
Read the rest of this entry