BBC

Archives for September 2011

Prototyping Weeknotes #79

Post categories:

Kat Sommers | 16:56 UK time, Friday, 30 September 2011

Phew! Our new room is a tad warm on the best of days, so this week's heatwave has had us throwing open the windows and fanning ourselves.

The week was dominated by a team away day on Thursday to Manchester, or, more specifically, the new MC:UK development in Salford Quays.

The R&D Prototyping team at MC:UK

Read the rest of this entry

Orchestrated media-based TV - gazing into the future

Post categories:

Jerry Kramskoy | 14:49 UK time, Tuesday, 27 September 2011

Wow - we weren't expecting to hear mention of Orchestrated Media (OM) from Eric Schmidt, Google's chairman and CEO, during his MacTaggart lecture at the Edinburgh International Television Festival.

"Perhaps the most exciting of all, at least for a technologist like me, are the opportunities to integrate content across multiple screens and devices," said Schmidt. "And I am fascinated by the BBC's notion of 'orchestrated media' - where the show you are watching triggers extra material on your tablet or mobile, synchronised with the programme."

Maybe we're just getting a small taste of what a possible future could be like with Orchestrated Media experiences... for some months my team have been collaborating with colleagues from across the BBC to help them create a dual-screen orchestration of the Secret Fortune series currently broadcast on Saturday evenings.

So what are we up to?

A closed pilot is running that enables a couple of hundred people to participate in the Secret Fortune Quiz through a mobile app (a hybrid of native and Web). In this game each participant uses their mobile to engage with the format of the show, answering the questions and being scored throughout.

How does it work?

Orchestration employs synchronisation between the TV (or radio) and companion devices (typically mobile: smartphones, tablets, laptops). Synchronisation ranges from continuous (such as following a broadcast game) to one-shot (here's some Web content about this show). Two different modes of synchronisation are used. Asymmetric synchronisation causes the companion content to sync to the TV (or radio) content. Symmetric synchronisation supports this, but also allows the TV to sync its content (catch-up, VoD) to the companion content.
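The difference between the two modes can be sketched in terms of who follows whom on the programme timeline. This is a purely illustrative sketch, not a BBC API; all names are invented.

```javascript
// Asymmetric: the companion derives its position from the TV's timeline,
// plus any offset authored into the companion content.
function companionPosition(tvPositionSecs, authoredOffsetSecs) {
  return tvPositionSecs + authoredOffsetSecs;
}

// Symmetric: additionally, the TV can seek to match the companion
// (e.g. resuming catch-up or VoD from where the companion left off).
function tvSeekTarget(companionPositionSecs, authoredOffsetSecs) {
  return companionPositionSecs - authoredOffsetSecs;
}
```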

Dual-screen orchestration employs asymmetric synchronisation based either on audio watermarking or on IP-delivered events. My colleague Steve Jolly will shortly be blogging about the technical aspects we contributed to this pilot. In fact, we've been doing so much work around Orchestrated Media that Steve and I are going to need a few separate posts to cover it. Today's post looks at some of the background to our thinking, how media is orchestrated and our work to date.

Where is this heading?

We are keen to work with the industry to make common standards a reality, and so are keen for the big players to support us and collaborate in this initiative. We're still working on understanding where our technologies fit in the future standards landscape. This is discussed a bit more below.

A brief history of OM

About three years ago I started thinking about how to use mobile to differentiate future TV services. A perfect storm was brewing in terms of various technology innovations and improvements: smartphone compute power, GPUs, 4G, UI capabilities, fixed broadband bandwidth, and so on. Mobile was predicted to be the seventh mass-media form, far outstripping the fixed Internet. This got me thinking, amongst other things, about synchronising related media on the mobile and the TV, to create a new form of user experience. Of course, 'mobile' now includes tablets and also non-fixed CE devices, like digital picture frames. The team rapidly fleshed these ideas out, evolving and crystallising these concepts in the shape of code and prototypes and gaining some recognition outside of BBC R&D. Steve added synchronisation support to the Universal Control API he had been working on for digital TV accessibility, and Orchestrated Media came to life.



Now there are rumblings afoot in the industry... I'm not talking about using companion devices as TV remote controls, or as EPGs, nor video streaming around the home, nor bringing popular desktop apps to the TV (Strategy Analytics have found that consumers aren't enthusiastic about these apps, not finding any more value in them being on the TV rather than the desktop. The exceptions here are media apps which offer video for consumption on the TV, such as iPlayer, YouTube and Netflix). I'm talking about the magic made possible through a sometimes intimate, sometimes loose bonding of TV and mobile content.


Credit must be given here to R&D's prototyping team, who created the initial tablet content which we then UC-enabled.

We're not the only people thinking about these opportunities, and we believe a new wave of TV/mobile experiences could shortly emerge, provided of course the costs make sense and the value chain stacks up for everyone. CTAM (the Cable and Telecommunications Association for Marketing) very recently engaged Nielsen to carry out a study on usage of video apps, including "sync-to-TV apps" (their term for dual-screen apps), and how they are perceived. This gives some support to the view that this new wave may be emerging to increase audience interaction with programmes and media.

Given this, I thought I'd provide some more background on OM and do a bit of crystal-ball gazing in later blogs over the coming week or two. These will look at 'Current Digital Services to the Connected Home', 'Future Digital External Services', 'OM genres and synchronisation aspects', and have a crack at what 'OM-enabled TVs' may look like. Imagine an "OM-enabled IPTV" that puts the consumer in control of the user experience on the TV, rather than the service provider. Mad? Look at mobile services pre- and post-Apple and Google. (Of course, there are regulatory, even statutory, issues to consider.)

There is no doubt in my mind that the home is set to become a major battleground among the industries that want to differentiate their digital services from others converging on the home. The prize: the attention of the consumer and their acquaintances (social and family circle). This means the digital TV experience will change.

Orchestrated media (OM)

We believe a broadcaster can connect with its audience at the individual and social-group level by offering a downloadable IP-based content-aware service to mobile companion devices. This service is packaged as a library built into an app (native or Web) and is intimately aware of the TV (or radio) content being consumed. This app provides a highly effective form of engagement.

A whole new value chain opens up here. The obvious case is an advertising agency working hand in hand with a production company to create a content-aware service, such as presenting a deal to buy a holiday in the same location as shown in the TV programme. However, what about media shown on screens in public spaces (shops, museums, galleries etc.)? This media is also impersonal, so a branded app can offer a similar downloadable layer to reach individuals alongside the message in that media. Other members of the value chain could include synchronisation technology providers, aggregators of synchronisation services, and channels and content owners selling rights to synchronise against various parts of a programme, to several brands at once, to name a few.

Orchestrating the media

The key to any of these experiences lies in the ability of devices and services to discover each other and to synchronise media as and when needed across devices: to "orchestrate" their presentation, and also to orchestrate interactivity on the companion device with the TV media content (and TV apps). This also means we have to identify what content is being consumed on the TV.

Asymmetric synchronisation can be achieved in a couple of ways: audio or video synchronisation, or Internet-delivered events. Audio synchronisation involves either watermarking or fingerprinting. There are various factors contributing to the successful application of these technologies for synchronising, but none requires any additional software on the TV.

IP-based synchronisation can be achieved through a variety of mechanisms, either inferring or extracting programme-related time points from other services in the cloud, such as a programme metadata analyser, or by analysing the broadcast transport stream. Again, no additional TV-resident software is needed. However, latency and accuracy can be an issue, given that homes usually have broadband access over unmanaged networks with no QoS guarantees. Terrestrial, cable and satellite distribution also introduce their own relative delays.
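One consequence of those relative delays is that an event authored against a reference feed needs correcting per platform before it fires on the companion. A minimal sketch of the idea, with invented delay figures (real platform delays vary and would need measuring):

```javascript
// Illustrative distribution delays per platform, in seconds (assumed values).
const distributionDelaySecs = { terrestrial: 0.0, cable: 0.8, satellite: 1.5 };

// The moment an IP-delivered event should fire locally is the reference
// time plus the delay of the viewer's distribution chain.
function localEventTimeSecs(referenceEventTimeSecs, platform) {
  const delay = distributionDelaySecs[platform] || 0;
  return referenceEventTimeSecs + delay;
}
```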

The IP-based approach cannot handle catch-up, on-demand or time-shifted TV without a packaging format for storing the video and IP events together (at least logically).

Symmetric synchronisation requires access to control functions on the TV and the ability to select content for the TV from the content set known to the TV. This of course is where standards will play a vital role in interoperability.

Our work

So, a large part of our work in R&D recently has been developing an API and architecture for a framework that supports different pluggable synchronisation mechanisms (currently asymmetric), which has been deployed for the Secret Fortune pilot. This provides a simple JavaScript abstraction for a content-aware service, which can be developed without concern for the physical mechanism employed. We have been exploring audio- and IP-based sync plug-ins, we are adding a symmetric sync module shortly, and we are also investigating various asymmetric technologies with regard to robustness, accuracy and intrusiveness.
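The pluggable-synchroniser idea can be sketched as follows: the content-aware service codes against one abstraction and never sees whether an audio watermark, fingerprint or IP-event plug-in is driving it. All names here are invented for illustration; this is not the actual framework API.

```javascript
class Synchroniser {
  constructor(plugin) {
    this.plugin = plugin;   // e.g. { name: 'audio-watermark' }
    this.handlers = [];
  }
  // The service registers interest in timeline updates.
  onTime(handler) { this.handlers.push(handler); }
  // Called by the plug-in whenever it resolves a new programme position.
  update(positionSecs) { this.handlers.forEach(h => h(positionSecs)); }
}

// A quiz-style service reacting to the programme timeline, unaware of
// which physical sync mechanism is in use:
const sync = new Synchroniser({ name: 'ip-events' });
let screen = 'waiting';
sync.onTime(t => { if (t >= 30) screen = 'question-1'; });
sync.update(31);
```

Swapping the plug-in object changes the sync mechanism without touching the service logic, which is the point of the abstraction.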

As noted above, supporting symmetric synchronisation opens up a host of new OM-based user experiences, limited only by imagination, but this may involve additional software on the TV. We have working versions of this in MythTV, which includes a prototype implementation of our Universal Control API that also supports additional functionality for accessibility. We have developed several demonstrations of the OM experience around this.

So we now need to consider the simplest route to enabling the OM experience, one which can be integrated with little effort and resource into future IPTV boxes. Cross-industry standardisation, de facto or by committee, is the key here. We are actively engaging with the Web and TV Interest Group in the W3C, to try and encourage some new APIs to become available through browsers that we believe are needed to make life easy for Web developers and hence build a wide community. As I said earlier, we are also keen to work with the industry to make common standards a reality, and for the big players to support us and collaborate in this initiative.

Next blog

This will discuss which OM experiences suit which programme genres and what synchronisation is required.

Read the rest of this entry

The Programme List

Post categories:

Tristan Ferne | 14:16 UK time, Monday, 26 September 2011

There's so much to watch and listen to these days. We often hear about exciting new TV and radio programmes through friends, adverts or reviews. But do we remember to watch them? And how easy is it to find something we half remember? The Programme List, from BBC Research & Development, helps you quickly make a list of programmes when you hear about them and then easily find them again later. We support all national BBC TV and radio channels as well as some partner TV channels. Until now the prototype has been invite-only, but now it is open to anyone with a Twitter account (and a love of lists).

Read the rest of this entry

A wonderful video for BarCampMediaCity

Post categories:

Ian Forrester | 12:31 UK time, Friday, 23 September 2011

During last week's BarCampMediaCity we were able to shoot a short film capturing the event and why the BBC, as its host, had a lot to give to the BarCamp community. We hope you will get a real sense of the diversity of people and ideas.


Please give us any feedback or ideas for future events.

Prototyping Weeknotes #78

Post categories:

Tristan Ferne | 17:15 UK time, Thursday, 22 September 2011

Hello. Weeknotes are a bit early this week. This morning was mainly taken up by an all-staff meeting and this afternoon I've decided to go round and ask everyone what they're doing...

Read the rest of this entry

Looking back at a wonderful weekend at BarCampMediaCity

Post categories:

Ian Forrester | 17:48 UK time, Wednesday, 21 September 2011

From 10:30 on Saturday 17th until 16:00 on Sunday 18th September we hosted BarCampMediaCity in Quay House at MediaCityUK.

With 5 years of planning and 200+ attendees over 36 hours there were lots of great ideas, unique collaborations and interesting discussions.

BarCamps originated in the US and are ad-hoc gatherings born from the desire to share and learn in an open environment, something BBC R&D affectionately endorses. The feeling always was that the BBC could be the perfect host for a BarCamp. We were not wrong; in fact, it was one of the largest BarCamps ever held in the UK.

BarCamp is all about the ideas and sessions. People came to MediaCityUK for this event buzzing with new ideas around technology, future media and a range of interests. No running order, no set agenda - just a big empty 'Grid' of 20-minute slots where attendees could post their ideas to present a session over the course of two days.

We shot a video throughout the weekend of some of the great people involved in the BarCamp, but the video only gives a minor glimpse of what it's all about...

A few trends were noticeable during the weekend...
Quite a few of the talks covered hardware hacking, inspiring the next generation, gamification, the balance of security and privacy and even the future of TV. Here are some of the session highlights around these themes:

Hardware Hacking:
Tips to build a 3D scanner; openFrameworks and Kinect hacking; cloud robotics and Android.

There were quite a few talks related to hardware hacking (the legal act) and, although quite tech-focused, they had enough broad appeal to rope in large numbers of people. R&D's own Matthew Shotton's tips for building a 3D scanner drew a sell-out audience of almost 50 people.

Inspiring the next generation:
A proposal for a BBC-driven Code Lab.
The Code Lab proposal by Alan O'Donohoe, a computing teacher from Preston, was the most interesting, and also addressed the question of whether programming should be taught in schools. The proposal turned out to be a pitch for a serious idea, though of course not in any way connected to the BBC.

Gamification:
Why don't PC gamers scowl at Xbox players?; Anatomy of a web game.
The 'Brilliant or balls?' talk, given by a speaker from Lancaster, really got people thinking about good and bad uses of gamification. The rest addressed aspects of gaming not usually talked about.

Security and Privacy:
How to protect your website 101.
These sessions were given by physical and network penetration testers and focused on giving out useful information about common misconceptions in security.

And finally the future of the TV:
What is HD?; How many screens do we need?; Connected TV apps.
Challenging sessions from individuals on the edge of the traditional TV world, very similar to the kind of discussions had at the Edinburgh TV Unfestival three years ago.

All the talks were fascinating and ranged from open discussions to early presented research. It was great to see the group conversations and people pairing off into the corners of Quay House for a quiet talk over dinner.

It's the nature of BarCamp that no one knows where these conversations will lead, but I can certainly say that BBC R&D staff were actively involved in them. Expect to see these topics develop and morph into something in the near future.

And to end this post, I refer to Mike Furmedge...

"Can't really emphasise enough how impressed I was with #bcmcuk, never have I been so educated while also being so well fed and watered :)"

Prototyping Weeknotes #77

Post categories:

Chris Lowis | 11:56 UK time, Monday, 19 September 2011

The week started early for Chris Needham, who was busy at IBC in Amsterdam. He was demoing our work together with colleagues on the EBU stand. George took over the demoing on Monday and Tuesday and showed some of our recent work. He got the chance to attend sessions and said hello to some familiar faces.

Read the rest of this entry

Create your own BBC QRCode

Post categories:

Duncan Robertson | 12:00 UK time, Thursday, 15 September 2011

A few years ago, when I was working as an engineer on the BBC's programmes site, we started creating a QRCode for every BBC programme. Here's an example for Doctor Who. A QRCode, if you are unfamiliar, is a type of matrix barcode (or two-dimensional code) designed to be read by smartphones.

Read the rest of this entry

Prototyping Weeknotes #76

Post categories:

Chris Godbert | 14:07 UK time, Monday, 12 September 2011

We moved offices this week and, aside from the inevitable last-minute niggles with networks and phones, it all went refreshingly smoothly. We're in the same building but on a different floor, where we have more space and a distinctively 70s feel; something akin to an episode of a 70s TV show, but with less shouting.

Read the rest of this entry

Design development of a programme list prototype

Post categories:

Theo Jones | 10:03 UK time, Thursday, 8 September 2011

Since Tristan's original blog post, we've been busy building a lightweight prototype to help people remember and keep track of the programmes that interest them. This all feeds into our wider interests around connecting services and devices to make more personal experiences around television and radio. This post is an update on The Programme List prototype (née Watch Later) prior to launch.

Read the rest of this entry

Prototyping Weeknotes #75 (2nd September 2011)

Post categories:

Olivier Thereaux | 13:44 UK time, Monday, 5 September 2011

With a bank holiday on Monday, one would expect fairly short notes this week. And yet...

A couple of new faces made their appearance in the office. Stuart Bayston, Design Trainee, has joined us for September and will be joining Nina Monet from news and our own Theo in working on prototypes and testing ideas on "following the news".

Barbara joined the Producer team, starting in the office on Thursday but actually beginning with a trip to Berlin, joining George at a two-day meeting for a European project. Barbara seemed very happy to meet the project partners and get a grip on the dependencies between each part of the work. George was a little less excited about the discussions around a 200-page document he hadn't been sent, but did enjoy a smashing Ethiopian meal.

Meanwhile, at the office, Vicky joined us from Salford on Tuesday, and we all had a work-planning session for everything that didn't fit into the previous two sessions. We got a bit further with this one, trying less hard to group things together, and a theme emerged around building platforms for our future work. She stayed in London on Wednesday too, visiting the R&D Production Labs in White City. The Production Lab is a real studio environment and commissions mini productions to evaluate new experimental production tools. Alia was leading this study, working with a mix of freelance and BBC production crew to get feedback on a collaborative tool for editing and storyboarding on location. Really exciting stuff.

With everyone back at the office on Thursday, we were treated with a great show and tell:

Yves showed his recent work on finishing the first ABC-IP deliverable, describing the different data sources taken into account in the project. He has been working on better keyword recognition in audio streams, matching them to DBpedia, and investigating disambiguation strategies. He built a small Django web application for browsing the audio archive using the extracted keywords.

Duncan and Chris L demoed (again) the first news-linking prototype (currently codenamed "what the papers say"), which they spent a good chunk of the week deploying so we can start testing it with potential users in the Newsroom. I ran the test with journalist Clare Spencer, and while there is still a lot of work before we can consider the prototype a wrap, it was great to be told that this first iteration was "too good to waste".

Kat gave us a full demo of the RadioTAG system, for which Sean recently published his overview of the specification work. The demo was a good preparation for the trial which started on Thursday evening, with Radio 1, Radio 4 and 5 live listeners asked to try out the new technology for a few weeks.

It's very early stage research, but they seemed intrigued by the concept and excited about helping us develop it. We will have to wait until the end of the trial to know how it worked for them, but there were already a few nuggets in the conversation with our test adopters. One of them said "I often hear something on the radio that'd be of interest to my wife, and look at the clock to remember what time I heard it and what station I was listening to, so I can look it up on iPlayer later. If this did that for me, it'd be great" while another remarked "I'm not sure how unique it is. You can do that anyway with podcasts, can't you?".

After the show and tell, it was packing time. Akua, aside from researching for the News Companion project - looking into how news publications allow users to follow and store news stories online, has also been Move Co-ordinator for the big Prototyping Team office move from room 101 to 211. We asked if we could keep the office number, but apparently, having an "office 101" on the second floor of Henry Wood House would be too confusing. Oh well...

George felt like he was 14 again, with Akua standing over him (and the other hoarders in the team), forcing him to tidy his room in preparation for our office move. A lot of beloved ancient kit was reluctantly disposed of. What if we need a SCSI floppy drive? How will we cope without a half working prototype Freeview receiver from 2002?

Later on Thursday, George had a six-monthly review with the BBC's chief scientist, along with the R&D general manager, which went well - some useful steers around what we might want to explore next, along with discussion around the 2014 Commonwealth Games. He then rushed to a meeting with visiting strategists and engineers, who were interested in developments in hybrid devices and RadioDNS prototypes. There was an interesting discussion around how hybrid services perform in a country with ubiquitous 100Mbit/s broadband, and a debate around whether broadcast still has meaning or utility in this context.

Between boxes and demos, the News Companion team found time to make progress on their research. They kicked off the next phase of News development with the question "what does the desire and action of following the news mean to users, and how can it support the management and consumption of content?"

And with the pending move and the start of the TAG trial, the mood in the office in the last two days of the week was a mix of excitement and feverish expectancy. Everyone coped in their own way: Sean had fun streaming live video from a DVB-T card to display in a web-browser video element using VLC, managing to overlay CSS-styled HTML on top of it; Kat and Akua listened to 80s music; Chris N patiently kept working on his synchronisation work for the P2P-Next project. To each their own.

Some interesting links:

  • Microsoft's and Andy Littledale's projects are two of those we are looking at for inspiration in the News Companion research
  • Yves found a couple of articles enlightening, and also nominated an interesting link for this week.
  • Sean nominated an article, and found another link mouthwatering.
  • And finally, my suggestion this week: a list from Wikipedia. Over a hundred dishonest ways to slither out of any debate: "Yes, of course, you make a good point, but surely you realise that your choice here suffers from the Semmelweis reflex?".

RadioTAG

Post categories:

Sean O'Halpin | 14:08 UK time, Thursday, 1 September 2011

RadioTAG is a new protocol that enables you to share information with a broadcaster about what you're listening to by pushing a button on your radio.

This article is about RadioTAG and how, over the past few months, ΒιΆΉΤΌΕΔ R&D has worked with a cross-industry team to design it.

We touch on use cases for RadioTAG, summarise how it works, look at the considerations we took into account and describe how we went about designing it.

For detailed technical information you can read the specification and download our reference implementation from GitHub.

Introduction

  • You're engrossed in an interview on the Today programme but have to leave for work. No problem. Press the RadioTAG button on your radio to remember your place. When you get to work, you resume listening on iPlayer from the point you left off.
  • On the beach, you hear a great song you'd like to share with your friends. You push the RadioTAG button on your mobile to tweet the track now playing and add it to your account.
  • You hear an advert for a well-known fast-food chain on commercial radio. You press the RadioTAG button on your radio to get a coupon (in the form of a QR code) emailed to your phone. You show this at the counter and enjoy a tasty, cheaper burger.

These are just some of the scenarios made possible by RadioTAG. RadioTAG aims to be an open standard that enables internet-connected radios to send the current time and station to a broadcaster's internet service, optionally store that data and receive back relevant information.

The protocol has been developed by a cross-industry team consisting of Andy Buckingham, Robin Cooksey and, from BBC Research and Development, Sean O'Halpin, Chris Lowis and Kat Sommers.

RadioTAG is an initiative under the umbrella of RadioDNS, an industry-wide initiative to "enable the convergence of radio broadcasting and IP-delivered services". RadioDNS is also the home of RadioVIS, a slideshow service specification we have discussed before on this blog, and RadioEPG, an electronic programme guide that aims to enhance broadcast audio services with rich programme-related metadata.

What is RadioTAG?

RadioTAG is a protocol that defines:

  • how a client finds the tag service corresponding to the current station using RadioDNS
  • the transport protocol used
  • the format of a tag request
  • how the service should respond to unauthenticated requests
  • how you pair a client with your account at a broadcaster's web site
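The first item, discovery via RadioDNS, works by turning broadcast parameters into a hostname and then querying for an application-specific SRV record. The sketch below is a simplified, illustrative FM example; the exact name construction is defined by the RadioDNS specification, not this code.

```javascript
// Build the DNS name a client would query to find a station's RadioTAG
// service (FM case, simplified): broadcast parameters -> hostname -> SRV.
function radiotagSrvName(freq, pi, gcc) {
  // freq: zero-padded frequency, pi: RDS programme identifier,
  // gcc: global country code (all illustrative values below).
  const fqdn = `${freq}.${pi}.${gcc}.fm.radiodns.org`;
  return `_radiotag._tcp.${fqdn}`;
}
```

A real client would resolve the returned SRV name to obtain the host and port of the broadcaster's tag service.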

Our motivation in contributing to RadioTAG is that we want to explore the additional services we can offer alongside our broadcasts. We've done some work on this in the past and know that audiences are keen to get more information on what they're listening to and to interact with the studio.

Commercial stations are interested in the same things but also in the opportunities they can offer advertisers in terms of attention data and direct response advertising. Click-to-buy, where pressing the tag button while a song is playing immediately purchases it, is another obvious possibility.

Device manufacturers are keen for RadioTAG to provide a rich "out-of-the-box" experience that works without you having to go to a website and fill in some forms.

With a standardised protocol, broadcasters and manufacturers can implement applications and clients that they know will work together. Both sides benefit from the greater number of collaborating participants, and our audiences can enjoy a richer radio experience.

Overview of the protocol

The idea behind RadioTAG is simple: a client sends a request to a broadcaster's tag service, specifying a time and station. The tag service responds by sending a tag entry containing relevant metadata, for example, the most recently played track or more information about an advertisement.

Depending on the level of service provided, the tag data may be stored on the server and may be viewed on the client or on the web or be used for another application.
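The request itself is small: a station identifier and a time. The sketch below shows the shape of such a request; the path, parameter and header names are illustrative assumptions, so consult the RadioTAG specification for the real wire format.

```javascript
// Build a tag request: POST a station and time to the tag service.
// An auth token, once obtained, accompanies the request (header name assumed).
function buildTagRequest(station, timeSecs, token) {
  const headers = { 'Content-Type': 'application/x-www-form-urlencoded' };
  if (token) headers['X-RadioTAG-Auth-Token'] = token;
  return {
    method: 'POST',
    path: '/tag',
    headers,
    body: `station=${encodeURIComponent(station)}&time=${timeSecs}`
  };
}
```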

There are three levels of service a tag service can make available:

  • anonymous tagging
  • unpaired tagging
  • paired tagging

Pairing in this context means associating your radio with an authenticated user account on the broadcaster's web site.

The levels of service are distinguished by whether or not tags are retrievable on the device or on the web and by whether you need an account on the broadcaster's web service. The table below summarises the differences:

Level of service   Tag list on device   Tag list on web   Account needed
Anonymous          No                   No                No
Unpaired           Yes                  No                No
Paired             Yes                  Yes               Yes

These services can be offered in a number of combinations. For example, a broadcaster may offer anonymous tagging by default which can be upgraded to paired tagging or it may support tagging out of the box (unpaired) with no option to pair the device to a web account.

To prevent casual spamming of services, the protocol mandates a series of "grant" exchanges on the way to getting an authorisation "token" (in a manner similar to OAuth) to allow access to the unpaired services or to pair the device with an authenticated account.
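The grant-before-token idea can be sketched as follows: an unauthenticated tag request stores nothing and instead returns a grant the client can later exchange for a token. The status codes and field names here are illustrative, not the protocol's actual values.

```javascript
// Server-side sketch: refuse unauthenticated tags, hand back a grant.
function handleTagRequest(request, state) {
  if (!request.token) {
    const grant = `grant-${state.nextGrantId++}`;
    state.issuedGrants.add(grant);        // remembered for later exchange
    return { status: 401, grant };        // "come back with a token"
  }
  return { status: 201 };                 // tag accepted
}
```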

The approach we took to the work

Our approach was to strip RadioTAG down to its core function: enabling an internet-enabled radio to record a station and time on an internet-based service and to retrieve the information it has recorded.

We left out geotagging, text input, ratings, etc. as we felt they would be distracting and could be added in a later version of the spec if necessary.

Then, with frequent iterations of modelling and discussions with our partners, we gradually settled on the design.

Design constraints

One of the first outcomes of our collaboration was agreement as to what constraints we needed to recognise and accommodate in the design of the protocol:

It must work out of the box

We started from the position that no data was to be stored except against an authenticated user account. This, unsurprisingly, turned out to be too strict. During an open day with device manufacturers, it became clear that an unregistered "out-of-the-box / Christmas Day" experience was essential to the adoption of RadioTAG.

User input is limited

We were also mindful of devices with limited input capabilities. Simple radios may have only a limited display and a jog-wheel for entering text. This makes entering your user name and password quite fiddly. At the same time we wanted to make the protocol as secure as possible so your tags, and any information they contain, are available only to you.

Input on the device is limited to numbers only, and anything we ask the user to type in on a website must be no more than 8 characters. This means we can't ask the user to type in a 48-character UUID, for example.
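A registration key meeting those constraints might look like the sketch below: numbers only (typeable on a jog-wheel) and exactly 8 characters. This is an illustration of the constraint, not the scheme the working group adopted.

```javascript
// Generate an 8-digit, zero-padded numeric key.
function makeRegistrationKey() {
  return String(Math.floor(Math.random() * 1e8)).padStart(8, '0');
}
```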

Resources are limited

Mass market consumer goods such as radios are designed to minimise the total cost of components. Hence, RadioTAG should strive to require as few resources as possible both on the client and the server:

  • The protocol should require minimal persistent state on the device. Most devices are extremely limited in memory budget so we needed to be careful not to waste space. We ended up needing to store only the service name and current token for that service.
  • The web service should be resistant to unauthenticated resource exhaustion attacks ('drive-by spamming').
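To illustrate just how little persistent state that is, a device's entire store could be sketched as below; the one-line-each file layout is our invention, not part of the protocol:

```ruby
# Sketch of a device's entire persistent state: per the constraint
# above, only the service name and the current token need to survive a
# power cycle. The file layout here is our invention.
class DeviceStore
  def initialize(path)
    @path = path
  end

  def save(service_name, token)
    File.write(@path, "#{service_name}\n#{token}\n")
  end

  # Returns [service_name, token], or nil before first use.
  def load
    return nil unless File.exist?(@path)
    File.read(@path).split("\n").first(2)
  end
end
```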

Specify 'how' not 'what'

We agreed that RadioTAG should not specify what a tag means nor determine its content.

The protocol doesn't define the content of the tag data because it can be anything the broadcaster wants. However, the device needs to be able to display that data so we settled on the Atom format as the common way of structuring the returned data. As this is an extensible format, we can add radiotag namespaced attributes or elements as required.
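As a sketch of what a returned tag might look like, here is an Atom entry built with Ruby's standard REXML library. The radiotag: extension elements and their namespace URI are invented for illustration; only the use of Atom itself comes from the design above:

```ruby
require "rexml/document"

ATOM_NS     = "http://www.w3.org/2005/Atom"
RADIOTAG_NS = "http://example.org/radiotag" # hypothetical namespace URI

# Build an Atom entry for one tag. The broadcaster-defined payload goes
# in namespaced extension elements alongside the standard Atom ones.
def tag_to_atom(title:, station:, time:)
  doc = REXML::Document.new
  doc << REXML::XMLDecl.new("1.0", "UTF-8")
  entry = doc.add_element("entry", "xmlns" => ATOM_NS,
                                   "xmlns:radiotag" => RADIOTAG_NS)
  entry.add_element("title").text = title
  entry.add_element("radiotag:service").text = station
  entry.add_element("radiotag:time").text = time
  doc
end
```

Because Atom is extensible, a device that only understands the standard elements can still display the title, while richer clients can read the namespaced extras.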

It must be secure

We were particularly concerned to make sure that registration was secure. As a group we agreed the following:

  • You must be able to revoke authentication both on the device and on your web account. This is to address the situation when a radio is lost, stolen or sold on without being unpaired.
  • Data stored on the server must be secure.
  • Don't rely on a manufacturer supplied client ID as they are not guaranteed to be unique and can be spoofed.

Separate authorisation from API

We must support the ability to use separate servers for issuing credentials and for serving API calls.

Design principles

With these considerations in mind, we developed a set of design principles for RadioTAG:

Follow standards where possible, e.g. OAuth, REST

Taking inspiration from OAuth, we decided to base our authorisation around the concept of tokens, which are practically unguessable arbitrary sequences of letters and numbers. What you are allowed to do with a token depends on its scope. For example, if you have a token with the scope "unpaired", you are able to create tags and view them on your device but not on the internet. If you have a token with the scope "paired", you can create tags and store them against your web account.

Another concept we borrowed from OAuth is the issuing of grants, which confer the right to request something from the service. For example, in the RadioTAG protocol, you cannot request a token for creating unpaired tags unless you have received the "unpaired_grant" in a prior request.

The grants returned by the service tell the device what options are available to it, for example whether unpaired tagging is available or whether it can register with the web service. This reduces the amount of state the device has to remember.

A useful side-effect is to introduce a little rigmarole into the acquisition of the unpaired token to dissuade casual hackers from swamping our tag service with token requests. It won't stop someone who has read the protocol and can implement it, but it will cut down on the "drive-by spamming".

Limit user input

Any entry on the client must be no more than 4 digits (i.e. a PIN), and entry on the web front end must be no more than 8 characters.
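A sketch of credentials sized to those limits (the helper names are ours, not from the spec):

```ruby
require "securerandom"

# The device only ever has to enter a 4-digit PIN...
def generate_pin
  format("%04d", SecureRandom.random_number(10_000))
end

# ...and anything typed into the web front end stays within 8
# characters. 8 hex characters is only ~32 bits of entropy, which is
# tolerable because the value is single-use and short-lived.
def generate_registration_key
  SecureRandom.hex(4)
end
```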

The API should be consistent

A clear and consistent API helps developers quickly get up to speed. For example, we decided against optional parameters for version 1.0 to avoid any ambiguity about what was required in an implementation.

Throughout, we use headers to return grants, tokens and service metadata from the tag service.

Also, we use form data for parameters rather than query parameters so we don't expose tokens in logs.
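The request side of that convention can be sketched with Ruby's standard Net::HTTP. The URL, parameter values and the response header name mentioned in the comment are illustrative, not from the spec:

```ruby
require "net/http"
require "uri"

# Sketch of a tag request: parameters travel as form data in the body,
# not in the query string, so the token never appears in access logs.
# The URL and parameter values here are illustrative.
uri = URI("http://radiotag.example.com/tag")
req = Net::HTTP::Post.new(uri.path)
req.set_form_data(
  "token"   => "8a2f9c...",            # truncated for display
  "station" => "0.c224.ce15.ce1.dab",
  "time"    => "1317394800"
)

# On the way back, the client would read grants, tokens and service
# metadata out of response headers (header names illustrative), e.g.:
#   res["X-RadioTAG-Token"]
```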

How we designed the protocol

Playing RadioTAG

We started by playing the "RadioTAG game". Chris and I pretended to be the client and the service. We wrote down API requests and responses on sticky notes and passed them back and forth. This seemed to amuse George no end, and Vicky was pleased to see us prototyping on paper. The result was our first specification:

The RadioTAG game


Sadly, the world is not yet ready to accept such fragile constructions as valid specifications, so we converted it into a message sequence diagram using an excellent online diagramming tool.

To give you a flavour of what the protocol we designed looks like as a message sequence diagram, here is one showing the exchanges between client and server from the point when you tune your radio and press the tag button to when you pair that radio with your web account:

The RadioTAG protocol in summary


Simulating RadioTAG

A common mistake in software design is considering only the 'happy path', i.e. what happens when everything works the way you hope it will. We wanted to make our protocol robust in the face of events such as entering the wrong PIN, losing your radio or someone managing to steal the registration key before you finished registering. To study the ramifications of these situations, we decided to build a software model to simulate what would happen in those cases. This turned out to be key to the development of the protocol, as it enabled us to quickly test various scenarios and try out variations of the protocol.

Our model consisted of a client, a tag service, an authorisation service and a web front end. We wrote this in Ruby using a library designed to manage asynchronous network communication. We had originally tried to write the model using simple synchronous method calls, but this soon became unwieldy. The asynchronous approach allowed us to model each entity as an independently executing process, just as it would be in a real system.
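The shape of that idea can be sketched with standard-library threads and queues. This is a stand-in for the library the model actually used, which the post doesn't name; each entity runs on its own thread and communicates only by posting messages to inboxes:

```ruby
# Minimal actor-style entity: each one runs independently and reacts
# to messages posted to its inbox, much as the model's client, tag
# service, authorisation service and web front end did.
class Entity
  attr_reader :name

  def initialize(name, &handler)
    @name    = name
    @inbox   = Queue.new
    @handler = handler
    @thread  = Thread.new { loop { @handler.call(*@inbox.pop) } }
  end

  def tell(sender, message)
    @inbox << [sender, message]
  end
end
```

With entities decoupled like this, an interleaving such as "the PIN arrives after the registration key has been stolen" is just a different ordering of messages, which is exactly what the simulation needed to explore.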

We modelled the authorisation service separately for two reasons: first, we needed to ensure that our proposed specification for RadioTAG would work with system architectures that have separate authorisation services, such as we have with BBC Identity; second, we wanted to avoid a commonly criticised design flaw, namely conflating the service API with authorisation.

It soon became apparent that with suitable instrumentation, the model could automatically generate the message sequence diagrams we had up to now been creating manually.

With the addition of a simple scripting capability, we were able to explore many different possible situations. The example below shows the script for the unpaired to paired scenario:

title Scenario 1: Unpaired then paired
Alice :tunes her Radio to "BBCRadio4"
Alice pushes the :tag button on her Radio for the first time
Alice pushes the :ok button on her Radio
Alice pushes the :tag button on her Radio again (showing that unpaired tagging works)
Alice pushes the :register button on her Radio
Alice :logs_into BBCWFE with password "alice123"
Alice :registers her Radio
Alice enters the :pin into her Radio
Alice pushes the :tag button on her Radio to now tag directly to her account
Alice pushes the :ok button on her Radio

Here is an extract from the diagram generated from this scenario:

Extract from the RadioTAG model diagram

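The scripting layer itself isn't shown in the post, but its essence is small: each line names an actor and carries one :symbol identifying the action. A sketch of how such a line could be parsed (the real model's DSL may well differ):

```ruby
# Sketch of parsing one scenario line: the first word is the actor and
# the :symbol is the action to dispatch to that actor. This illustrates
# the idea only; the real model's DSL is not published.
def parse_step(line)
  actor  = line.split.first
  action = line[/:(\w+)/, 1]
  raise "no :action in #{line.inspect}" unless action
  { actor: actor, action: action.to_sym }
end
```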

Reference implementation

Once we were happy that we'd covered the most important use cases and addressed possible attacks, we used the model as a basis for building a reference implementation consisting of three subsystems: a tag service, an authorisation service and a web front end.

We built these in our usual stack of Ruby, Sinatra, DataMapper, MySQL and Passenger under Nginx using test-driven development and pair-programming.

Console radio

After building the web services, we needed something to try them out with, so we wrote a console client that pretended to be a radio. With the addition of user-action recording, playback and HTTP tracing, this also proved to be a useful tool for generating documentation. The HTTP traces in the API document were generated from this client.

Collaboration

Armed with these tools for generating documentation from code, we were able to share our progress with our collaborators Andy and Robin, who provided us with invaluable feedback and guidance.

With regular visits from both Robin and Andy over the course of a number of weeks, we addressed many fine points, ironing out the last wrinkles in the protocol.

The high point was when Robin got a basic client working on a development system which could send a tag to our service. There's nothing like seeing a physical device doing what you'd only previously seen working in software. The idea became reality.

Draft specification

The result of this work is our draft specification. We are submitting this to the RadioTAG application team for discussion and ultimately ratification, before onward submission to the RadioDNS board with a view to adoption as version 1.0 of the protocol.

Source code availability

You can find all the source code we developed as part of the project in our public repository.

What we've learnt

Designing protocols is hard! We've had to accept compromises to accommodate what the architect Christopher Alexander calls the "forces" at work, and many times subtle points arose that caused us to scratch our heads.

Having a model we could easily adapt to test scenarios meant we were quickly able to probe the consequences of what was a major change to the protocol.

The outcome certainly wouldn't have been possible without the collaboration, input and scrutiny of all four of us, each coming from a different angle and each making essential contributions.
