Â鶹ԼÅÄ


The Challenges of Adaptive Streaming


Rosie Campbell | 11:30 UK time, Wednesday, 12 September 2012

There are few things that are universally hated (always a fun start to a blog post) but I would be willing to bet that the experience of trying to watch internet videos on a temperamental network connection is one of them. I wonder how many hours I've wasted watching that little spinning buffer wheel go round and round and round…

Well, the good news is that the Connected TV team at BBC R&D is working towards making those ‘insufficient bandwidth’ messages a thing of the past. One technology that is helping us get there is adaptive bit-rate streaming.

Adaptive streaming works by encoding the same media file at a number of different bit-rates, producing multiple ‘representations’ of the content, each at a different quality. As the quality of the internet connection varies, the streaming client can switch between the representations to provide a smooth viewing experience. When the connection is good, a high bit-rate representation will be requested, resulting in a good quality picture. If the connection worsens, the client will request a lower bit-rate, resulting in a slight decrease in picture quality but a better overall experience, since it avoids freezing and rebuffering.

Adaptive bit-rate streaming. Source: Wikipedia user Daseddon
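
To make the switching idea concrete, here is a minimal sketch of the naive selection step in Python. The bit-rate ladder, safety factor and function names are invented for illustration; this is not the actual iPlayer client logic.

    # Hypothetical bit-rate ladder (kbit/s) - an example, not a real configuration.
    REPRESENTATIONS_KBPS = [400, 800, 1500, 2800, 5000]

    def select_representation(measured_bandwidth_kbps, safety_factor=0.8):
        """Pick the highest representation that fits within a safety margin
        of the measured bandwidth, falling back to the lowest otherwise."""
        usable = measured_bandwidth_kbps * safety_factor
        candidates = [r for r in REPRESENTATIONS_KBPS if r <= usable]
        return candidates[-1] if candidates else REPRESENTATIONS_KBPS[0]

As the rest of this post explains, a one-shot check like this is exactly the ‘trivial’ approach that turns out not to work well in practice.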

The BBC is already using adaptive streaming (e.g. for Wimbledon and the Olympics), but there is still work to be done to understand how the technology can be optimised for different network environments. This is where I come in - for my second trainee project I am investigating the characteristics of residential broadband networks, and looking into how we can produce and test different algorithms that decide which representation to request based on the network environment. It might sound trivial – surely you just check the download speed and request the maximum quality representation the connection can cope with at that time?!

Alas, if only it were that simple. As it turns out, both determining the download speed and deciding which quality to request are more complicated than you might imagine…

Networks can suffer from short-term fluctuations; in particular, Wi-Fi is notorious for brief losses of connection. Although the connection will quickly recover, if the measurement happens to take place during the fluctuation it can give an inaccurately low value for the available bandwidth and, if the algorithm doesn’t account for this, trigger an unnecessary shift to a lower bit-rate.
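
One common way to damp out such blips is to smooth the raw throughput samples before acting on them, for example with an exponentially weighted moving average. The sketch below is illustrative only; the smoothing constant is an arbitrary choice, not a recommendation.

    class SmoothedBandwidthEstimator:
        """Exponentially weighted moving average of throughput samples, so a
        single dip (e.g. a brief Wi-Fi dropout) is absorbed rather than
        immediately triggering a down-switch."""

        def __init__(self, alpha=0.2):
            self.alpha = alpha          # weight given to the newest sample
            self.estimate_kbps = None

        def update(self, sample_kbps):
            if self.estimate_kbps is None:
                self.estimate_kbps = sample_kbps
            else:
                self.estimate_kbps = (self.alpha * sample_kbps
                                      + (1 - self.alpha) * self.estimate_kbps)
            return self.estimate_kbps

The trade-off is responsiveness: the heavier the smoothing, the longer a genuine, sustained drop in bandwidth takes to show up in the estimate.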

The behaviour difference between wired and wireless networks also causes more fundamental problems. TCP, the protocol responsible for reliable data transmission over the internet, was designed with wired networks in mind. In wired networks, packet loss usually indicates congestion on the network, so TCP sensibly decreases the amount it is sending in order to reduce the network traffic. Unfortunately, TCP behaves exactly the same on wireless networks, where packet loss is far more likely to be just a random occurrence and no traffic reduction is necessary.
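
You can see the effect with a toy model of TCP's additive-increase/multiplicative-decrease behaviour. The numbers below are made up purely to illustrate the shape of the problem, but the pattern - a loss halving the sending rate whether or not the link is actually congested - is the real mechanism.

    import random

    def aimd_average_window(loss_prob, rounds=10000):
        """Toy AIMD model: the window grows by 1 each round, but any packet
        loss halves it. TCP cannot tell congestion loss from random
        wireless loss, so both depress throughput equally."""
        window, total = 1.0, 0.0
        for _ in range(rounds):
            total += window
            if random.random() < loss_prob:
                window = max(window / 2, 1.0)  # reacts as if congested
            else:
                window += 1.0
        return total / rounds

    print(aimd_average_window(0.0005))  # near-lossless wired link: large average window
    print(aimd_average_window(0.02))    # lossy wireless link: much smaller average window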

Bearing all this in mind, the streaming client needs to decide which representation to choose. Should it aim to maximise quality or minimise bit-rate changes? If the level of quality keeps jumping around all over the place it might be more irritating than watching something at a slightly lower but more consistent quality.

The size of the buffer may influence the choice. It might be a good idea to download some video ahead of time and store it in a buffer. If there is a lot of data in the buffer, you can risk waiting out short drops in bandwidth and maintaining a higher quality, as the video in the buffer will see you through. However, larger buffers mean slow start-up times and slow reaction to controls like skipping, as you have to wait for the buffer to fill. You don’t want to be flicking through the channels and have to wait four seconds or so for anything to appear each time you change! Another disadvantage is that the more you buffer, the further you are behind the ‘live’ stream – and you really don’t want to be in the frustrating situation of hearing your neighbours cheer for a goal you then know you’ll see in a few seconds’ time!
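
Putting the last two considerations together, a buffer-aware decision step might look something like the following. All of the thresholds and the bit-rate ladder here are invented for illustration; the point is the hysteresis - demanding clear headroom before switching up, and only switching down when the buffer is genuinely at risk - which limits the quality jumping around.

    def choose_next_bitrate(current_kbps, estimate_kbps, buffer_seconds,
                            ladder=(400, 800, 1500, 2800, 5000)):
        """Illustrative buffer-aware step with hysteresis to limit oscillation."""
        higher = [r for r in ladder if r > current_kbps]
        lower = [r for r in ladder if r < current_kbps]

        # Plenty of buffered video: safe to ride out short dips, try stepping up.
        if buffer_seconds > 20 and higher and higher[0] < 0.8 * estimate_kbps:
            return higher[0]
        # Buffer draining dangerously: step down before a rebuffer occurs.
        if buffer_seconds < 5 and lower:
            return lower[-1]
        return current_kbps  # otherwise, hold steady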

Then of course we have the problem of competing users. If there are multiple people watching adaptive video over the same connection, currently the streaming clients don’t share the bandwidth gracefully. Either the quality oscillates wildly as they alternate bandwidth allocation, or one client ends up hogging all the bandwidth, leaving the other with very poor quality. Ideally, we would like them to even out fairly and consistently.

So how can we make sense of all this to try and produce a more effective algorithm? I'm working on creating models of typical residential broadband networks over which we can test different adaptive algorithms. This involves using network simulation software to create topologies based on typical residential network characteristics. We can then simulate running different algorithms over these network topologies and output a range of metrics to allow us to assess their performance. This is likely to involve both statistical data (such as the likelihood of freezing/rebuffering, number of bit-rate shifts and the average quality) and graphical data (comparing how available bandwidth and requested bit-rate vary over time, to see how closely the two match).
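
As a rough idea of what those metrics might look like when pulled out of a simulation run, here is an illustrative summary function. The trace format (one tuple per video segment) is an assumption for the sake of the example, not the actual tooling we use.

    def summarise_run(trace):
        """Summarise a per-segment trace of (requested_kbps, available_kbps,
        rebuffered) tuples into the statistics discussed above."""
        requested = [req for req, _, _ in trace]
        rebuffers = sum(1 for _, _, stalled in trace if stalled)
        shifts = sum(1 for a, b in zip(requested, requested[1:]) if a != b)
        return {
            "rebuffer_ratio": rebuffers / len(trace),
            "bitrate_shifts": shifts,
            "mean_quality_kbps": sum(requested) / len(trace),
        }

Graphing requested bit-rate against available bandwidth over the same trace then gives the visual comparison mentioned above.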

This framework will allow us to assess existing algorithms and use the results to inform the development of improved algorithms which can then be refined in a similar way.

You might be wondering why any of this really matters to an organisation whose primary delivery method is conventional broadcasting. Although IPTV is currently used mainly for accessing on-demand content, its growth can’t be ignored, and there is an emerging trend that could see delivery over IP applied to a much wider range of services. As well as on-demand viewing, IP delivery enables viewers to watch content on the go on mobile devices, and allows access to a potentially much larger selection of content than can currently be broadcast through conventional means. With over 106 million requests for online BBC Olympic video content, it's clear that the public is embracing IP streaming.

As more and more TVs and set-top boxes come with internet connectivity, it is likely that what you watch from your sofa will be increasingly delivered over IP. And, just like an effective adaptive algorithm, we hope to make the transition as smooth as possible.

Comments

  • Comment number 1.

    Rosie

    You vastly over-simplify the problem, IMHO.

    In delivery of content from source to consumer, there are many aspects that must be included, not just the last mile tail. In what is essentially an un-signalled, "best-efforts" network, The Internet, you are never able to guarantee anything, and no amount of tinkering with the protocol, adaptive streaming, etc, etc, will fix that.

    As well as the DSLAM/head-end and last mile contention issues, there are other areas that you have to consider; the IX links, the server aggregation, even the disk I/O to the content store. Essentially, today's Internet is the bog-standard postal service - you have ZERO guarantee that your letter will EVER arrive when you drop it in the post box.

    The way that most people "work around" these issues today to stop the buffering is to throw bandwidth at the problem - we buy bigger and bigger tail circuits to out-run the bandwidth constraint. But, this is false economy, as it is NOT at the last-mile where things are inherently congested.

    Think back to the "0bit/s CIR" Frame-Relay circuits sold in the late 80s - all the time the core was not congested, you could buy a contract for nothing and get an acceptable throughput. The hens came home to roost when the SPs oversold the core and the investment didn't go in - when customers complained that "my throughput is tending to zero" they were told "you bought ZERO !!! Be Happy !"

    So, if I want to watch an HD video at 3.4Mbit/s, then all I really need is a 3.4Mbits + overheads connection FOR THE DURATION OF THE VIDEO. I don't need anything else. Every application that is "on" at that time will just require what b/w it needs. I don't NEED to pay for a 10/20/50/100 Mbit/s tail pipe. It is just waste. But my video has different requirements of the network than the VoIP call I'm on (102kbit/s queued at EF class for a G.711 call on PPPoA) or my email (don't give a monkeys - it can wait).

    BUT, and here is the key, when I want that 3.4Mbit/s tail, I need it from my PC/Mac/whatever, at a guaranteed delay, jitter, latency and QoS between the content source and the consumer. The session needs to be admitted, end-to-end. This can be done - protocols such as RSVP and the DiffServ/IntServ model will allow that, but there has to be intelligence across domains to reserve, police and admit that session.

    Think of an analogy. We all know what happened in the 60s and 70s with the road network - successive governments threw money at it, and they could not keep up with demand. The result - a hugely congested and wasteful network. But look at the railways - very little investment in track over the 150+ years, but vastly improved THROUGHPUT, primarily due to signalling. Look at the Jubilee Line that was moved from lamp signals to moving block - the throughput on the same track was increased four-fold.

    And how about MULTICAST !!!! Again, this has the ability to vastly reduce the amount of redundant traffic out there, but virtually no ISP has adopted it, and there are few applications that use it.

    The key to this is to realise that what was built in the Internet is not fit for purpose - it was designed and built in the US to support text-based, best-efforts communication for the military, and we are trying to use it for real-time, delay sensitive applications for consumers. It needs a drains-up redesign.

    Coupled with that will be a change in the way people pay for the service - why should I pay for a last-mile tail when what I really want to pay for is the service? That is the same with all other transport networks (rail, road, post, electricity, water), where I have a contract with the supplier of the CONTENT, not the transport provider.

    From my user profile, you can probably work out where I work, and trying to solve this problem is my day job. But, WE (and I !!!) need content providers such as the Beeb pushing this with the regulators and politicians. In fact, HDS and aligned technologies may make things worse, not better - with RTMP, at least I can DPI or classify based on port number. With HDS, it's just HTTP - I don't know whether this is a video that needs to be queued at AF42, or just a web page that needs to go at Best Efforts.

    (BTW, you need to think carefully about TCP - global sync on TCP plays havoc with throughput. SCTP is probably where we want to go, IMHO)

    We need to join the Application, Network and Client together. The Client needs to be able to signal the Network to deliver a transport connection to its requirements, much like you will decide whether to send a letter First, Second, Registered or Recorded (and pay accordingly).

    Regards

    David Lake

  • Comment number 2.

    Thanks Rosie - I was asked by my 9 year old son how TV/YouTube/BBC iPlayer etc gets onto my phone and why the picture is not the same as on TV.

    Would you say that adaptive streaming applies to mobile handsets? If so, what's the differentiator between my handset and my partner's? Both are on the same network (which has, in my honest opinion and in my area, a second-to-none mobile broadband/internet data rate) and are of comparable specification, but his picture is better (I believe the screen resolutions are the same).

    Do you know if certain phones handle streaming with data buffers of varying size? Apologies for the novice questions - I want to provide a well thought-out answer that's easy to understand.

    Hope you can help.

    Kind regards
    Suzi

