BBC

Research & Development

Posted by Stephen Perrott on , last updated

The BBC Research & Development Distribution team provides test content for the industry. Test content like ours is extremely important in testing HTTP-based adaptive bitrate systems such as MPEG DASH and HLS, where the client controls the presentation of the stream based on a 'manifest' retrieved from the content provider's server. Incorrect interpretation of the manifest can lead to viewers getting the wrong thing (for example, audio description when they didn't want it), and erroneous requests can create problems for serving infrastructure or even for other clients.

Below are details of some recent enhancements that we’ve made to our test content, including adding live versions.

We now have these variants:

  • A one-hour VOD stream
  • A standard live stream mimicking a continuous simulcast channel
  • A low-latency live stream
  • A set of webcasts – live streams that have a beginning and an end

A screenshot of the test card that can be viewed on the stream.

CMAF

The media has been updated to be fully conformant to CMAF – the Common Media Application Format published by MPEG (the Moving Picture Experts Group, a joint ISO and IEC committee). We were closely involved in the development of the CMAF specification and are keen to promote the interoperability it offers. CMAF allows media to be interoperable between the MPEG DASH (Dynamic Adaptive Streaming over HTTP) and HLS (HTTP Live Streaming) streaming protocols. The BBC uses MPEG DASH for the majority of our streaming clients, and HLS to stream to Apple devices. With this change, we have included both MPEG DASH and HLS manifests, referencing the very same media segments.
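As a rough sketch of how the two manifest formats can point at the same CMAF media (all filenames and durations here are hypothetical, not those of the actual streams), an HLS media playlist references fMP4 segments via an EXT-X-MAP initialization segment:

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="avc-init.mp4"
#EXTINF:3.840,
avc-000001.m4s
#EXTINF:3.840,
avc-000002.m4s
#EXT-X-ENDLIST
```

A DASH MPD could then address the very same files with a SegmentTemplate such as `media="avc-$Number%06d$.m4s"` and `initialization="avc-init.mp4"`, so both protocols share a single set of segments on the server and in caches.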

More about the live streams

The live streams are designed to test clients’ ability to make requests within the advertised segment availability times – early or late requests are clearly identified in the media segments served. Incorrectly operating clients can increase server load and pollute caches with 404 (Not Found) responses, which can further impact other clients. Most existing test content doesn’t check for this type of client-server interaction, meaning problems can only be detected by analysing server logs or intercepting network traffic.
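The availability check described above can be sketched as follows. This is a minimal illustration with hypothetical parameter values, not the actual addressing scheme of the streams: it assumes simple $Number$-based DASH addressing, where a segment becomes available once the server has assembled it completely.

```python
from datetime import datetime, timedelta, timezone

def segment_window(availability_start, seg_number, seg_duration_s,
                   buffer_depth_s, start_number=1):
    """Availability window for a numbered segment (illustrative sketch).

    With $Number$ addressing, segment N covers media time
    [(N - startNumber) * d, (N - startNumber + 1) * d). The server
    assembles the whole segment before publishing it, so it becomes
    available at availabilityStartTime plus the end of its media time,
    and stays available for the time-shift buffer depth.
    """
    media_end = (seg_number - start_number + 1) * seg_duration_s
    start = availability_start + timedelta(seconds=media_end)
    end = start + timedelta(seconds=buffer_depth_s)
    return start, end

def classify_request(now, window):
    """Label a request the way the test streams flag it in-vision."""
    start, end = window
    if now < start:
        return "early"   # would pollute caches with 404s
    if now > end:
        return "late"    # segment has left the time-shift buffer
    return "ok"

ast = datetime(2024, 1, 1, tzinfo=timezone.utc)
win = segment_window(ast, seg_number=10, seg_duration_s=3.84, buffer_depth_s=300)
classify_request(ast + timedelta(seconds=30), win)   # "early": segment 10 is ready at +38.4s
```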

An in-vision warning showing that a segment has been requested late.

Segments requested at the wrong time have an in-vision (or sound) notice.

In-vision and in-audio times (in GMT) let the user see the latency being achieved through the delivery chain by comparing them to wall-clock time. The streams have availability times set as if there were no encoding delay (though the server still assembles complete segments before making them available).
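The comparison amounts to a simple subtraction; a minimal sketch with hypothetical timestamps:

```python
from datetime import datetime, timezone

def delivery_latency_s(in_media_time, wall_clock):
    """Seconds between an event and the viewer seeing it: the gap
    between the GMT time burned into the media and the wall clock."""
    return (wall_clock - in_media_time).total_seconds()

# Hypothetical example: the picture shows 12:00:00 while the viewer's
# clock reads 12:00:06, so the delivery chain adds 6 seconds.
shown = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
now = datetime(2024, 1, 1, 12, 0, 6, tzinfo=timezone.utc)
delivery_latency_s(shown, now)   # 6.0
```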

Low latency

We continue to research low-latency streaming, both by working with standards bodies and by creating prototypes. Low-latency techniques allow streaming to achieve a latency similar to broadcast, where latency means the time between something happening and the audience seeing or hearing it.

We have now created a new low-latency version of our live test card stream. This uses the same long-GOP (Group of Pictures) encoding, but with each segment formed of 4 CMAF chunks. The MPD (DASH Media Presentation Description) indicates that the client can request segments early, and if it does, the server sends each chunk as it becomes available. Within each MPD is a ServiceDescription element which sets the target latency for the client, as well as bounds on the playback rate when catching up to that target. We have set the target to a value we think is achievable under typical network conditions by clients optimised for low-latency playback. When combined with an encoding delay, this would allow DASH streaming to achieve parity with broadcast latencies. We serve the streams through a CDN, mirroring the distribution we use on our main services to deliver to millions of simultaneous viewers. As with the other streams, in-media time announcements allow latency to be measured against wall-clock time, as well as confirming synchronisation between different components.
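For illustration, the MPD elements involved look roughly like this. All of the numbers below are made-up examples, not the values used by the actual streams:

```xml
<!-- Illustrative MPD excerpts (hypothetical values) -->
<ServiceDescription id="0">
  <!-- Latency in milliseconds; PlaybackRate bounds the catch-up speed -->
  <Latency referenceId="0" target="3500" min="2000" max="6000"/>
  <PlaybackRate min="0.96" max="1.04"/>
</ServiceDescription>

<!-- A 3.84s segment of four 0.96s chunks may be requested
     availabilityTimeOffset = 3.84 - 0.96 = 2.88s early -->
<SegmentTemplate media="avc-$Number%06d$.m4s" initialization="avc-init.mp4"
                 duration="3840" timescale="1000" startNumber="1"
                 availabilityTimeOffset="2.88" availabilityTimeComplete="false"/>
```

The availabilityTimeOffset lets a client request a segment as soon as its first chunk exists, with the server delivering the remaining chunks as they are produced.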

Webcast streams

In addition to our live streams, we provide a set of 'webcast' streams – live streams that have a defined beginning and end. We have generally found that there isn’t much content available that can be used to verify that clients behave correctly with such streams. Correct handling improves the audience experience by ensuring a clean start and end to playback, as well as avoiding repeated re-requests for segments that don’t exist. Our new streams are advertised 30 minutes in advance with a manifest showing an availabilityStartTime in the future. This manifest initially describes a continuous stream without a finite duration; as the stream approaches its end, the manifest updates to include a duration, and inband events are inserted into the media, signalling a manifest update. Properly implemented clients follow this signalling and stop at the signalled point. Media requested after the end of the stream will be served, but will clearly indicate in the presented media that it shouldn’t have been requested or played. A new webcast starts every 5 minutes, and each lasts for just over 12 minutes.
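A sketch of the schedule this timing implies, assuming (for illustration only) that starts are aligned to 5-minute boundaries from midnight UTC and that the duration is exactly 12 minutes:

```python
from datetime import datetime, timedelta, timezone

WEBCAST_INTERVAL = timedelta(minutes=5)    # a new webcast starts every 5 minutes
WEBCAST_DURATION = timedelta(minutes=12)   # "just over 12 minutes", approximated here

def live_webcasts(now):
    """Start times of webcasts still in progress at `now`."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    latest = int((now - midnight) / WEBCAST_INTERVAL)   # index of most recent start
    starts = []
    for n in range(latest, -1, -1):
        start = midnight + n * WEBCAST_INTERVAL
        if start + WEBCAST_DURATION <= now:
            break   # this one (and all earlier ones) has already ended
        starts.append(start)
    return starts

noon = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
len(live_webcasts(noon + timedelta(minutes=13)))   # 2 webcasts overlap at that moment
```

With these figures, two or three webcasts are always in progress at once, which also exercises clients' handling of overlapping availability windows.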

Languages and access services

Finally, we have created new representations with audio and subtitles in different languages, as well as ones containing audio description. We have added all of these to some manifests, giving a full list of options to aid the development and testing of clients with track-switching capabilities. Other manifests contain a single language for clients without those abilities.

We believe these updated test streams provide both a useful testing resource for the industry to improve interoperability, and a demonstration of the capabilities of the streaming protocols.
