Why Periscope’s “Low-Latency” HLS Is Still Too High

How low can you go? Not low enough. At least not with HLS. The engineers over at Periscope have described their process for getting HLS latency down from 10 seconds to between 2 and 5 seconds using a technique they call Low-Latency HLS (LHLS). Two seconds? I mean, that’s kinda low. Sort of like your friend who does an "awesome" Darth Vader impression. It’s all well and good, but there’s only one James Earl Jones.

As we discussed in our last post, there are a great number of low-latency applications, and the demand for them is always growing. The basic assumption behind streaming video is that the subscribing device wants to receive every single data packet sent out. Depending upon a variety of factors (network conditions, hardware, etc.), this won’t always happen, so any missed packets will need to be retransmitted.

HLS, or HTTP Live Streaming, deals with this by allowing the client to select among a variety of quality streams depending on the available bandwidth. The broadcasting device encodes the stream into three renditions with different quality settings, and each rendition is further segmented into smaller chunks which are queued for playback. As network conditions fluctuate, the subscribing device loads the best segment for the current conditions. Normally, three of those segments would need to be buffered before the stream can begin flowing to the client. However, Periscope employs a faked prefetch to get around that.
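
For a concrete picture, a master playlist with three renditions might look something like this (the bitrates, resolutions, and URIs here are made up for illustration). The player simply requests segments from whichever rendition fits its measured bandwidth:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
high/index.m3u8
```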

Nonetheless, that still leaves the fact that HLS runs over stateless HTTP connections. Rather than maintaining a persistent connection, HTTP is designed to fetch a resource and close the connection when it’s done. This redundant process of opening and closing sockets over and over again causes extra overhead, and with that overhead comes, you guessed it, latency. That leaves HLS an inherently slow protocol. Rather than waste time optimizing it, shouldn’t you just start with a faster solution?
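
To see where that overhead comes from, here’s a rough sketch of the fetch cycle a naive HLS client runs, written in TypeScript against the standard fetch API. The playlist URL and the poll interval are placeholders, and a real player would track the playlist’s media sequence rather than re-reading every segment:

```typescript
// Rough sketch of a naive HLS polling loop (playlist URL is a placeholder).
// Every playlist refresh and every segment is its own HTTP request/response,
// and that per-request overhead is where the latency piles up.
async function pollHls(playlistUrl: string): Promise<void> {
  while (true) {
    const playlist = await (await fetch(playlistUrl)).text();

    // Segment URIs are the playlist lines that aren't #-prefixed tags.
    const segmentUris = playlist
      .split("\n")
      .filter((line) => line.trim() !== "" && !line.startsWith("#"));

    for (const uri of segmentUris) {
      // One more round trip per segment; a real player would also skip
      // segments it already has by tracking #EXT-X-MEDIA-SEQUENCE.
      const segment = await (
        await fetch(new URL(uri, playlistUrl))
      ).arrayBuffer();
      // Hand the segment bytes to the decoder/buffer here.
      console.log(`fetched ${uri}: ${segment.byteLength} bytes`);
    }

    // Wait roughly one target duration, then poll the playlist again.
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}
```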

Enter WebRTC: a platform built around reducing latency to the bare minimum. Forget going to the ground floor; we want to go lower, into the basement. Instead of breaking a stream up into channels and then further dividing those into separate chunks, WebRTC deals with the dropped packets themselves. Tsahi Levent-Levi explains how:

The difference here is that there is no retransmission (or at least not in the same sense). If a packet of media isn’t received, the device needs to manage things without it. This is done either by using Forward Error Correction or simply by Packet Loss Concealment – estimating to some extent what should have been in the missing packet.

If there’s not enough bandwidth available, or there’s more – WebRTC simply increases or decreases it to match, encoding the video stream in real time to fit to the available bandwidth.

The end result? We’re down from 5 to 30 seconds of latency to a few hundred milliseconds only.
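
For comparison, here’s a minimal sketch of what publishing looks like with the browser’s standard WebRTC API. The signaling step, which WebRTC deliberately leaves up to the application, is reduced to a comment:

```typescript
// Minimal WebRTC publish sketch using the standard browser API.
// Unlike HLS, there are no segments to buffer: media flows over a single
// peer connection, the encoder bitrate adapts in real time, and lost
// packets are covered by FEC or packet loss concealment, not re-fetched.
async function publishCamera(): Promise<RTCPeerConnection> {
  // Capture the local camera and microphone.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Attach each track; the browser handles encoding and congestion control.
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  // Create and apply the SDP offer. Delivering it to the remote peer (and
  // applying the answer via pc.setRemoteDescription) happens over whatever
  // signaling channel your application provides; WebRTC doesn't define one.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  return pc;
}
```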

Now we can get this low-latency party started. At Red5 Pro, we’ve made the setup much easier by integrating WebRTC into our platform. So instead of struggling with the WebRTC API, you can drop our browser-ready HTML5 SDK or our iOS- and Android-enabled mobile SDKs straight into your application. We’ve already set up the tables and chairs, so all you need to do is bring the people (and maybe some chips and salsa). Send your RSVPs to info@red5.net or give us a call. No one likes waiting around, so get in touch today!