5 Factors in Choosing WebRTC vs HLS

When it comes to choosing WebRTC vs HLS, which protocol delivers the best live streaming experience? Making the correct decision is imperative since the protocol determines how quickly the encoded video data will be transported across an internet connection.

WebRTC is a better choice. If that’s all you wanted to know then feel free to stop reading and go build your live streaming application using WebRTC. However, if you want to know why it’s the best choice — we’re betting you do — then read on.

After further analysis, we at Red5 Pro have identified five major factors that you should consider when choosing a protocol (factors that Wowza also happened to mostly get wrong): latency, scalability, multi-device compatibility, performance in poor streaming conditions, and security. Let’s dive into those details, starting with arguably the most important aspect of live streaming: latency.


Latency

The “live” part of live streaming hinges mostly upon latency. With the exception of VOD-type applications, the large majority of live streaming use cases require real-time latency of under 500 milliseconds. Anything above that causes a noticeable delay between when the video is captured and when it is seen by a subscriber. Effective live streaming creates interactivity, blurring the line between participating in a real-world event and experiencing that event virtually.

HLS was built on the long-established and deeply entrenched HTTP infrastructure, which led to the widespread use it currently enjoys. That same aging infrastructure is also why HLS incurs anywhere from 10 to 40 seconds of latency.
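A rough back-of-the-envelope calculation shows where that delay comes from: HLS players typically buffer several segments before starting playback, so segment duration dominates end-to-end latency. A minimal sketch (the numbers are illustrative defaults, not taken from any specific player):

```javascript
// Rough estimate of HLS glass-to-glass latency.
// Players commonly buffer ~3 segments before playback begins, so
// latency is dominated by segmentDuration * bufferedSegments, plus
// some fixed cost for encoding and network transfer (assumed ~2s here).
function estimateHlsLatency(segmentDurationSec, bufferedSegments, encodeAndNetworkSec = 2) {
  return segmentDurationSec * bufferedSegments + encodeAndNetworkSec;
}

console.log(estimateHlsLatency(6, 3)); // classic 6s segments -> ~20s behind live
console.log(estimateHlsLatency(2, 3)); // shorter segments -> ~8s behind live
```

Shrinking segments reduces the delay, which is essentially what the low-latency HLS variants described below do, but the segment-and-buffer model puts a floor under how far HTTP delivery can go.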

However, there are ways to modify HLS to decrease the latency. Apple has its own Low-Latency HLS (LL-HLS) implementation, which is similar to the open-source Low-Latency HLS (LHLS); both reduce latency to around two or three seconds. Though they decrease latency, neither enjoys the widespread compatibility of standard HLS. Furthermore, two to three seconds is still far too high for real-time interaction.

Looking to increase the compatibility of LL-HLS, in early 2020 Apple announced that it had dropped the HTTP/2 push requirement. Thus, it looks like the overall HLS spec will eventually support around three seconds of latency. While still not real-time, that’s certainly better than 40 seconds.

As a UDP-based protocol built to fully adapt to the modern internet, WebRTC delivers real-time latency of under 500 milliseconds, making it currently the only widely supported protocol that can provide real-time latency.


Scaling

Scaling WebRTC is a little bit harder than scaling HLS. However, that does not mean it can’t be done, especially considering that it has been done.

One such example of successful WebRTC based scaling comes from Microsoft.

In August of 2016, Microsoft acquired Beam, a WebRTC-based approach to live game streaming intended to solve latency issues and provide a better experience than the Twitch platform. A year later, Microsoft renamed it Mixer. Though Microsoft ultimately shut down the Mixer platform, it was because Mixer could not attract enough users, not because it could not support a large number of them. The popular gamer Ninja had one broadcast on Mixer that attracted over 85,000 concurrent viewers and 2.2 million total viewers over an 8.5-hour stream.

The initial difficulty in scaling WebRTC stems from the fact that it creates peer-to-peer connections, which can consume a great amount of CPU resources. When your hosting provider uses fixed data centers (such as a CDN), meeting an increase in demand means physically adding servers or increasing server capacity. This can be a problem if you hit higher-than-anticipated demand, or if you just need a little extra capacity, as you could end up paying for a much larger server than you need.

Of course, the main draw of using a CDN in the first place is that the CDN provider takes care of scaling for you. The problem, though, is that CDNs scale over HTTP, and that comes with a tremendous amount of latency.

This is why you need a clustering solution that works with WebRTC as a protocol; even better if it can autoscale on cloud infrastructure. This kind of autoscaling solution involves switching from the static, datacenter-based CDN model to a much more flexible cloud-based model. Server clusters can be set up to dynamically spin up new servers as network traffic increases and spin them back down once they are no longer needed. This alleviates the potential issue of paying for more capacity than you really need.

Red5 Pro’s WebRTC-supported Autoscaling Solution works by publishing a broadcast stream to an origin server. Subscribers requesting access to the broadcast connect to a separate edge server, which a stream manager matches to the correct origin. This architecture allows multiple edges to connect to the same origin server, so multiple servers can handle as many connections as needed while all drawing from the same broadcast stream. If the origin hits its capacity for connected edge servers, relay nodes allow the origin to connect with several groups of edges. Thus the system keeps spinning up new origins and edges to handle as many publishers and subscribers as needed.
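To illustrate the routing step, here is a hypothetical sketch of how a stream manager might pick an edge for a new subscriber. The function names, edge list shape, and capacity numbers are our own inventions for illustration, not Red5 Pro’s actual API:

```javascript
// Hypothetical stream-manager routing: send a subscriber to the
// least-loaded edge serving the requested origin, and spin up a
// fresh edge when every existing one is at capacity.
function pickEdge(edges, maxSubscribersPerEdge, spinUpEdge) {
  const candidate = edges
    .filter((e) => e.subscribers < maxSubscribersPerEdge)
    .sort((a, b) => a.subscribers - b.subscribers)[0];
  // No edge has spare capacity -> the autoscaler provisions a new one.
  return candidate ?? spinUpEdge();
}

const edges = [
  { host: "edge-1", subscribers: 950 },
  { host: "edge-2", subscribers: 400 },
];
const edge = pickEdge(edges, 1000, () => ({ host: "edge-3", subscribers: 0 }));
console.log(edge.host); // "edge-2" - the least-loaded edge with spare capacity
```

The real system adds health checks, geographic affinity, and relay tiers on top of this basic idea, but "route to spare capacity, provision when there is none" is the core of cloud-based autoscaling.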

In fact, this dynamic scaling model is similar to how Mixer built their solution. However, Mixer used bare-metal servers, which are not as flexible as a strictly cloud-based solution. If you really want to dive deep into the Red5 Pro approach to cloud-based scaling of WebRTC, we recommend reading our white paper on the subject.


Multi-Device Compatibility

Ensuring that your application can run on a variety of devices is certainly important. Whether it’s a mobile phone, laptop, or tablet, you need a full complement of browsers and platforms supported.

HLS has native support in mobile browsers (Safari on iOS and Chrome on Android). The only desktop browser with native support is Safari. Everything else requires a custom player implementation written in JavaScript. While there are plenty to choose from, including commercial offerings like JWPlayer and open-source solutions like hls.js, as of today very few players have been updated to support the new Low-Latency HLS protocol that Apple has introduced.

As a newer web standard, WebRTC is fully supported by the latest versions of all major browsers: Chrome, Safari, Firefox, Edge, and Opera. It runs natively in the browser without the use of a plug-in, and that includes mobile browsers on both iOS and Android. Of course, creating dedicated mobile apps with mobile SDKs is an option as well.


Performance in Poor Streaming Conditions

In terms of quality and performance, LL-HLS and WebRTC have similar features as they both can support transcoding and Adaptive Bitrate (ABR).

ABR allows the client to request a lower bitrate that is more appropriate to the connectivity it is experiencing at that moment, ensuring a smooth stream despite poor connectivity. HLS and its newer cousin LL-HLS both have ABR built right into the spec. This is accomplished through a master manifest file that lists the available variants. When the player detects that video isn’t being delivered quickly enough, indicating insufficient bandwidth, it can simply request one of the lower stream variants in the manifest and start downloading the new video segments at the lower bitrate.
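The mechanism can be sketched in a few lines: parse the BANDWIDTH attributes out of a master manifest and pick the highest variant the measured throughput can sustain. The manifest below is a made-up example, and real players use smoothed throughput estimates rather than a single measurement:

```javascript
// Pick the highest-bitrate HLS variant that fits the measured bandwidth.
const masterManifest = `#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8`;

function pickVariant(manifest, measuredBps) {
  const lines = manifest.split("\n");
  const variants = [];
  for (let i = 0; i < lines.length; i++) {
    // The URI for each variant is on the line after its #EXT-X-STREAM-INF tag.
    const m = lines[i].match(/#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)/);
    if (m) variants.push({ bandwidth: Number(m[1]), uri: lines[i + 1] });
  }
  // Highest bandwidth that still fits; fall back to the lowest variant.
  const fitting = variants
    .filter((v) => v.bandwidth <= measuredBps)
    .sort((a, b) => b.bandwidth - a.bandwidth)[0];
  return (fitting ?? variants.sort((a, b) => a.bandwidth - b.bandwidth)[0]).uri;
}

console.log(pickVariant(masterManifest, 3000000)); // "mid/index.m3u8"
```

The key point is that the switch is entirely client-driven: the server just serves whichever playlist and segments the player asks for next.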

With WebRTC, things are quite a bit different. WebRTC uses a single UDP connection, with video delivered over SRTP. That means the client can’t go and request different segment files, since there are no segment files to begin with. Instead, the approach is to make the multiple bitrate variants available at the edge server and let the client request the appropriate quality of video. The request itself travels over the RTCP channel, a bi-directional control channel for sending live information about the state of each peer in a WebRTC session. The specific message we listen for is REMB (Receiver Estimated Maximum Bitrate), which contains the bandwidth that the peer (in this case the subscriber client) is requesting. Based on that information, the edge server can respond by shifting to the stream that best fits the bandwidth requirement.
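The server-side decision can be sketched as follows. The ladder rungs and the 10% headroom factor are illustrative choices of ours, not values from any particular implementation:

```javascript
// Simplified edge-server reaction to an incoming REMB estimate:
// choose the highest ladder rung whose bitrate fits within the
// estimate, leaving ~10% headroom to reduce oscillation.
const ladder = [
  { name: "low", bitrate: 500_000 },
  { name: "medium", bitrate: 1_500_000 },
  { name: "high", bitrate: 4_000_000 },
];

function onRemb(estimatedBps, ladder, headroom = 0.9) {
  const budget = estimatedBps * headroom;
  const fitting = ladder.filter((r) => r.bitrate <= budget);
  // If nothing fits, fall back to the lowest rung rather than stalling.
  return fitting.length ? fitting[fitting.length - 1] : ladder[0];
}

console.log(onRemb(2_000_000, ladder).name); // "medium": 1.5 Mbps fits the 1.8 Mbps budget
```

Unlike HLS, this switch happens server-side on a continuous stream: the edge simply starts forwarding packets from a different variant, with no new request for files.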

As a side note, both HLS and WebRTC can rely on live transcoding of the streams to generate these multiple bitrate variants. Transcoding splits the stream into a ladder of quality levels (for example: high, medium, and low) so that users with strong connections can subscribe to the highest quality, while users with poorer connections can still watch.

While HLS is limited to ABR, WebRTC has additional features further improving quality and performance.

Given that WebRTC is a UDP-based protocol, one of its most critical features is NACK (Negative Acknowledgement), a mechanism for retransmitting critical packets. A bad network connection will likely cause the client to drop packets. Rather than trying to resend each and every packet, NACK identifies the ones that matter most and resends those, preventing the network from being further overloaded with redundant requests. This keeps the stream flowing and looking good even under poor network conditions, without the drawbacks of packet backups in TCP-based systems. Like REMB, NACK is a message type sent over the RTCP channel to the edge server, which is then responsible for re-delivering the critical packet. WebRTC also supports many other strategies for keeping stream quality high and ensuring efficient delivery of video, including FEC (Forward Error Correction), FIR (Full Intra Request), and PLI (Picture Loss Indication), all of which also work over the RTCP channel.
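The receiver-side bookkeeping behind NACK boils down to sequence-number gap detection. This is a deliberate simplification: real implementations also handle packet reordering tolerance and 16-bit sequence-number wrap-around:

```javascript
// Track RTP sequence numbers and report gaps to request via NACK.
// Simplified: ignores reordering tolerance and 16-bit wrap-around.
function makeNackTracker() {
  let highestSeen = null;
  return function onPacket(seq) {
    const missing = [];
    if (highestSeen !== null) {
      // Any sequence numbers skipped between the highest packet seen
      // and this one are presumed lost.
      for (let s = highestSeen + 1; s < seq; s++) missing.push(s);
    }
    if (highestSeen === null || seq > highestSeen) highestSeen = seq;
    return missing; // sequence numbers to put in a NACK feedback message
  };
}

const onPacket = makeNackTracker();
onPacket(100);
onPacket(101);
console.log(onPacket(104)); // [102, 103] - packets to request via NACK
```

In practice the sender also decides whether a retransmission is still useful; a packet from a frame that has already been displayed is not worth resending, which is where PLI and FIR come in instead.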


Security

Making sure that your data and streams remain protected is important as well. Preventing unauthorized users from creating streams, and encrypting streams so they can’t be intercepted, ensures that sensitive information doesn’t leak out.

As mentioned earlier, LL-HLS will be wrapped into the HLS spec. That means security features available with LL-HLS, such as DRM, token authentication, and key rotation, will eventually be implemented. However, those extra security features will have to wait until providers can configure them in their systems, and waiting on someone else for your security can be an issue.

HLS does have one prominent security feature in that it can be encrypted. WebRTC, by contrast, is encrypted by default, keeping your streams safe from unauthorized interception. Furthermore, features such as user authentication, file authentication, and round-trip authentication will further secure your streams.

As for DRM systems, in many circumstances the basic security provided by WebRTC is more than enough to protect your data. That means content owners and distributors can safely forgo the costs and hassles of contracting for DRM support, provided they have the legal latitude to do so.

While it’s clear that WebRTC fosters a better, more interactive live streaming experience, HLS remains a solid option for streaming; just not live streaming. WebRTC has lower latency, better browser compatibility, and enhanced security features compared to HLS. While scaling WebRTC can be a little more challenging, that concern is easily surmounted by using a cloud-based autoscaling solution such as Red5 Pro.

Curious about finding out more about WebRTC and how it can improve your live streaming? Send a message to info@red5.net or schedule a call.