3 Problems with CDN Video Streaming and How We Solved Them



With video content currently accounting for just under 70% of all internet traffic (and creeping upwards), video streaming has never been more important. At the moment, much of that content is managed by Content Delivery Networks (CDNs). However, there are many shortcomings when it comes to CDN video streaming of live content.

Since CDNs require you to funnel all your content through their data networks, some streaming providers have found that they need to use multiple CDNs to reach different regions. That means additional complications from managing different systems, fragmented streaming, and even higher latency from adding more connections to deliver the stream.

This has driven many in the live streaming market to start switching over to multi-CDN solutions. In fact, the multi-CDN market is predicted to grow to $24 billion by 2025. Although a multi-CDN setup addresses some of the issues with a single CDN network (regional availability, price, etc.), in reality it is only a stop-gap solution for live video streaming. A pure WebRTC distribution service is now the best way to deliver real-time live streaming.

As such, pure CDN solutions are on the way out, at least when it comes to live video distribution. Here are three reasons why:


Latency

Built on HTTP architecture, CDNs are simply not equipped to handle the transport of dynamically updated content such as live video. They work by caching data in regional data centers, allowing for the efficient delivery of large amounts of data. This focus on throughput and scalability results in a network best suited to static objects such as websites or pre-recorded videos.

Caching adds latency, which matters little when delivering static elements such as webpages and VOD. But as live video experiences become more interactive, they depend increasingly on low latency delivery. Even a latency of just a second will negatively affect the user experience and utility of your application. It just can’t be live if it’s not streamed in real time.

To solve this latency issue we need to turn to a different protocol: WebRTC. WebRTC was designed around low latency streaming and can deliver live video with an end-to-end latency of less than 500 ms. This is much faster than HLS delivery, which, even when modified, can only get down to 2-3 seconds at the very lowest. Accordingly, pure WebRTC services are projected to grow from around 1.2% to 8.3% of total multi-CDN traffic.
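The gap comes down to how each protocol moves media. A back-of-envelope sketch makes it concrete (the numbers below are illustrative assumptions, not measurements of any specific service):

```javascript
// Rough glass-to-glass latency budgets (illustrative numbers only).
// HLS latency is dominated by segment duration: players typically buffer
// several whole segments before playback starts.
function hlsLatencyMs(segmentDurationMs, bufferedSegments) {
  return segmentDurationMs * bufferedSegments;
}

// WebRTC ships individual frames as RTP packets, so its budget is network
// delay plus encode/decode time rather than whole buffered segments.
function webrtcLatencyMs(networkDelayMs, encodeDecodeMs) {
  return networkDelayMs + encodeDecodeMs;
}

// Tuned low-latency HLS: 2-second segments, 3 segments buffered.
console.log(hlsLatencyMs(2000, 3));     // 6000 ms
// WebRTC: assumed 150 ms network delay plus 100 ms codec overhead.
console.log(webrtcLatencyMs(150, 100)); // 250 ms
```

Even with aggressively short segments, the segment-times-buffer product keeps HLS seconds behind, while WebRTC's per-frame delivery stays in the sub-second range the source cites.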


One-Way Streaming

Beyond high latency, CDNs are designed around distributing data out to clients rather than receiving information back. As live experiences have become more interactive, integrating features such as Zoom calls, co-viewing, and fan wall experiences, the inability to stream content in multiple directions is a major detriment to the utility of CDNs.

The origin server is essentially used as an ingest point that pushes the stream into the CDN for delivery at scale. This works well for distributing data out from the origin to the edges, but not for streaming information in the opposite direction (back from the edges to the origin). Under this architecture, two-way communication is inefficient: a CDN is built for broadcasting a single stream to subscribers who only watch, not for a two-way chat in which each subscriber also broadcasts video of their own. Conversations go back and forth between parties, so both sides have to send and receive video. CDNs simply don’t offer this capability, leaving developers who want to build interactive video experiences stuck cobbling together disparate technologies in ways they were never intended to be used.
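The asymmetry also shows up in simple stream counts. A minimal sketch (a simplified model, not any vendor's topology) of one-way broadcast versus a full-mesh two-way conversation:

```javascript
// One-way broadcast: the edge fans one stream out to n viewers,
// so the stream count grows linearly with the audience.
function broadcastStreams(viewers) {
  return viewers; // one outbound stream per viewer
}

// Full-mesh two-way conversation: every participant both publishes and
// subscribes, so each ordered pair needs its own media stream.
function fullMeshStreams(participants) {
  return participants * (participants - 1);
}

console.log(broadcastStreams(1000)); // 1000 one-way streams
console.log(fullMeshStreams(10));    // 90 streams for a 10-person chat
```

The quadratic growth of the conversational case is exactly what an HTTP cache hierarchy was never designed to carry, and why a server that can terminate bi-directional connections (as a WebRTC peer can) is needed instead.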

In the CDN model, the requested data travels from the origin to the edge. The nearest edge server then has to establish an individual connection to each client trying to access the stream. This is known as “the last mile”, and it is a major source of bandwidth consumption for a CDN video streaming solution. Some networks have found ways around this issue to reduce their data transmission costs.

Some providers employ WebRTC to boost CDN capacity. As much as 70% of peak traffic can be offloaded using WebRTC, helping CDN suppliers avoid infrastructure upgrades and enabling CDN resellers to do more with their existing budgets.

For example, Peer5, StreamRoot, and StriveCast have created peer-to-peer sharing networks to shift overall bandwidth consumption off the CDN. Rather than streaming all the content one-to-one from edge to client, they create data channel connections between all the clients streaming the same files. The video is sent from the origin to the edge server using the efficient, chunked HLS protocol. Once a subscriber is pulling those HLS (.ts) segments, it can establish a P2P connection over the WebRTC data channel and relay the segments to a peer. That peer can in turn connect to another peer, and the process repeats so that all of them share the same video files. Each subscriber therefore doesn’t have to redundantly pull every segment from the CDN, which would charge for that data transfer.
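The offload logic can be sketched in a few lines. This is a toy model of the scheme described above — the names and structure are illustrative assumptions, not any of these vendors' actual APIs:

```javascript
// Toy model of P2P segment offload: a peer asks the swarm for each HLS
// segment first and falls back to the CDN edge only on a cache miss.
function fetchSegments(segmentIds, swarmCache) {
  let cdnFetches = 0;
  for (const id of segmentIds) {
    if (!swarmCache.has(id)) {
      cdnFetches++;        // miss: pull from the CDN edge (billed transfer)
      swarmCache.add(id);  // the segment is now relayable to other peers
    }
    // hit: relay over a WebRTC data channel at no CDN cost
  }
  return cdnFetches;
}

const swarm = new Set();
// The first viewer pulls everything from the CDN...
console.log(fetchSegments(["s1", "s2", "s3"], swarm)); // 3
// ...later viewers of the same segments cost the CDN nothing.
console.log(fetchSegments(["s1", "s2", "s3"], swarm)); // 0
```

The more viewers watch the same content, the closer the swarm gets to the kind of peak-traffic offload figures cited above — which is precisely why the approach works so well for popular VOD files.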

While these peer-to-peer mesh networks are effective for VOD delivery, they are not effective for low latency live streaming. First, they still use HLS segments as the source of the streams, which results in problematically high latency. Second, this mesh-style network does not address the issue of two-way streaming. Meanwhile, there is another emerging class of pure WebRTC-based providers who don’t use CDNs at all and have, in fact, become a complete replacement for the CDN altogether. Leading this technological approach is the software we’ve developed here at Red5 Pro.

Rather than fixed data centers, Red5 Pro leverages cloud infrastructure to dynamically deliver live video. This avoids the massive infrastructure costs associated with traditional CDNs and allows for automatic upscaling and downscaling as needed. Furthermore, the edge server acts as a peer in the P2P connections established between the server and the client. In this way a bi-directional connection is established while remaining scalable and efficient.

Red5 Pro also supports the ingress and egress of live streams from a variety of publishers while supporting a wide range of media players. Gaining access to cameras and being able to send that stream out adds important versatility. Our GitHub page features demos demonstrating the multi-directional flow, built with our HTML5 SDK for webapps as well as our mobile SDKs.

Since we support a variety of cloud platforms, this is similar to a multi-CDN approach (although we still only use WebRTC for video delivery). However, we are currently working on a new kind of delivery system using decentralized nodes that can run anywhere. More info on that will be coming out in the near future.


Synchronization

Real-time latency also unlocks the ability to synchronize additional data streamed alongside the video. This opens the door to chat functions, live overlays and interactive graphics, virtual chalkboards, live bets and auction bids, GPS data, and much more. For example, a sports broadcast could feature a real-time graphic display that stays up to date with everything happening on screen. Correct synchronization, paired with real-time latency, also prevents annoying spoilers from leaking out, ensuring that tweets or texts don’t ruin the excitement for others, and it makes sure that comments in a chat align with what is currently being shown.

For these use cases, the data can be sent over the WebRTC data channel or over separate WebSocket channels, which Red5 Pro has implemented with our SharedObjects feature. SharedObjects manage data feeds across multiple clients, allowing for the consistent transfer of data. This ensures full interactivity between broadcaster, subscriber, and any extra features.
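The pattern behind this is a shared store that fans every property change out to all connected clients. The sketch below mirrors that idea in a simplified, in-memory form — it is an illustration of the concept, not the actual SharedObjects API:

```javascript
// Simplified shared-object pattern: one store broadcasts every property
// change to all connected clients so their state stays consistent.
class SharedObjectModel {
  constructor() {
    this.data = {};
    this.clients = new Set();
  }
  connect(client) {
    this.clients.add(client);
    Object.assign(client.state, this.data); // sync current state on join
  }
  setProperty(key, value) {
    this.data[key] = value;
    for (const client of this.clients) {
      client.state[key] = value; // fan the update out to every client
    }
  }
}

const so = new SharedObjectModel();
const alice = { state: {} };
const bob = { state: {} };
so.connect(alice);
so.setProperty("currentBid", 100); // e.g. a live auction bid
so.connect(bob);                   // a late joiner still gets current state
console.log(alice.state.currentBid, bob.state.currentBid); // 100 100
```

In production the fan-out step travels over WebSockets or the WebRTC data channel rather than an in-process loop, but the consistency guarantee — every viewer sees the same bid, score, or chat state — is the same.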

You can find more examples on our GitHub page.

All this talk about the limitations of CDNs for live streaming may give you the impression that they should be replaced entirely by pure WebRTC solutions like Red5 Pro. In fact, they still serve a valuable role in video streaming: the CDN remains useful for delivering video-on-demand content as well as static objects such as websites and still images. When it comes to dynamically updated content such as live video streams, however, CDNs will never be able to handle them well. Like many other areas of technology, the needs of the market have shifted and expanded. CDNs have tried to adapt, but their basic HTTP-based architecture creates high latency, a one-way streaming limitation, and issues with synchronization. In response to these problems, new models of live streaming architecture have emerged to solve them.

Interested in elevating your live streaming to the next level? Send an email to info@red5.net or schedule a call directly. We would love to help.