Content Delivery Networks (CDNs) are geographically distributed networks of proxy servers and data centers (caches). Emerging in the ’90s as a way to ease some of the bottlenecks in media delivery (websites, video, etc.), CDNs work by caching information.

To summarize the way a CDN works:

Every time website visitors and application users request content (e.g., play a video, open a blog post, enlarge a picture), a CDN serves the data from the server closest to them rather than from the origin server where the content is hosted. This is done through edge servers that sit between the end user and the origin infrastructure. These servers are distributed geographically across points of presence (POPs) to keep them close to end users.

If the edge server doesn’t have the requested content stored in advance, the assigned caching node requests it from the origin or from other servers in the network. That traffic often crosses Internet exchange points (IXPs), the primary locations where different Internet providers connect in order to give each other access to traffic originating on their respective networks. The vast majority of this content is transferred over HTTP or its secure variant, HTTPS.
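The lookup-then-fallback flow described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`cache`, `fetch_from_origin`, `edge_request` are all invented for this example), not any CDN's actual implementation:

```python
import time

# Hypothetical in-memory edge cache: URL -> (content, expiry timestamp)
cache = {}

def fetch_from_origin(url):
    # Placeholder for a real HTTP request back to the origin server.
    return f"content for {url}"

def edge_request(url, ttl=60):
    """Serve from the local cache on a hit; fall back to the origin on a miss."""
    now = time.time()
    entry = cache.get(url)
    if entry and entry[1] > now:
        return entry[0]               # cache hit: served locally, no origin trip
    content = fetch_from_origin(url)  # cache miss: extra round trip to origin
    cache[url] = (content, now + ttl)
    return content
```

The second request for the same URL within the TTL is served from the edge, which is the whole point of the architecture.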

While the CDN model still works well for webpage and VOD/OTT delivery, it was never a good approach for live video streaming. In short, using a CDN for live streaming video is broken. This post will discuss that further and present a better way to stream video.


High Latency

Most CDNs use HTTP-based protocols such as HLS and MPEG-DASH for live video streaming. As we have covered before, this inherently causes high latency, which has an extremely negative effect on live video. How can it be live if there’s a delay?
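A rough back-of-the-envelope shows where that delay comes from. HLS players commonly buffer about three segments before starting playback, so latency grows with segment duration. The buffer depth and the two-second encode/network allowance below are assumptions for illustration, not measured figures:

```python
def hls_latency(segment_seconds, buffered_segments=3, encode_and_network=2.0):
    # Players typically buffer ~3 segments before playback begins, so
    # latency is at least segment duration times buffer depth, plus an
    # assumed allowance for encoding and network transfer.
    return segment_seconds * buffered_segments + encode_and_network

print(hls_latency(6))  # 6-second segments -> 20.0 seconds
print(hls_latency(2))  # 2-second segments -> 8.0 seconds
```

Even with short segments, segmented HTTP delivery lands in the multi-second range, far from real time.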

If HTTP causes latency, why would CDNs still use it?

Quite simply, HTTP delivery forms the basis of the internet as we know it. An interconnecting series of Internet Service Providers (ISPs) serves as the basic structure of the internet, and the larger infrastructure built up over the years to support those ISPs added HTTP functionality to deliver all sorts of media.

It’s important to remember that the internet was first developed to send static messages such as email. Subsequent development added images, which evolved further into small collections of images: animations or moving pictures. Video as we know it today came along later.

While HTTP is well suited to sending static data, it is a poor fit for dynamically generated content such as live streaming video.

HTTP delivery is effective for CDNs because the video is delivered in chunks. Combined with the fact that HTTP is stateless, this means a chunk can come from any server in the network and the video player client will still be able to receive it. Accordingly, when a request comes in, it can be routed through any available POP for delivery. Because of the way HLS and MPEG-DASH work, players can receive video chunks from different servers and still assemble a complete stream.
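The reason any server can satisfy a request is visible in the playlist format itself: an HLS media playlist is just a text file listing segment URIs, each an ordinary HTTP resource. A simplified sketch (real playlists carry many more tags than this parser handles):

```python
def parse_media_playlist(m3u8_text):
    """Extract segment URIs from a (simplified) HLS media playlist."""
    return [line for line in m3u8_text.splitlines()
            if line and not line.startswith("#")]

playlist = """#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
seg100.ts
#EXTINF:6.0,
seg101.ts
"""

segments = parse_media_playlist(playlist)
# Each segment is a plain HTTP resource; the player can fetch each one
# from whichever edge server it is routed to and still play them back
# in order.
```

Because every segment is independently addressable, no one server has to hold the whole stream.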

Furthermore, this infrastructure makes load balancing possible through the Domain Name System (DNS). This process ensures that multiple servers can route data to the correct subscriber. As such, HTTP delivery is very scalable because it does not depend upon any single server; each POP is capable of delivering the necessary data.
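DNS-based load balancing in its simplest form is round-robin rotation over the address records for one hostname. The addresses and hostname below are made up for illustration; real CDNs layer geo-selection and health checks on top of this idea:

```python
import itertools

# Hypothetical A records for one CDN hostname, each pointing at a POP.
a_records = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
rotation = itertools.cycle(a_records)

def resolve(hostname):
    # Simplified round-robin: each lookup hands back the next POP address,
    # spreading clients across servers with no coordination needed.
    return next(rotation)
```

Successive lookups cycle through the POPs, which is why no single server becomes a bottleneck.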

However, as mentioned earlier, HTTP delivery is not good for live video streaming: the high number of requests going back and forth for chunked delivery, along with other factors, creates high latency.


Fragmented

Another large issue with CDNs is distributing streams to different regions or countries. Since TCP-based HTTP delivery suffers from high latency and queued packets, data centers need to be physically close to the subscriber for the best performance. This creates a situation where different CDN providers perform better or worse depending upon where they are trying to stream.

Despite their best efforts, there isn’t one universal CDN with a data center in every possible city. As in the telecom industry, competing companies have jockeyed for control of different regions. Additionally, regions such as China and South America present delivery challenges of their own. Accordingly, multiple CDNs may need to be linked together for effective content delivery.


Managing Multiple CDNs

This leads to the further issue of managing a complex network of multiple CDNs. Content providers face the challenge of getting all of their content stored with the different CDN providers. Delivering the same stream across different areas also incurs separate charges from different CDNs, resulting in more complicated billing. It also means that companies streaming their content must manage multiple CDN implementations, each with its own APIs and systems.

The CDN marketplace has become so complicated that third party companies have emerged to help address this issue. Peer5 recognized the inefficiencies in this system and created a way to optimize across all the CDN networks. Their technology monitors a variety of different CDN platforms and determines which ones will deliver the requested content most effectively.


Caching Doesn't Work for Live

One of the things that helps CDNs work well is that they cache information to make load times faster and media access easier. The approach is to store content in caches on different web servers so clients can grab the requested data chunks as needed.

When a request is made for a data chunk the nearest cache may not have it, or it may have expired. In that case, the edge server makes a request to the origin server to retrieve the requested information. The origin server is the original source for content and is capable of serving all of the content that is available on the CDN. When the edge server receives the response from the origin server, it stores the content in the cache based on the HTTP headers of the response.
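The "based on the HTTP headers" step usually means honoring `Cache-Control`. A simplified sketch of deriving a cache lifetime from a response (real CDNs also weigh `Expires`, `s-maxage`, `ETag` revalidation, and vendor-specific overrides, none of which this toy handles):

```python
def cache_ttl(headers):
    """Derive a cache lifetime in seconds from response headers.

    Simplified: only honors Cache-Control max-age and no-store.
    """
    cc = headers.get("Cache-Control", "")
    directives = [d.strip() for d in cc.split(",")]
    if "no-store" in directives:
        return 0                        # origin forbids caching entirely
    for d in directives:
        if d.startswith("max-age="):
            return int(d.split("=", 1)[1])
    return 0  # no explicit freshness: don't cache, in this sketch
```

A live stream's freshest segments effectively expire the moment they are cached, which is where the model starts to strain.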

As mentioned earlier, this entire process adds latency, even when the nearest cache holds the required data chunk in the first place. The entire CDN infrastructure depends on those caches, which will always produce too much latency for live streaming.


Not Sustainable

Now that the CDN industry has matured, innovation is stagnating. Adding more features, such as security, doesn’t really have a large impact on value. The only substantial way providers can compete with each other is on price, since the CDN model treats data essentially as a commodity. The result is a race to the bottom as companies undercut each other, and eventually themselves, down to practically zero.

In fact, pricing on very large deals is now down to an all-time low of $0.001 per GB delivered, which leaves very little room for profit based on traffic alone. Considering that prices continue to fall, that does not bode well for CDN companies.

This is where the flexibility of HTTP chunked delivery has come back to bite the CDN industry. Stateless delivery means that it doesn’t matter which CDN network delivers the requested data chunk as it will all be played in sequence.

This means that, unlike with telecom companies, CDN customers are not locked into a specific provider. All CDNs run on the same general delivery methods powering the internet itself, so any CDN can deliver the containers of code and data content that make up websites or video files. Telecom providers, by contrast, built their own independent networks: you can’t send a message on your AT&T phone over a Verizon cell tower.


Too Big to Change

Technology is a fast-moving marketplace, and the infrastructure that drives it should move just as quickly.

It has taken a long time for CDNs to develop their networks, which breeds resistance to change. There is the physical infrastructure they have created to support their business model, including all the personnel training that went into it. That infrastructure and training produce an established mindset that entrenches a specific methodology.

Furthermore, there is the bureaucratic issue of crossing the chasm. The cloud infrastructure model that live video delivery is moving towards is the complete opposite of the physical data center CDN model. Moving to a cloud or decentralized model would require duplicating everything that they are already doing. That’s a large amount of work that is all but impossible for deep-rooted organizations to undertake.


The Agility of WebRTC + Red5 Pro

Fully entrenched CDNs lack the agility to pivot to better methods for live video delivery. Other technologies and methodologies have emerged that are better suited to modern video streaming.

Accordingly, Red5 Pro integrated WebRTC to deliver sub-500-millisecond latency to millions of concurrent users. With such a low-latency protocol, the distance between the broadcast origin and subscribing edge servers matters much less. In fact, you can watch a demo with a round-trip latency between Paris and New York City of just 180 milliseconds. Of course, if that isn’t enough for you, try it out for yourself.

Rethinking the average deployment model, Red5 Pro broke away from fixed data centers by leveraging cloud infrastructure to dynamically meet scaling needs. This avoids the massive infrastructure costs associated with traditional CDNs and allows for automatic upscaling and downscaling as needed. With cloud providers, there is no need for running more servers than are currently needed.  

The innovation doesn’t stop there, of course. Red5 Pro is currently working on a new kind of delivery system using decentralized nodes. More on that to come!

You can also email info@red5pro.com for more information or schedule a call to talk in real-time about real-time live streaming.
