What’s Next For RTMP Servers: 4 Considerations



RTMP (Real-Time Messaging Protocol) has a long-established history as one of the original methods for live streaming. Originally developed by Macromedia and later acquired by Adobe, RTMP was designed for delivering on-demand and live media (i.e., live audio, video, and data) over the Internet between a Flash player and a media server. However, there are big changes coming for Flash… in that it’s going away.

2020 marks the last year of official Flash support. By losing Flash, we also lose the ability to run RTMP in internet browsers. That leads to the question: what’s next for RTMP servers?


Latency is Critical

Quite simply, RTMP will be replaced by modern protocols. However, because RTMP set the bar for low latency video delivery, any replacement needs to focus on latency too. This makes latency a critical consideration.

Furthermore, low latency is necessary for creating high-value streaming applications. As we’ve discussed before, live streaming should be live, and that means delivering media with the lowest possible delay.

The only way to have truly interactive experiences with natural conversation is with the lowest possible latency. Additionally, ever-increasing smartphone adoption is further fueling the demand for real-time latency. As more and more users expect high speeds, the products they use will reflect that expectation.

Now that we’ve established the need, how do we actually achieve sub-second latency? We can approach it from both sides of a live stream: ingest (broadcast) and egress (subscribe).


Ingest

Both SRT (Secure Reliable Transport) and WebRTC are good options for ingesting live streams with minimal latency.

SRT is an open-source video transport protocol and technology stack that optimizes live stream delivery through firewalls and across unreliable networks. As SRT was originally designed for hardware encoders and has yet to be adopted as a Web standard, it’s not available in modern browsers.

On the other hand, WebRTC does work in browsers. Its tech stack allows for camera and microphone access on various devices. By using an efficient UDP-based transport known as SRTP, WebRTC is able to transport video with the lowest latency currently possible. It can also maintain high video quality, even in less than ideal network conditions.
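As a rough sketch of what browser-side WebRTC ingest looks like: the page captures the camera and microphone, attaches the tracks to a peer connection, and exchanges an SDP offer/answer with the server. The `sendOfferToServer` helper here is hypothetical; real servers expose their own signaling API (a WebSocket channel, WHIP, etc.).

```javascript
// Hypothetical sketch of browser-side WebRTC ingest (publish).
// sendOfferToServer is an assumed signaling helper, not a real API:
// it should deliver our SDP offer and return the server's SDP answer.
async function publishStream(sendOfferToServer) {
  // Capture camera and microphone via getUserMedia.
  const media = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const pc = new RTCPeerConnection();
  // Attach each captured track so it is sent to the server over SRTP.
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  // Create and apply the local SDP offer, then hand it to the server
  // through whatever signaling channel the deployment provides.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answer = await sendOfferToServer(pc.localDescription);
  await pc.setRemoteDescription(answer);

  return pc;
}
```

A real deployment would also exchange ICE candidates and handle connection-state changes, but the capture-negotiate-send flow above is the core of it.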

In short, there are two replacements for RTMP ingest: SRT for hardware encoders and WebRTC for browsers.

Technically, HTTP-based protocols such as HLS or CMAF could be considered a replacement, but their multi-second latency makes them a poor fit for real-time live streaming video.


Egress

Not only can WebRTC perform the work of broadcasting (ingest), but it works for receiving video (egress) as well. Since WebRTC works natively in browsers, you can connect to an egress WebRTC server and consume a video stream over SRTP. All of the decoding (and encoding, for that matter) is performed in native code, so the stream can be rendered directly in the browser.
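The subscribe side mirrors the publish flow: the browser negotiates a peer connection with the egress server and hands incoming tracks to a video element for native rendering. A minimal sketch, again with hypothetical signaling helpers standing in for the server’s actual API:

```javascript
// Hypothetical sketch of browser-side WebRTC egress (subscribe).
// requestOfferFromServer / sendAnswerToServer are assumed signaling
// helpers; the real exchange varies by server implementation.
async function subscribeStream(videoElement, requestOfferFromServer, sendAnswerToServer) {
  const pc = new RTCPeerConnection();

  // Incoming SRTP tracks are decoded natively by the browser; we only
  // need to hand the MediaStream to a <video> element for rendering.
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  // Server-initiated offer/answer: apply the server's offer, then
  // answer back through the signaling channel.
  const offer = await requestOfferFromServer();
  await pc.setRemoteDescription(offer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  await sendAnswerToServer(pc.localDescription);

  return pc;
}
```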

Again, HTTP-based streaming protocols such as HLS or MPEG-DASH are capable of video egress, but they are not a practical choice for low latency streaming.

For more detail on how RTMP works, take a look at our other blog post.


Scaling Servers

No matter what protocol is used, it will need to handle multiple broadcasters and subscribers. For the best performance and distribution of streams, multiple server instances are necessary.

There is a common misconception that WebRTC is not scalable because it establishes peer-to-peer connections. Under a traditional peer-to-peer topology, that was technically true. However, some creative restructuring of the scaling infrastructure resulted in the cloud-based Red5 Pro Autoscaling Solution supporting millions of concurrent connections, all with under 500 milliseconds of latency.

Red5 Pro reimagined the entire architecture from back-end server to front-end browser and mobile app integration.

By leveraging cloud infrastructure, Red5 Pro’s Autoscaling Solution automatically scales the system up or down in response to current conditions. Under the operating logic of a Stream Manager (a Red5 Pro server application that manages traffic and monitors server usage), clusters, or NodeGroups (groups of one or more active server nodes), are established in the geographic regions where the streaming will be happening.
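From the client’s point of view, this architecture means asking the Stream Manager which node to use before connecting. The sketch below illustrates that lookup step only; the endpoint path and response shape are assumptions for illustration, not the actual Red5 Pro API (see the documentation for the real interface).

```javascript
// Hypothetical sketch: before publishing or subscribing, a client asks
// the Stream Manager which server node should handle a given stream.
// The /lookup path and { serverAddress } response are assumptions.
async function resolveStreamNode(streamManagerHost, streamName) {
  const response = await fetch(
    `https://${streamManagerHost}/lookup?stream=${encodeURIComponent(streamName)}`
  );
  if (!response.ok) {
    throw new Error(`Stream Manager lookup failed: ${response.status}`);
  }
  // Assume the manager replies with the address of the chosen node;
  // the client then opens its WebRTC connection to that node directly.
  const { serverAddress } = await response.json();
  return serverAddress;
}
```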

More detail can be found in the Red5 Pro Documentation. If you have any questions about scaling or other aspects of low latency live streaming video, please send an email to info@red5.net or schedule a call.