Despite what we might have led you to believe, there's more to life than low latency. Obviously, there's scaling too.
Perhaps you are streaming out a live concert to millions of fans, or broadcasting an international political summit to thousands of concerned citizens from all across the world. Do you think a single, scrawny little server is going to handle all of that? Of course not.
So that's why you need to scale your application. Better yet, you can scale automatically with Red5 Pro's autoscaling cloud solution. When using Google or AWS hosting platforms, you can set up a cluster managed by a Stream Manager, which automatically adds or removes server nodes from a NodeGroup.
Whoa, whoa, whoa; back up. What does all of that mean?
A cluster is a group of one or more active servers that make real-time audio, video and/or data streams available for consumption.
Here is a visual representation of the autoscaling life cycle. As you can see, it generally consists of two types of operations: scale-out (expansion) and scale-in (contraction):
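To make the expansion/contraction idea concrete, here is a deliberately simplified sketch of a threshold-based scaling decision. This is purely illustrative, not Red5 Pro's actual algorithm, and the threshold values are made up for the example:

```python
# Illustrative sketch only: a simplified threshold-based autoscaler.
# The thresholds below are invented for demonstration purposes.
SCALE_OUT_THRESHOLD = 0.80   # expand when average node load exceeds 80%
SCALE_IN_THRESHOLD = 0.30    # contract when average node load drops below 30%

def scaling_decision(node_loads, min_nodes=1):
    """Return 'scale-out', 'scale-in', or 'hold' for a group of nodes.

    node_loads is a list of per-node utilization values between 0 and 1.
    """
    avg_load = sum(node_loads) / len(node_loads)
    if avg_load > SCALE_OUT_THRESHOLD:
        return "scale-out"   # add a node to absorb the surge in traffic
    if avg_load < SCALE_IN_THRESHOLD and len(node_loads) > min_nodes:
        return "scale-in"    # remove an underused node to cut costs
    return "hold"            # current capacity matches demand

print(scaling_decision([0.90, 0.85, 0.92]))  # scale-out
print(scaling_decision([0.10, 0.20]))        # scale-in
```

A real Stream Manager weighs live stream metrics rather than a single load number, but the expand/contract loop follows the same basic shape.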
And here is a representation of the Stream Manager Stream Operations illustrating the streaming lifecycle:
For more information, please see our documentation.
In autoscaling, a NodeGroup is a collection of server nodes. The nodes are Red5 Pro server instances that can be either edges or origins. Simply put, the origin accepts broadcasters, the edge accepts subscribers. This NodeGroup is managed by a Stream Manager (a Red5 Pro Server Application that manages traffic and monitors server usage). The Stream Manager works in real-time as it processes live stream information to add or remove servers depending on the current traffic demands.
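In practice, a client asks the Stream Manager which node to use before publishing or subscribing. The sketch below shows the general shape of that request; the endpoint path and response fields are modeled on Red5 Pro's Stream Manager REST API, but treat the exact names, host, and sample response here as illustrative assumptions rather than a definitive reference:

```python
# Illustrative sketch: asking the Stream Manager for a node assignment.
# Hostname, stream names, and the sample response below are fabricated.
import json

def stream_manager_url(host, scope, stream, action):
    # action is "broadcast" for publishers (routed to an origin)
    # or "subscribe" for viewers (routed to an edge)
    return (f"https://{host}/streammanager/api/4.0/event/"
            f"{scope}/{stream}?action={action}")

def pick_node(response_body):
    """Extract the server address the Stream Manager assigned."""
    data = json.loads(response_body)
    return data["serverAddress"]

url = stream_manager_url("sm.example.com", "live", "concert", "subscribe")
# A subscriber would issue a GET to this URL. A sample (made-up) response:
sample = '{"serverAddress": "203.0.113.10", "scope": "live", "name": "concert"}'
print(pick_node(sample))  # 203.0.113.10
```

Because the Stream Manager hands out the address at connect time, the pool of origins and edges behind it can grow or shrink without clients needing to know.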
Since this all happens automatically, there is no concern that your application will crash under a surge of activity, planned or otherwise. Not only that, the cluster acts as a backup, ensuring your application never goes down.
Most importantly, all of that happens with under 500ms of latency. Thousands, hundreds of thousands, even millions: no matter how many stream connections you need, you will get the same performance server after server and user after user.