5 Tips to Correctly Determine Server Size

We are often asked about the required server instance that Red5 Pro should be run on. Those that have followed this blog will be unsurprised that the answer is that it depends upon what you are trying to do.

To get started, you’ll probably want to take a look at our benchmarks. We’ve already covered how we conduct our load testing using our modification of Bees with Machine Guns. Tests were run against an AWS m5.large instance (2 vCPUs, 8 GB of memory, with 2 GB allocated to the Java heap). Scroll to the bottom for notes on other hosting providers.

Publishing a 256 Kbps stream via RTMP, we were able to support the following subscriber counts while still maintaining stream quality:

WebRTC = 500 Subscribers

RTSP (Mobile) = 1,800 Subscribers

RTMP = 1,000 Subscribers

The same server type can support approximately 75-80 RTMP publishers at 480p.
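
For a rough sense of what those numbers mean for planning, here is a minimal back-of-the-envelope sketch (illustrative Python only) that estimates how many edge nodes of this size a target audience would need. The per-node figures are the benchmark numbers above; everything else, including the function name and example audience size, is an assumption for the example.

```python
import math

# Approximate per-node subscriber capacity from the m5.large benchmarks above
# (single 256 Kbps publisher; real numbers vary with bitrate, resolution, and network).
SUBSCRIBERS_PER_NODE = {
    "webrtc": 500,
    "rtsp": 1800,   # mobile SDK
    "rtmp": 1000,
}

def edges_needed(expected_viewers: int, protocol: str) -> int:
    """Rough count of edge nodes of this size needed for an expected audience."""
    return math.ceil(expected_viewers / SUBSCRIBERS_PER_NODE[protocol])

# Example: roughly how many m5.large edges for 10,000 WebRTC viewers?
print(edges_needed(10_000, "webrtc"))  # -> 20
```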

A Note on Estimating Connections

It should be noted that we cannot say for certain whether the number of connections will scale in direct proportion to the number of CPUs. In other words, doubling the number of CPUs from 2 to 4 may not double the number of connections.

Furthermore, video size, bitrate, and network conditions all affect the number of streams that each instance can support. Larger server instances should support more streams, but we cannot guarantee how many.

Streaming traffic is unpredictable, so handling ad-hoc load means planning out your scaling policy in advance. Configuring thresholds that provide more capacity than you expect to need is good practice; a worked example follows the list below.

  1. Set the connectionCapacity of each node well below its estimated capacity to account for unexpected spikes in resource consumption.
  2. Set the scale-out threshold to a lower value so that scale-out happens sooner and can absorb traffic bursts.
  3. Always reserve a minimum number of nodes in your nodegroup to cover the expected base traffic (the minimum instance count before scaling out).
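
As a worked example of the first two points, the sketch below (illustrative Python, not Red5 Pro configuration syntax) derives a conservative connectionCapacity and scale-out threshold from a benchmarked per-node limit. The 20% headroom and 60% scale-out fraction are arbitrary assumptions you would tune to your own traffic.

```python
# Illustrative capacity planning only -- not Red5 Pro configuration syntax.

def plan_node_capacity(benchmarked_limit: int,
                       headroom: float = 0.20,
                       scale_out_fraction: float = 0.60) -> dict:
    """Derive conservative autoscaling numbers from a benchmarked per-node limit.

    benchmarked_limit  -- max connections observed during load testing
    headroom           -- fraction of capacity held back for unexpected spikes
    scale_out_fraction -- fraction of connectionCapacity at which to add a node
    """
    connection_capacity = int(benchmarked_limit * (1 - headroom))
    scale_out_threshold = int(connection_capacity * scale_out_fraction)
    return {
        "connectionCapacity": connection_capacity,
        "scaleOutThreshold": scale_out_threshold,
    }

# Example: a WebRTC edge benchmarked at ~500 subscribers on an m5.large.
print(plan_node_capacity(500))
# -> {'connectionCapacity': 400, 'scaleOutThreshold': 240}
```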

Stream Manager

When using our autoscaling solution, a Stream Manager will need to be set up on a separate instance. We recommend a c5.large here as well, because of its networking support. For production, best practice is to run two (or more) Stream Managers behind a load balancer.

Without the proxy, the Stream Manager can probably run on a slightly lighter instance, but it all depends on the anticipated traffic and the number of requests per second.

Production vs. Development

For those focused on creating a POC with basic functionality, an m5 (or even a c5) instance might seem like overkill. The issue is that live streaming video involves a large amount of processing. For that reason, we recommend a minimum of 2 CPUs even on instances intended for development. Otherwise, the system may lack the power to stream effectively.

WebRTC vs. Mobile

Streaming WebRTC is memory intensive, which is why we used an m5 instance with 8 GB. Mobile-only applications use the RTSP protocol, which does not require as much memory. Therefore, applications that will only be using the mobile SDK can use a c5.large (2 vCPUs, 4 GB) instead of the m5.large. Development instances might even get away with 3 GB of memory.

However, one important thing to consider is the input/output capacity of a server instance. Live streaming involves a high amount of I/O even if you are not sending out a large number of streams. Part of the reason we recommend a c5.large is that it gives you the best value in terms of CPU, memory, and throughput.

Even though other instances might be cheaper and list the same specifications, they might not have the I/O required to work effectively.
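
To summarize that decision as a rule of thumb, here is a small, purely illustrative sketch. The function and the idea of passing in a "protocol set" are ours, not part of any Red5 Pro tooling, and the instance names simply restate the recommendations above.

```python
# Rule-of-thumb instance selection, restating the recommendations above.
# The function and "protocol set" idea are hypothetical, not Red5 Pro tooling.

def recommend_instance(protocols: set, development: bool = False) -> str:
    """Suggest an AWS instance type from the protocols you plan to serve."""
    if "webrtc" in protocols:
        # WebRTC is memory intensive: 2 vCPUs / 8 GB.
        return "m5.large"
    if development:
        # Dev-only, mobile (RTSP) / RTMP: might get away with ~3 GB of memory,
        # but keep at least 2 vCPUs.
        return "any 2-vCPU instance with ~3-4 GB"
    # Production, mobile (RTSP) / RTMP only: 2 vCPUs / 4 GB with good I/O.
    return "c5.large"

print(recommend_instance({"webrtc", "rtmp"}))  # -> m5.large
print(recommend_instance({"rtsp"}))            # -> c5.large
```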

Other Cloud Platforms

Although our tests were conducted using AWS EC2 instances, you should expect similar performance, with minimal variance, on other cloud platforms using instance types of similar configuration.

The M5 instances on AWS belong to the latest generation of general purpose instances, whereas the C5 instances are optimized for compute-intensive workloads.

To generalize instance type selection to cloud platforms other than AWS: any general purpose instance type with 2 vCPUs, 8 GB of memory, and network performance of up to 10 Gbps can substitute for an m5.large, and any compute optimized instance type with 2 vCPUs, 4 GB of memory, and network performance of up to 10 Gbps can substitute for a c5.large.
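
To make that substitution rule concrete, the following sketch encodes the two requirement profiles above and checks whether a candidate instance meets them. The candidate shown is hypothetical; its specs would come from your cloud provider's documentation (see the links below).

```python
from dataclasses import dataclass

@dataclass
class InstanceSpec:
    name: str
    vcpus: int
    memory_gb: float
    network_gbps: float  # advertised "up to" bandwidth

# Minimum specs needed to stand in for the AWS types used in our tests.
REQUIREMENTS = {
    "m5.large equivalent (general purpose)": InstanceSpec("minimum", 2, 8.0, 10.0),
    "c5.large equivalent (compute optimized)": InstanceSpec("minimum", 2, 4.0, 10.0),
}

def meets(candidate: InstanceSpec, requirement: InstanceSpec) -> bool:
    """True if the candidate meets or exceeds the requirement."""
    return (candidate.vcpus >= requirement.vcpus
            and candidate.memory_gb >= requirement.memory_gb
            and candidate.network_gbps >= requirement.network_gbps)

# Example with a hypothetical general purpose VM from another provider.
candidate = InstanceSpec("some-2vcpu-8gb-vm", vcpus=2, memory_gb=8.0, network_gbps=10.0)
print(meets(candidate, REQUIREMENTS["m5.large equivalent (general purpose)"]))  # -> True
```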

Link References:

AWS Instance Types: https://aws.amazon.com/ec2/instance-types/

Microsoft Azure Virtual Machine Sizes: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes

Google Compute Machine Types: https://cloud.google.com/compute/docs/machine-types

Confused? Looking for more specific recommendations? Find out more by sending an email to info@red5.net or scheduling a call to talk with us directly.