Deploying Simulated Cloud Stream Manager and Autoscaling


If you wish to run the Red5 Pro autoscaling solution on a hosted platform other than AWS or Google, then you can use the Simulated Cloud API.

In order to use the Red5 Pro Stream Manager service you will need the following:

  1. The latest Red5 Pro Server build
  2. The simulated-cloud-controller.jar, from the Red5 Pro Autoscaling Library Extensions section
  3. An active Red5 Pro license key (needed for clustering). REGISTER HERE.
  4. Dedicated Linux instances on which you can deploy Red5 Pro.

Before you Begin

You will want to keep a record of the usernames, passwords, IP addresses, and other information generated during the setup process, as you will need the information for Stream Manager configuration and future operations via the API. Click here to download a handy list for tracking all of your Red5 Pro Simulated Cloud Autoscaling values

Reserve a static IP address for your Stream Manager

You will need a static IP address for the Stream Manager in order to configure your nodes.

If you are going to use WebRTC publishing, then you will also need to register a domain name and procure an SSL Certificate for the Stream Manager.

1. Create MySQL Database

NOTE: For optimal performance, the database should be geographically close to the Stream Manager.

NOTE: For managed MySQL engines, you need to allow network access to port 3306 from the Stream Manager instance.

Installing MySQL on Ubuntu 16.04

  • Install MySQL (reference) sudo apt-get install mysql-server - you will be prompted to set the root password.
  • (Optional) Run the MySQL security script sudo mysql_secure_installation to lock down the database as you see fit.
  • Test to make sure that MySQL is running systemctl status mysql.service
  • If MySQL isn't running, you can start it with sudo systemctl start mysql
  • Add non-root user:
    • Connect to database as root mysql -u root -p (enter root password).
    • Create new user: at the mysql> prompt, enter CREATE USER 'mynewuser'@'localhost' IDENTIFIED BY 'goodPassword'; where mynewuser is your admin username and goodPassword is a secure password of your choosing.
    • If you are successful, you will see Query OK, 0 rows affected (0.00 sec)
    • Apply privileges to the new user: GRANT ALL PRIVILEGES ON *.* TO 'mynewuser'@'localhost';
    • Reload the privileges to make sure the new ones are in place: FLUSH PRIVILEGES;
  • Create cluster database:
    • Connect with the new user: type quit to logout root user, then mysql -u mynewuser -p
    • Create database: CREATE DATABASE cluster;
  • Add Cluster schema:
    • Download the Red5 Pro Server Distribution, and unzip on your server.
    • Navigate to {red5prohome}/webapps/streammanager/WEB-INF/sql/ to find the cluster.sql file.
    • Run the sql script: mysql -u mynewuser -p cluster < cluster.sql (you will be prompted to enter the password that you set above).
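The database steps above can be condensed into a single shell session. This is a sketch only: the user name, password, and the {red5prohome} path are the placeholders from the steps above, and the interactive password prompts are preserved.

```shell
# Install MySQL; you will be prompted to set the root password.
sudo apt-get install -y mysql-server
sudo mysql_secure_installation

# Create the admin user and grant privileges (placeholders: mynewuser / goodPassword).
mysql -u root -p <<'SQL'
CREATE USER 'mynewuser'@'localhost' IDENTIFIED BY 'goodPassword';
GRANT ALL PRIVILEGES ON *.* TO 'mynewuser'@'localhost';
FLUSH PRIVILEGES;
SQL

# Create the cluster database and load the schema shipped with the server distribution.
mysql -u mynewuser -p -e "CREATE DATABASE cluster;"
mysql -u mynewuser -p cluster < {red5prohome}/webapps/streammanager/WEB-INF/sql/cluster.sql
```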

3. Prepare Red5 Pro Node Instances

NOTE: The following steps need to be done on each server instance to be used as a Red5 Pro node (origin or edge).

NOTE #2: We recommend a minimum of 2 CPUs and 4GB RAM for Red5 Pro node servers.

Install Red5 Pro on the Instance:

  • Connect to your instance over ssh.
  • Follow the Installing Red5Pro on a Linux Server instructions.
  • Make sure that Red5 Pro service is set to auto-start on launch.
  • Red5 Pro Clustering does not work without a license. You will need to purchase a Red5 Pro Professional license from Red5 Pro. Add a file named LICENSE.KEY with your Pro license key (16 characters, plus dashes, like: YOUR-LIC-ENSE-KEY1) to the root of the Red5 Pro Server installation ({red5prohome}/LICENSE.KEY).

Install NTP (network time protocol)

NTP is necessary to ensure that the Stream Manager and all nodes are in sync.

sudo apt-get install ntp

The service should start automatically once installed.

Also, make sure that the server time zone is UTC (this should be the default on AWS instances). Type date at the prompt. This will return the date, time and zone (e.g., Tue Dec 13 20:21:49 UTC 2016). If you need to switch to UTC, execute sudo dpkg-reconfigure tzdata, scroll to the bottom of the Continents list and select None of the above; in the second list, select UTC.
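On systemd-based Ubuntu releases, the same result can be scripted non-interactively; timedatectl is assumed to be available here as an alternative to dpkg-reconfigure:

```shell
# Install NTP; the service starts automatically after install.
sudo apt-get install -y ntp

# Switch the time zone to UTC without the interactive tzdata menu.
sudo timedatectl set-timezone UTC

# Confirm: 'date' should now report UTC.
date
```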

Configure Autoscaling on the Instance:

Navigate to the location where you extracted the Red5 Pro files, and edit the file {red5prohome}/conf/autoscale.xml

<bean name="config" class="com.red5pro.clustering.autoscale.Configuration">
   <property name="active" value="false"/>
   <property name="cloudWatchHost" value=""/>
</bean>
  • Change the active value from “false” to “true”
  • Set the cloudWatchHost value to the static IP of the Stream Manager
  • Save and close the file
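If you are configuring many nodes, this edit can be scripted. The sed sketch below assumes the default file contents shown above; the IP address is a placeholder for your Stream Manager's static IP:

```shell
SM_IP="203.0.113.10"                 # placeholder: Stream Manager static IP
F="{red5prohome}/conf/autoscale.xml"

cp "$F" "$F.bak"                     # keep a backup before editing
# Turn autoscaling on and point cloudWatchHost at the Stream Manager.
sed -i 's|name="active" value="false"|name="active" value="true"|' "$F"
sed -i "s|name=\"cloudWatchHost\" value=\"\"|name=\"cloudWatchHost\" value=\"${SM_IP}\"|" "$F"
```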

Set a Unique Cluster Password:

Edit the file {red5prohome}/conf/cluster.xml and replace the default password changeme (in the section marked <!-- edge/origin link cluster password -->) with something unique. Make a note of this password; it will be used in the Stream Manager configuration.

NOTE: The cluster password cannot contain any CAPITAL LETTERS at this time.

Enable Red5 Pro Server API and Set Management Password:

Configure the Server API per this document. Modify the security.accessToken value in {red5pro}/webapps/api/WEB-INF/, and make a note of it. You will need this value for your Stream Manager.


Add the Stream Manager's public IP address to the {red5pro}/webapps/api/WEB-INF/security/hosts.txt file (or add * to make the API accessible from anywhere).
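For example (the IP address is a placeholder for your Stream Manager's public IP):

```shell
SM_IP="203.0.113.10"   # placeholder: Stream Manager public IP

# Allow the Stream Manager to reach this node's API:
echo "$SM_IP" | sudo tee -a {red5pro}/webapps/api/WEB-INF/security/hosts.txt

# Or, less securely, allow API access from anywhere:
# echo "*" | sudo tee -a {red5pro}/webapps/api/WEB-INF/security/hosts.txt
```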


Remove Unnecessary Webapps:

To ensure a faster startup time for your server, we suggest deleting any webapps that you will not be using from the {red5prohome}/webapps directory (for example: secondscreen, template, vod, streammanager). For ease of setup for the Stream Manager, leave a zipped copy of the server distribution on the server. If you are running a custom webapp, then you would want to include that on your node servers.

The following incoming ports need to be open on the Red5 Pro node servers:

Port         Description                              Protocol
5080         default web access of Red5 Pro           TCP
1935         default Red5 Pro RTMP port               TCP
8554         default RTSP port                        TCP
6262         websockets for HLS                       TCP
8081         websockets for WebRTC                    TCP
40000-65000  TURN/STUN/ICE port range for WebRTC      UDP

Verifying Red5 Pro on Instance:

Start Red5 Pro service sudo systemctl start red5pro

Once you have started the Red5 Pro Server, you can verify that it is running and available by opening a web browser and navigating to http://<server-ip>:5080, where <server-ip> is the public IP address of your Red5 Pro server.
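You can perform the same check from a terminal; the IP address is a placeholder, and a healthy server returns an HTTP status of 200:

```shell
SERVER_IP="203.0.113.20"   # placeholder: the node's public IP

# Print just the HTTP status code for the Red5 Pro landing page.
curl -s -o /dev/null -w "%{http_code}\n" "http://${SERVER_IP}:5080"
```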

4. Deploy and Configure Stream Manager

It is critical that the Stream Manager have a static IP address, so that in the event that the instance is rebooted, it will retain the same public IP address.

Install Red5 Pro on the Instance:

Follow the Installing Red5Pro on a Linux Server instructions. The following incoming ports need to be open on the Stream Manager instance:

Port   Description
22     SSH
5080   default web access of Red5 Pro / Stream Manager

Configure the Stream Manager Instance

  • SSH into the Stream Manager instance
  • Stop the Red5 Pro service (sudo systemctl stop red5pro)

Install NTP (network time protocol)

NTP is necessary to ensure that the Stream Manager and all nodes are in sync.

sudo apt-get install ntp

The service should start automatically once installed.

Also, make sure that the server time zone is UTC. Type date at the prompt. This will return the date, time and zone (e.g., Tue Dec 13 20:21:49 UTC 2016). If you need to switch to UTC, execute sudo dpkg-reconfigure tzdata, scroll to the bottom of the Continents list and select None of the above; in the second list, select UTC.

  • Navigate to the directory where you installed Red5 Pro (e.g. /usr/local/red5pro)
  • Delete the following files:
    • {red5prohome}/conf/autoscale.xml
    • {red5prohome}/plugins/red5pro-autoscale-plugin-*
    • {red5prohome}/plugins/red5pro-webrtc-plugin-*

Import and Activate Simulated Cloud Controller

Copy the simulated-cloud-controller.jar into {red5prohome}/webapps/streammanager/WEB-INF/lib/

Edit the applicationContext.xml file, located at {red5prohome}/webapps/streammanager/WEB-INF/applicationContext.xml as per the following:

  • Locate the SimulatedCloud controller “bean” and uncomment it as shown below (note: do not modify the values, only uncomment the bean configuration to make it active):

<bean id="apiBridge" class="" init-method="initialize">
   <property name="operationTimeoutMilliseconds" value="${managed.operationTimeoutMilliseconds}"/>
   <property name="regionNames" value="${managed.regionNames}"/>
   <property name="availabilityZoneNames" value="${managed.availabilityZoneNames}"/>
   <property name="recycleDeadNodes" value="${managed.recycleDeadNodes}"/>
</bean>

Comment out (or delete the entry for) the default controller to disable it:

<!-- Default CONTROLLER
<bean id="apiBridge" class="" init-method="initialize">
   ...
</bean>
-->

Modify Stream Manager App Properties

The Stream Manager’s configuration details are stored in a properties file found in {red5prohome}/webapps/streammanager/WEB-INF/. This is where Stream Manager reads all of its settings. Each configurable setting is organized into its own section. For more details on the file sections, read here.

You will need to modify the following values:


  • config.dbHost={host} -- the DB Endpoint URL of your MySQL server instance
  • config.dbUser={username} -- Database Master Username
  • config.dbPass={password} -- Database Master Password


  • instancecontroller.replaceDeadClusters=false -- by default, any cluster that becomes unresponsive will be shut down. Setting this value to true will automatically replace any clusters that have failed.
  • instancecontroller.deleteDeadGroupNodesOnCleanUp=true -- by default, any unresponsive nodes will be deleted from the dashboard. Setting this value to false will stop the instances, but not delete them.


  • cluster.password=changeme -- modify this to be the same as the password that you set in the cluster.xml file on your node instances.


You will need to uncomment and modify these properties.

  • managed.regionNames: This attribute takes a comma-separated list of region names. For managed instances we create our own region names, which represent where the servers are located. The region name should be in a format similar to us-test1. You do not need to add more than one region.
  • managed.availabilityZoneNames: This attribute takes a comma-separated list of zone names. For managed instances we create our own zone names, which represent where the servers are located. The zone names should be in a format similar to us-test1-a. You do not need to add more than one zone.
  • managed.operationTimeoutMilliseconds: This attribute is common to all cloud platform controllers, where it controls the maximum time allowed for a cloud operation. In the context of the SimulatedCloud, however, it is instead used to set the simulated responseDelay in milliseconds. The responseDelay makes the SimulatedCloud behave as if it were performing a real cloud operation.
  • managed.recycleDeadNodes: This attribute configures how the simulated cloud handles dead nodes. Normally, when nodes are scaled down they are recycled automatically (i.e., they are soft-reset and put back in the availability list for reuse). A node that is being deleted because of failure is generally not healthy, and hence is not a candidate for reuse by default. Setting this attribute to true directs the controller to force reusability of the node: the failed node being deleted will be reused. If set to false, the node will not be reused.


  • rest.administratorToken= -- You need to set a valid password string here before you start using Stream Manager. This is the password that you will use to execute API commands.


  • serverapi.port: The port on which the Red5 Pro http service is accessible. Unless this has been changed explicitly, it defaults to 5080.
  • serverapi.protocol: The protocol over which the http service is accessible. If your instances have SSL certificates on them, the value should be https; otherwise it should be http. The default is http.
  • serverapi.version: The API version that is applicable to your Red5 Pro instances. You can find information on the latest server API version in the Red5 Pro Server API documentation. The default value is v1.
  • serverapi.accessToken: The security access token required to make api calls to the node servers. Use the security access token value that you set on your nodes.


  • proxy.enabled set to true enables, or set to false disables the websocket proxy service. You must use the proxy if you are using WebRTC with Red5 Pro autoscaling.

Please note that if you modify any of the above values after your initial deployment, you will need to restart the Red5 Pro service.

Sample file content:

# compute.project={project-id}
# compute.defaultzone={zone-id}
# compute.defaultdisk=pd-standard
# compute.operationTimeoutMilliseconds={operation-timeout}

# aws.defaultzone={default-region}
# aws.operationTimeoutMilliseconds=200000
# aws.accessKey={account-accessKey}
# aws.accessSecret={account-accessSecret}
# aws.ec2KeyPairName={keyPairName}
# aws.ec2SecurityGroup={securityGroupName}
# aws.defaultVPC={boolean}
# aws.faultZoneBlockMilliseconds=3600000
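The managed.* entries themselves are not shown in the surviving sample above. As a sketch only, using the illustrative region and zone names from this document (your values will differ), the uncommented section might look like:

```
# Simulated cloud controller settings (illustrative values)
managed.regionNames=us-test1
managed.availabilityZoneNames=us-test1-a
managed.operationTimeoutMilliseconds=20000
managed.recycleDeadNodes=false
```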

Edit Launch Configuration File

Launch Configuration JSON files are stored in {red5prohome}/webapps/streammanager/WEB-INF/launchconfigs

The launch configuration file used for a simulated cloud instance is similar to that of a cloud platform instance. The only exception to note is that there is no image involved in a simulated cloud environment; hence we keep the image value fixed to managed. The capacity will depend on the hardware capabilities of the server on which the instance is running.

A sample launch configuration by the name simulated-cloud-default-v2.json is provided at RED5_HOME/webapps/streammanager/WEB-INF/launchconfigs/simulated-cloud-default-v2.json.

Sample Launch Configuration File, simulated-cloud-default-v2.json:


{
  "launchconfig": {
    "name": "simulated-cloud-default-v2",
    "description": "This is a sample version launch configuration for second generation autoscaling for simulated cloud instances",
    "image": "managed",
    "version": "0.0.2",

    "targets": {
      "target": [
        {
          "role": "origin",
          "instanceType": "standard",
          "connectionCapacity": "500"
        },
        {
          "role": "edge",
          "instanceType": "standard",
          "connectionCapacity": "500"
        }
      ]
    },

    "properties": {
      "property": [
        {
          "name": "property-name",
          "value": "property-value"
        }
      ]
    },

    "metadata": {
      "meta": [
        {
          "key": "meta-name",
          "value": "meta-value"
        }
      ]
    }
  }
}
The only values you need to edit are:

  • name - the name of the configuration file (for example, for default.json this is "default").
  • connectionCapacity - The number of concurrent connections that can be supported for one instance. This number will be referred to by the autoscale scaleout threshold percentage. The connection capacity will vary depending on your broadcast quality, client connection type, and server type. For QA purposes, you can set this value low (e.g., to 10) so that you can test the autoscaling feature without having to simulate 2,000 concurrent connections.

Edit Scaling Policy File

Scale Policy JSON files are stored in {red5prohome}/webapps/streammanager/WEB-INF/scalingpolicies

Sample Scaling Policy File, default-v2.json

[Second generation autoscaling scale policy]

{
    "policy": {
        "name": "simulated-cloud-default-v2",
        "description": "This is a sample scale policy file for simulated cloud group",
        "version": "0.0.2",
        "type": "",
        "targets": {
            "target": [
                {
                    "role": "edge",
                    "coolDownPeriod": "180000",
                    "estimatedWarmUpTime": "120000",
                    "minLimit": "2",
                    "maxLimit": "10",
                    "scaleAdjustment": "1"
                },
                {
                    "role": "origin",
                    "coolDownPeriod": "180000",
                    "estimatedWarmUpTime": "120000",
                    "minLimit": "1",
                    "maxLimit": "2",
                    "scaleAdjustment": "1"
                }
            ]
        }
    }
}

NOTE 1: Since instances are not actually created as VMs in a simulated cloud environment, it is the responsibility of the administrator to provision sufficient instances to match the scale policy rules.

NOTE 2: The entire origin configuration object can be omitted if you do not wish to configure the origin maxLimit, in which case it defaults to 1.

The only values that you should edit are:

  • edge, minLimit - the minimum number of edges to be in a node group (if you set this to 2, for example, then when you add an origin to a nodegroup, two edge servers will be spun up as well).
  • edge, maxLimit - the maximum number of edges to be in a node group
  • origin, minLimit - the minimum number of origins to be in a node group (if you set this to 2, for example, then when you add an origin to a nodegroup, a second origin server will be spun up as well).
  • origin, maxLimit - the maximum number of origins to be in a node group.

Start Red5 Pro Service to Use the Stream Manager

sudo systemctl start red5pro

API Basics, to get you started

(for full API, see Red5 Pro Stream Manager API)

Verify Stream Manager Controller (GET)

GET call:

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/debug/cloudcontroller?accessToken=<rest.administratorToken from file>

should return: Simulated Cloud Services
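The same check with curl (the IP and token are placeholders for your Stream Manager address and rest.administratorToken):

```shell
SM_IP="203.0.113.10"   # placeholder: Stream Manager IP
TOKEN="xyz123"         # placeholder: rest.administratorToken

# A correctly configured controller responds with: Simulated Cloud Services
curl -s "http://${SM_IP}:5080/streammanager/api/2.0/admin/debug/cloudcontroller?accessToken=${TOKEN}"
```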

Provision Nodes

Provision nodes per the Simulated Cloud Provision New Nodes API call. You must have at least two server nodes provisioned before you can create a node group.

POST call: http://<streammanager_IP>:5080/streammanager/api/2.0/admin/controller/provisions?accessToken=<rest.administratorToken from file>

Data (make sure to select JSON as the body type):

[
  {
    "location": "us-test1-a",
    "host": "<instance-host-address>",
    "instanceType": "standard"
  },
  {
    "location": "us-test1-a",
    "host": "<instance-host-address>",
    "instanceType": "standard"
  }
]
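A curl sketch of the provision call; the IP, token, and node host addresses below are placeholders for your own values:

```shell
SM_IP="203.0.113.10"   # placeholder: Stream Manager IP
TOKEN="xyz123"         # placeholder: rest.administratorToken

curl -s -X POST -H "Content-Type: application/json" \
  -d '[
        {"location": "us-test1-a", "host": "203.0.113.21", "instanceType": "standard"},
        {"location": "us-test1-a", "host": "203.0.113.22", "instanceType": "standard"}
      ]' \
  "http://${SM_IP}:5080/streammanager/api/2.0/admin/controller/provisions?accessToken=${TOKEN}"
```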


Create a New Node Group (POST)

Using a tool like Postman, create a new node group via the API. baseCapacity is the minimum number of subscriber connections that this group should support (this parameter helps the scale-in process decide when to scale down an edge).

POST call: http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup?accessToken=<rest.administratorToken from file>

Data (make sure to select JSON as the body type):

{
  "regions": [
    "<region-name>"
  ],
  "launchConfig": "simulated-cloud-default-v2",
  "scalePolicy": "simulated-cloud-default-v2"
}


Note the cluster “name” that is returned by the above call. It will be used to create a new Origin server.
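With curl, the call looks like the following (IP, token, and region are placeholders; the region should match one of your configured managed.regionNames):

```shell
SM_IP="203.0.113.10"   # placeholder: Stream Manager IP
TOKEN="xyz123"         # placeholder: rest.administratorToken

curl -s -X POST -H "Content-Type: application/json" \
  -d '{
        "regions": ["us-test1"],
        "launchConfig": "simulated-cloud-default-v2",
        "scalePolicy": "simulated-cloud-default-v2"
      }' \
  "http://${SM_IP}:5080/streammanager/api/2.0/admin/nodegroup?accessToken=${TOKEN}"
# The response includes the group "name"; save it for the origin call.
```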

Launch New Origin (POST)

After you create a node group, create the origin server. Creating an origin server will also generate at least one edge, per scaling policy min limit.

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup/<cluster-name>/node/origin?accessToken=<rest.administratorToken from file>

NOTE: If you wish to launch more than one origin, you can repeat the call. The maximum origins allowed will depend on the maxLimit attribute of the 'origin' object described in scale policy. If 'origin' is omitted from the scale policy then the default value for maximum origins is 1.
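A curl sketch of the origin call; <cluster-name> is the group name returned when the node group was created, and the IP and token are placeholders:

```shell
SM_IP="203.0.113.10"        # placeholder: Stream Manager IP
TOKEN="xyz123"              # placeholder: rest.administratorToken
CLUSTER="<cluster-name>"    # the "name" returned by the node group call

curl -s -X POST "http://${SM_IP}:5080/streammanager/api/2.0/admin/nodegroup/${CLUSTER}/node/origin?accessToken=${TOKEN}"
```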

Set Alarm Threshold (POST)

By default, the alarm threshold (the capacity percentage at which the cluster will scale up) is set to 60%. To modify this, POST the following:


http://<streammanager_IP>:5080/streammanager/api/2.0/admin/alarm/scaleout/default?type=edge&threshold=<threshold>&accessToken=<rest.administratorToken from file>


http://<streammanager_IP>:5080/streammanager/api/2.0/admin/alarm/scaleout/default?type=origin&threshold=<threshold>&accessToken=<rest.administratorToken from file>

List Groups (GET)

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup?accessToken=<rest.administratorToken from file>

Stream Manager Publish and Subscribe Examples

Stream Manager Proxy Publish and Subscribe Examples

With the latest release, the live webapp includes two examples: proxy-publisher.html and proxy-subscriber.html. These examples take the following query parameters:

Name        Description                                                    Default Value
host        hostname or IP                                                 window.location.hostname
protocol    protocol which Stream Manager is served over (http or https)   window.location.protocol
port        port number that Stream Manager is served on                   window.location.port
app         webapp name to stream to on the server                         live
streamName  the unique stream name to broadcast with or subscribe to       None. Required
verbose     flag to enable verbose logging in Dev Console                  None. Optional
view        target broadcast tech (rtc, rtmp or hls)                       None. Optional


Red5 Pro HTML5 SDK Examples:

Publish - Stream Manager

Subscribe - Stream Manager

If you are publishing/subscribing using the Stream Manager SSL Proxy, use:

Publish - Stream Manager Proxy

Subscribe - Stream Manager Proxy

Note: the streaming-html5 examples testbed is included with the Red5 Pro server distribution, and can be accessed via your stream manager at https://your.server.url/webrtcexamples/.

Red5 Pro iOS SDK Examples:

Publish - Stream Manager

Subscribe - Stream Manager

Red5 Pro Android SDK Examples:

Publish - Stream Manager

Subscribe - Stream Manager


  1. If you have created a new nodegroup, but adding an Origin server doesn't automatically spin up an accompanying Edge server, then you probably didn't modify the red5pro/conf/autoscale.xml file on your Red5 Pro nodes to point to the IP address of your Stream Manager (set the value of <property name="cloudWatchHost" value=""/>), and/or didn't change the active value from the default “false” to “true” (<property name="active" value="true"/>).
  2. The simulated cloud uses a local SQLite database to store managed instance provisions. This file is generated in the WEB-INF folder under a directory named for the controller, i.e.: RED5_HOME/webapps/streammanager/WEB-INF/Simulated-Cloud-Platform/hostednodes.sqlite. If needed, this file can be edited with any SQLite editor, such as the SQLite Manager Firefox addon.
  3. Instances need to have the Red5 Pro service running before they can be added to the simulated cloud environment via the Provisioning API.
  4. By default when a node is added it is considered to be in an available state in the provision pool. When Stream Manager requests a new instance, the controller will provide a node from this list.
  5. When Stream Manager scales down an instance, it is automatically reset and added back to the provision list for reuse.
  6. When Stream Manager requests deletion of a node, the controller checks the managed.recycleDeadNodes flag setting. If it is set to true, the controller tries to reprovision the instance if it can (i.e., if the node service is reachable). If the node cannot be reprovisioned, it is set to a TERMINATED state.
  7. To reuse a TERMINATED instance, you must manually reboot the remote node server and then use the RESET API call to return the TERMINATED node to an available RUNNING state.
  8. Stream Manager logs a warning message when a node is trying to ping it but does not exist in the system yet. With the Simulated Cloud Environment you may see a lot of this because the instances are already running but may not be in use by Stream Manager.