Deploying Stream Manager and Autoscaling on Google Cloud Compute

Overview:

This document assumes that you have already set up Google Cloud Compute and the associated SDK for management (https://cloud.google.com/sdk/). It also assumes that you have some basic Linux administration skills. If you need more detailed information, please contact us.

In order to use the Red5 Pro Stream Manager service you will need the following:

  1. Latest Red5 Pro Server build
  2. The google-cloud-controller.jar, from the Red5 Pro Autoscaling Library Extensions section
  3. An active Red5 Pro license key (needed for clustering). REGISTER HERE
  4. A Google Cloud Developer project ID

Before you Begin:

You will want to keep a record of the usernames, passwords, IP addresses, and other information generated during the setup process, as you will need the information for stream manager configuration and future operations via the API. Click here to download a handy list for tracking all of your Red5 Pro Autoscaling values.

1. Reserve Static IP for Stream Manager

It is critical that the Stream Manager have a static IP address, so that in the event that the instance is rebooted, it will retain the same public IP address. You can reserve a static IP address via the Google Cloud SDK. Choose the region in which you wish to host the stream manager:

gcloud compute --project <your-project-id> addresses create streammanagerip --region <preferred-region>

Make a note of the assigned IP address. You will need this for Database permissions and Stream Manager configuration.
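If you need to look the address up again later, it can be retrieved with the SDK. This is a sketch; the project ID and region below are placeholders for your own values:

```shell
PROJECT="my-project-id"   # placeholder: your Google Cloud project ID
REGION="us-east1"         # placeholder: the region used above

# Requires an authenticated Cloud SDK; prints the reserved IP address.
if command -v gcloud >/dev/null; then
  gcloud compute addresses describe streammanagerip \
    --project "$PROJECT" --region "$REGION" \
    --format="value(address)" || true
fi
```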

2. Create Google Cloud SQL Instance

From the Google Cloud Developer Console:

  1. Select the SQL menu option in the sliding menu pane on the left
  2. Click on the "Create instance" option
  3. Click on "Choose First Generation" SQL setup
  4. Set an instance ID name
  5. Select the region where you reserved the static IP for the stream manager
  6. Select a memory capacity according to estimated usage load
  7. Expand "Advanced options" and choose the defaults, except where noted below
  8. Choose a billing plan based on your expected use
  9. Set the activation policy to "Always on"
  10. Enable the "Assign an IPv4 address to my Cloud SQL instance" option
  11. Under Authorized Networks, click +Add Item
  12. Enter the IP address that you reserved for the stream manager
  13. Press the "Create" button to create the instance (see screenshots below)

sql config sql config

3. Configure SQL Database for Stream Management

Click on the newly created database to go to the Instance Details screen.

Create the “cluster” Database

  1. Click on Databases tab
  2. Under MySQL Databases, click on New database
  3. Name the new database: cluster
  4. Leave the default character set (utf8)
  5. Click Add to finish

sql setup

Add a Database User Account

  1. Click on Access Control (note: you should see your stream manager listed under Authorized Networks on the Authorization tab; if you don't, add it now)
  2. Click on Users
  3. Click on Create user account
  4. Set a username and password
  5. NOTE: these will be used by the Stream Manager, so make sure to note the username/password you've created
  6. Allow any host
  7. Click Create to add the user account
  8. NOTE: you will not be able to alter permissions once you've created the user, but you can update the password through the Google Cloud dashboard

Make a note of the following for stream manager configuration:

  1. SQL Host IP Address
  2. Database access username and password (from step 4)

Populate the Cluster Database Tables

You will find cluster.sql in the {red5prohome}/webapps/streammanager/WEB-INF/sql directory of the latest Red5 Pro server distribution. This file should be used to populate the cluster database. You can do this using MySQL Workbench, or even via the Google Developer Console by:

  1. Under the Storage tab (Storage menu option in the sliding menu pane on the left), click CREATE BUCKET (for example: autoscaling). Click on your new bucket, then UPLOAD FILES, to add the cluster.sql file to the bucket.
  2. Import the file through the Google Cloud SQL management tool:
  3. From the left-hand navigation, choose SQL
  4. Click on your database instance, then click on IMPORT (format of import: SQL)
  5. Browse to the cluster.sql file. Show advanced options, and enter the Database name: "cluster"
  6. Then click on Import.

storage
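As an alternative to the bucket import, you can load the schema directly with the mysql command-line client from any machine whose IP is authorized on the Cloud SQL instance. The host and user below are placeholders for your own values:

```shell
SQL_HOST="203.0.113.20"   # placeholder: your Cloud SQL IPv4 address
DB_USER="admin"           # placeholder: the database user created above

# Run from the directory containing cluster.sql; prompts for the password.
if [ -f cluster.sql ]; then
  mysql -h "$SQL_HOST" -u "$DB_USER" -p cluster < cluster.sql
fi
```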

4. Update Firewall Rules and Create Red5 Pro Disk Image

You will need to set up a disk Image that will be used by the Autoscaler when it initiates new server instances. Node Groups will be deployed to the default network, so you will need to enable incoming access to the Red5 Pro server ports on that network.

Note: If you are going to be using load-balanced stream managers, then you need to use the loadbalancer IP in the autoscale.xml file of your disk image.

  • Under Networking tab/Firewall Rules/CREATE FIREWALL RULE. Add a new firewall rule to the default network profile, making the following ports accessible from anywhere, by adding the following to the Allowed protocols and ports field: tcp:5080; tcp:1935; tcp:8554; tcp:8088; tcp:6262; tcp:22
| Port | Description |
| --- | --- |
| 22 | SSH |
| 5080 | default web access of Red5 Pro |
| 1935 | default Red5 Pro RTMP port |
| 8554 | default RTSP port |
| 6262 | websockets (for second screen and HLS) |
| 8088 | second screen client registry |

Ports required for WebRTC server using SSL:

| Port | Description | Protocol |
| --- | --- | --- |
| 22 | SSH | TCP |
| 80 | modified web access of Red5 Pro | TCP |
| 443 | standard HTTPS port | TCP |
| 1935 | default Red5 Pro RTMP port | TCP |
| 8554 | default RTSP port | TCP |
| 6262 | websockets for HLS | TCP |
| 8081 | websockets for WebRTC | TCP |
| 8083 | secure websockets for WebRTC | TCP |
| 40000-65000 | TURN/STUN/ICE port range | UDP |
![networking](/asset/server/autoscalgooglecloud/defaultnw.png)
![networks](/asset/server/autoscalgooglecloud/firewallrule.png)
![firewall](/asset/server/autoscalgooglecloud/08firewall.png)
  • From Compute Engine, VM Instances, create an Ubuntu virtual machine instance (recommended for the small footprint of the OS), using the network you created in your default zone. vmiimage

  • Once the instance has spun up, use the Google Cloud SDK to copy the latest version of Red5 Pro server to the instance:

    • gcloud compute copy-files red5pro-server.zip red5pro-image-01:/tmp/
  • SSH into the server and install java and unzip so that you can set up the server:

    • gcloud compute ssh red5pro-image-01
    • Follow the directions for installing the Red5 Pro server on Linux
    • sudo apt-get update
    • sudo apt-get install default-jre
    • sudo apt-get install unzip
    • Depending on your version of Ubuntu, you may also need to install the following libraries to support WebRTC:
    • sudo apt-get install libva1
    • sudo apt-get install libva-drm1
    • sudo apt-get install libva-x11-1
    • sudo apt-get install libvdpau1

Install NTP (network time protocol)

NTP is necessary to ensure that the Stream Manager and all nodes are in sync.

sudo apt-get install ntp

The service should start automatically once installed.

Also, make sure that the server time zone is UTC (this should be the default on Google Cloud instances). Type date at the prompt. This will return the date, time and zone (eg:Tue Dec 13 20:21:49 UTC 2016). If you need to switch to UTC, execute sudo dpkg-reconfigure tzdata, scroll to the bottom of the Continents list and select None of the above; in the second list, select UTC.

Configure Autoscaling on the Instance:

Navigate to the location where you extracted the red5pro files, and edit the file {red5prohome}/conf/autoscale.xml

    <bean name="config" class="com.red5pro.clustering.autoscale.Configuration" >
        <property name="active" value="false"/>

        <!-- Stream manager hosted URI. Use the host of your stream manager. -->
        <property name="cloudWatchHost" value="http://0.0.0.0:5080/streammanager/cloudwatch"/>
  • Change the active value from "false" to "true"
  • Replace 0.0.0.0 with the static IP that you reserved for the stream manager previously

NOTE: if you are going to be using a secure URL for pointing to a load balancer, then you should use that (for example: <property name="cloudWatchHost" value="https://my.streammanager.com/streammanager/cloudwatch"/>)

  • Save and close the file
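After these edits, the relevant section of autoscale.xml should read as follows (203.0.113.10 is a placeholder for your stream manager's static IP):

```xml
<bean name="config" class="com.red5pro.clustering.autoscale.Configuration" >
    <property name="active" value="true"/>

    <!-- Stream manager hosted URI. Use the host of your stream manager. -->
    <property name="cloudWatchHost" value="http://203.0.113.10:5080/streammanager/cloudwatch"/>
</bean>
```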

Enable Clustering and Set a Unique Cluster Password:

Edit the file {red5prohome}/conf/cluster.xml and replace the default <!-- edge/origin link cluster password --> password changeme with something unique. Make a note of this password; it will be used in the Stream Manager red5-web.properties file.

Edit the {red5pro}/webapps/red5-default.xml file. Remove the lines above and below the ClusterServiceResolver line to uncomment it:

<!-- Uncomment as part of the clustering configuration
<bean id="clusterServiceResolver" class="com.red5pro.cluster.plugin.ClusterServiceResolver"/>
-->
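With the surrounding comment lines removed, only the bean definition remains active:

```xml
<bean id="clusterServiceResolver" class="com.red5pro.cluster.plugin.ClusterServiceResolver"/>
```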

Remove Unnecessary Webapps:

To ensure a faster startup time for your server, we suggest deleting any webapps that you will not be using from the {red5pro}/webapps directory (for example: secondscreen, template, vod, streammanager; do not remove root). For ease of setup for the Stream Manager, leave a zipped copy of the server distribution on the server.

OPTIONAL:

Start the Red5 Pro service (sudo /etc/init.d/red5pro start) and access port 5080 of the VM's external IP address to ensure that Red5 Pro has started successfully. Then stop the service (sudo /etc/init.d/red5pro stop).

  1. Log out of the instance.

Create Disk Image

From the Google Cloud Dashboard, edit the VM instance and deselect "Delete boot disk when instance is deleted." Save this new configuration, then delete the instance, confirming that you are not deleting the boot disk.

bootdisk

From Compute Engine, Images:

  • Click on [+] CREATE IMAGE
  • Give the image a name and make a note of it in your Google Cloud Compute AutoScaling Checklist. This will be used in the launchconfig default.json. We suggest naming it per the build version number for easy reference. Note that the name can only contain letters, numbers and hyphens.
  • Source = Disk.
  • Source disk: use the pull-down to select the disk from the instance you just configured and deleted.
  • Click on Create

10acreateimage

5. Stream Manager Configuration

Create a new VM instance using the image you created above.

From Compute Engine, VM Instances:

  • Create an instance
  • Choose the same Zone where your MySQL database was assigned
  • Boot disk - Change - Your image tab, select the server image you just created
  • Boot disk type: Standard persistent disk
  • IMPORTANT: - Compute Engine default service account, Allow full access to all Cloud APIs
  • On networking tab:
  • You can choose the default network profile that you set up for Red5 Pro; if you want to be more restrictive, the Stream Manager only needs port 22 (for SSH access) and port 5080 open. If you are using the Stream Manager as an SSL proxy, then you also need to open port 8083.
  • Under External IP, choose the static IP that you reserved from the Networking tab.

createsm

Copy the google-cloud-controller.jar file up to the server (gcloud compute copy-files google-cloud-controller.jar stream-manager-01:/tmp/)

SSH into the stream manager instance (gcloud compute ssh stream-manager-01)

Stop the Red5 Pro service (sudo /etc/init.d/red5pro stop)

Install NTP (network time protocol)

As with the node image, NTP is necessary to ensure that the Stream Manager and all nodes are in sync:

sudo apt-get install ntp

The service should start automatically once installed. Also verify that the server time zone is UTC (the default on Google Cloud instances), as described in the disk image setup above.

Remove Autoscale Files and WebRTC Plugin

Delete the following files:

  • {red5prohome}/conf/autoscale.xml
  • {red5prohome}/plugins/red5pro-autoscale-plugin-<version>.jar
  • {red5prohome}/plugins/red5pro-webrtc-plugin-<version>.jar
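For example, assuming Red5 Pro was unzipped to /usr/local/red5pro (a placeholder; substitute your own {red5prohome}):

```shell
RED5_HOME="/usr/local/red5pro"   # placeholder for {red5prohome}

# Globs cover whatever <version> suffix your build carries.
if [ -d "$RED5_HOME" ]; then
  sudo rm -f "$RED5_HOME/conf/autoscale.xml" \
             "$RED5_HOME"/plugins/red5pro-autoscale-plugin-*.jar \
             "$RED5_HOME"/plugins/red5pro-webrtc-plugin-*.jar
fi
```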

Import Cloud Controller and Activate

Copy the google-cloud-controller.jar into {red5prohome}/webapps/streammanager/WEB-INF/lib/
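Assuming the jar was copied up to /tmp earlier and the server lives under /usr/local/red5pro (both placeholders for your own paths):

```shell
RED5_HOME="/usr/local/red5pro"   # placeholder for {red5prohome}

if [ -f /tmp/google-cloud-controller.jar ] && [ -d "$RED5_HOME" ]; then
  sudo cp /tmp/google-cloud-controller.jar \
    "$RED5_HOME/webapps/streammanager/WEB-INF/lib/"
fi
```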

Edit the applicationContext.xml file, located at {red5prohome}/webapps/streammanager/WEB-INF/applicationContext.xml,

Locate the google controller “bean” and uncomment it as shown below (note: do not modify the values, only uncomment the bean configuration to make it active):

    <!-- GOOGLE COMPUTE CONTROLLER -->
    <bean id="apiBridge" class="com.red5pro.services.cloud.google.component.ComputeInstanceController" init-method="initialize">
       <property name="project" value="${compute.project}"/>
       <property name="defaultZone" value="${compute.defaultzone}"/>
       <property name="defaultDiskType" value="${compute.defaultdisk}"/>
       <property name="operationTimeoutMilliseconds" value="${compute.operationTimeoutMilliseconds}"/>
    </bean>

Comment out (or delete the entry for) the default controller as shown below to disable it:

<!-- Default CONTROLLER -->
<!--
<bean id="apiBridge" class="com.red5pro.services.cloud.sample.component.DummyCloudController" init-method="initialize">
</bean>
-->

Modify Stream Manager App Properties (red5-web.properties)

(For detailed information, see Red5 Pro Stream Manager User Guide)

The Stream Manager's configuration details are stored in the red5-web.properties file, found in {red5prohome}/webapps/streammanager/WEB-INF/red5-web.properties. This is where the stream manager reads all of its settings. Each configurable setting is organized into its own section.

You will need to modify the following values:

DATABASE CONFIGURATION SECTION

  • config.dbHost={host} -- the IP address of your MySQL server instance
  • config.dbUser={username} -- username you set to connect to the MySQL instance
  • config.dbPass={password} -- password used to connect to the MySQL instance

NODE CONTROLLER CONFIGURATION SECTION - MILLISECONDS

  • instancecontroller.replaceDeadClusters=false -- by default, any cluster that becomes unresponsive will be shut down. Setting this value to true will automatically replace any clusters that have failed
  • instancecontroller.deleteDeadGroupNodesOnCleanUp=true -- by default, any unresponsive nodes will be deleted from the dashboard. Setting this value to false will stop the instances, but not delete them.

CLUSTER CONFIGURATION INFORMATION

  • cluster.password=changeme -- modify this to be the same as the password that you set in the cluster.xml file on your disk image.

LOADBALANCING CONFIGURATION

  • streammanager.ip={streammanager-static-ip} -- The static IP address used for the stream manager. This parameter is optional for a single stream manager setup, but required when you wish to set up multiple stream managers behind a load balancer.

GOOGLE COMPUTE CLOUD CONTROLLER CONFIGURATION

You will need to uncomment the following entries:

  • compute.project={project-id} -- your Google Cloud project ID
  • compute.defaultzone={zone-id} -- the default zone for your Google Cloud project
  • compute.defaultdisk=pd-standard -- do not modify this value
  • compute.operationTimeoutMilliseconds=20000 -- estimated time to start a new VM. We do not recommend modifying this value

AUTOSCALING MANAGEMENT SECTION

  • autoscale.scaleout.inductionTime=30000 -- This value should be approximately 2-3 times the value of the reportingSpeed property in the autoscale.xml file (which configures the autoscale plugin).
  • autoscale.scaleout.mode=competitive -- This value does not need to be edited. The default mode of second-generation autoscaling behavior is 'competitive'.

The reportingSpeed is usually configured when creating a cloud image.
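As a quick sanity check on that relationship: with the cluster.reportingSpeed of 10000 ms shown in the sample properties file later in this document, an inductionTime of 30000 ms sits at the 3x end of the recommended range:

```shell
REPORTING_SPEED=10000                      # cluster.reportingSpeed, in ms
INDUCTION_TIME=$((REPORTING_SPEED * 3))    # 2-3x reportingSpeed
echo "autoscale.scaleout.inductionTime=$INDUCTION_TIME"
```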

REST SECURITY SECTION

  • rest.administratorToken= -- You need to set a valid password string here before you start using the stream manager. This is the password that you will use to execute API commands

WEBSOCKET PROXY SECTION

Please note that if you modify any of the above values after your initial deployment, you will need to restart the Red5 Pro service.

ALARM THRESHOLD (no longer in the properties file)

The autoscaling alarm threshold is no longer set in the red5-web.properties file. Instead, the default value is 60%. If you want to modify this value, do so directly after node group creation using the Rest API for alarms calls. You can set different thresholds for origins and edges via the rest API.

Sample red5-web.properties file content:

 ## RED5 APP CONFIGURATION SECTION - Do Not Tamper
webapp.contextPath=/streammanager
webapp.virtualHosts=localhost, 127.0.0.1

 ## DATABASE CONFIGURATION SECTION
config.dbHost=192.168.0.100
config.dbPort=3306
config.dbUser=admin
config.dbPass=aBcD12345EfGhijk

 ## NODE CONTROLLER CONFIGURATION SECTION - MILLISECONDS
instancecontroller.newNodePingTimeThreshold=150000
instancecontroller.replaceDeadClusters=true
instancecontroller.deleteDeadGroupNodesOnCleanUp=false
instancecontroller.instanceNamePrefix=node
instancecontroller.nodeGroupStateToleranceTime=180000
instancecontroller.nodeStateToleranceTime=180000
instancecontroller.cloudCleanupInterval=180000
instancecontroller.blackListCleanUpTime=600000

## BEST ORIGIN EVALUATION SECTION
instanceevaluator.streams.metricweight=30
instanceevaluator.connections.metricweight=15
instanceevaluator.subscribers.metricweight=60

## CLUSTER CONFIGURATION INFORMATION
cluster.password=changeme
cluster.publicPort=1935
cluster.accessPort=5080
cluster.reportingSpeed=10000
cluster.retryDuration=30

## LOADBALANCING CONFIGURATION
streammanager.ip=

## CLOUD CONTROLLER CONFIGURATION SECTION  - MILLISECONDS

## AWS CLOUD CONTROLLER CONFIGURATION ##
#aws.defaultzone={default-region}
#aws.operationTimeoutMilliseconds={operation-timeout}
#aws.accessKey = {account-accessKey}
#aws.accessSecret = {account-accessSecret}
#aws.ec2KeyPairName = {keyPairName}
#aws.ec2SecurityGroup ={securityGroupName}
#aws.defaultVPC={boolean}

## GOOGLE COMPUTE CLOUD CONTROLLER CONFIGURATION ##
compute.project=root-random-131129
compute.defaultzone=us-east1
compute.defaultdisk=pd-standard
compute.operationTimeoutMilliseconds=20000

## Autoscaler SCALE POLICY SECTION - MINUTES
scalepolicy.store=scalingpolicies
scalepolicy.filescandelay=60

## LAUNCH CONFIGURATION SECTION - MINUTES
launchconfig.store=launchconfigs
launchconfig.filescandelay=60

## AUTOSCALING MANAGEMENT SECTION
autoscale.scaleout.mode=competitive

## REST SECURITY SECTION
rest.administratorToken=xyz123

## WEBSOCKET PROXY SECTION
proxy.enabled=false

Launch and Scaling Policy Configuration Files

The launch and scaling policy configuration files are node-group specific, and are referred to when you create a new node-group via the api. You can have multiple launch and scaling policy configuration files, if you want to have different policies for different node groups. If you add policy files, you need to restart the Red5 Pro service on the Stream Manager.

Note: DO NOT modify a policy file that is being used by an active node group.

Edit Launch Configuration File

Found in {red5prohome}/webapps/streammanager/WEB-INF/launchconfigs/

The only values that you need to edit are:

  • name - the name of the configuration file (for example, for default.json this is "default").
  • image - the name of the Red5 Pro server image that you created above in the Create Red5 Pro Image section, to be used for node creation.
  • instanceType - server instance type (machine-types). We recommend n1-standard-4 for most production environments (standard 4-CPU with 15 GB ram).
  • connectionCapacity - The number of concurrent connections that can be supported for one instance. This number will be referred to by the autoscale scaleout threshold percentage. The connection capacity will vary depending on your broadcast quality, client connection type, and server type. For QA purposes, you can set this value low (e.g., to 20), and set the Alarm Threshold lower than the default 60%, so that you can test the autoscaling feature without having to simulate 1,000 concurrent connections.
  • With the latest version of autoscaling, you can choose different server types and connection capacities for Origins and Edges if you wish.
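To see how connectionCapacity interacts with the alarm threshold, here is the arithmetic for the QA example above (capacity of 20 with the default 60% threshold):

```shell
CAPACITY=20      # connectionCapacity from the launch configuration
THRESHOLD=60     # scale-out alarm threshold, in percent (the default)

# A new edge is requested once connections reach threshold% of capacity.
TRIGGER=$((CAPACITY * THRESHOLD / 100))
echo "scale-out triggers at $TRIGGER connections"
```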

default-v2.json (sample):

[ Second generation autoscaling launch configuration ]

{
  "launchconfig": {
    "name": "default-v2",
    "description": "This is a sample version launch configuration for development",
    "image": "autoscaling-image-01",
    "version": "0.0.2",

    "targets": {
      "target": [
        {
            "role": "origin",
            "instanceType": "n1-standard-2",
            "connectionCapacity": "2000"
        },
        {
            "role": "edge",
            "instanceType": "n1-standard-4",
            "connectionCapacity": "4000"
        }
     ]
    },

    "properties": {
      "property": [
        {
          "name": "network",
          "value": "default"
        }
      ]
    },
    "metadata": {
      "meta": [
        {
          "key": "meta-name",
          "value": "meta-value"
        }
      ]
    }
  }
}

Edit Scaling Policy File

Scale Policy JSON files are stored in {red5prohome}/webapps/streammanager/WEB-INF/scalingpolicies

Sample Scaling Policy File, default-v2.json

[ Second generation autoscaling scale policy ]

{
    "policy": {
        "name": "default-v2",
        "description": "This is a sample scale policy file",
        "version": "0.0.2",
        "type": "com.red5pro.services.autoscaling.model.ScalePolicySchema",
        "targets": {
            "target": [
                {
                    "role": "edge",
                    "coolDownPeriod": "180000",
                    "estimatedWarmUpTime": "120000",
                    "minLimit": "2",
                    "maxLimit": "10",
                    "scaleAdjustment": "1"
                },
                {
                    "role": "origin",
                    "coolDownPeriod": "180000",
                    "estimatedWarmUpTime": "120000",
                    "minLimit": "1",
                    "maxLimit": "2",
                    "scaleAdjustment": "1"
                }
            ]
        }
    }
}

NOTE: The entire origin configuration object can be omitted if you do not wish to configure origin maxLimit. In that case it will default to 1.

The only values that you should edit are:

  • name - if you have more than one scaling policy, the name needs to match the filename.
  • edge, minLimit - the minimum number of edges to be in a node group (if you set this to 2, for example, then when you add an origin to a nodegroup, two edge servers will be spun up as well).
  • edge, maxLimit - the maximum number of edges to be in a node group.
  • origin, minLimit - the minimum number of origins to be in a node group, (if you set this to 2, for example, then when you add an origin to a nodegroup, a second origin server will be spun up as well).
  • origin, maxLimit - the maximum number of origins to be in a node group.

Start Red5 Pro Service to Use the Stream Manager

sudo /etc/init.d/red5pro start

Optional: Load-Balance Multiple Stream Managers

Prerequisites:

  • One reserved elastic IP address for each Stream Manager.
  • A registered Domain Name to associate with the reserved Load Balancer IP address.
  • Create the first Stream Manager per the above instructions, then create a snapshot from that instance. Build the second stream manager from that snapshot. It is essential that the config files be identical between the two stream managers with one exception:
  • Edit red5pro/webapps/streammanager/WEB-INF/red5-web.properties and, under ## LOADBALANCING CONFIGURATION, set streammanager.ip= to the assigned IP address of the individual Stream Manager instance you are modifying.
  • Add all Stream Manager public IP addresses to the Database security group.

Under Networking tab choose Load balancing.

loadbalancer01

Click on + CREATE LOAD BALANCER

Choose TCP Load Balancing, click on Start configuration.

loadbalancer02

Internet facing or internal only - choose From internet to my VMs; Connection termination - Do you want to offload SSL processing to the Load Balancer? - choose Yes (SSL Proxy) if you have an SSL cert; otherwise choose No (TCP). Click on Continue

loadbalancer03

Name your load balancer (eg, streammanager-loadbalancer), then click on Backend configuration.

loadbalancer04

Backend configuration: Choose the region where your stream managers are from the pull-down. Click on Select existing instances tab and add your two stream managers.

loadbalancer06

Create a health check: name your health check, and modify HTTP to use port 5080 (the Red5 Pro default). You can make the healthy/unhealthy thresholds (to remove/re-add servers in the pool accordingly) as aggressive as you like.

loadbalancer05

Frontend configuration: click on Create IP address to Reserve a new static IP for the Load Balancer

loadbalancer07 loadbalancer08

Review and finalize: look over the details, then click on Create

IMPORTANT You will need to create a new disk image - create a new VM from the original disk image, and modify {red5prohome}/conf/autoscale.xml to point to the Load Balancer IP address, then create a new disk image from this VM to use for your nodes.


API Basics, to get you started

(for full API, see Red5 Pro Stream Manager API)

Verify Stream Manager is Using the Correct Controller

GET call: http://<streammanager_IP>:5080/streammanager/api/2.0/admin/debug/cloudcontroller?accessToken=<rest.administratorToken from red5-web.properties file>

should return: Google Compute
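From the command line, the same check can be sketched with curl. The IP and token below are placeholders for your own checklist values:

```shell
SM_IP="203.0.113.10"   # placeholder: stream manager static IP
TOKEN="xyz123"         # placeholder: rest.administratorToken

URL="http://${SM_IP}:5080/streammanager/api/2.0/admin/debug/cloudcontroller?accessToken=${TOKEN}"
# A correctly configured deployment responds with "Google Compute".
curl -s --max-time 5 "$URL" || true
```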

Create a New Node Group (POST)

Using a tool like Postman, create a new node group via the API. baseCapacity is the minimum number of subscriber connections that this group should support (this parameter helps the scale-in process decide when to scale down an edge).

POST call: http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup?accessToken=<rest.administratorToken from red5-web.properties file>

Data (make sure to select JSON as the body type):

{
 "regions": [
   "<google region, e.g.: us-east1>"
 ],
 "launchConfig": "default-v2",
 "scalePolicy": "default-v2"
}

postman

Note the cluster “name” that is returned by the above call. It will be used to create a new Origin server.
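If you prefer the command line to Postman, the same request can be sketched with curl. The IP, token, and region below are placeholders:

```shell
SM_IP="203.0.113.10"   # placeholder: stream manager static IP
TOKEN="xyz123"         # placeholder: rest.administratorToken

URL="http://${SM_IP}:5080/streammanager/api/2.0/admin/nodegroup?accessToken=${TOKEN}"
BODY='{"regions":["us-east1"],"launchConfig":"default-v2","scalePolicy":"default-v2"}'

# Returns the new group, including the cluster "name" used in later calls.
curl -s --max-time 5 -X POST -H "Content-Type: application/json" \
  -d "$BODY" "$URL" || true
```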

Launch New Origin (POST)

After you create a node group, create the origin server. Creating an origin server will also generate at least one edge, per the scaling policy's min limit.

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup/<cluster-name>/node/origin?accessToken=<rest.administratorToken from red5-web.properties file>

NOTE: If you wish to launch more than one origin, you can repeat the call. The maximum origins allowed will depend on the maxLimit attribute of the 'origin' object described in scale policy. If 'origin' is omitted from the scale policy then the default value for maximum origins is 1.

Set Alarm Threshold (POST)

By default, the alarm threshold (the capacity percentage at which the cluster will scale up) is set to 60%. To modify this, POST the following:

FOR EDGE:

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/alarm/scaleout/default?type=edge&threshold=<threshold>&accessToken=<rest.administratorToken from red5-web.properties file>

FOR ORIGIN:

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/alarm/scaleout/default?type=origin&threshold=<threshold>&accessToken=<rest.administratorToken from red5-web.properties file>

LIST GROUPS (GET)

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup?accessToken=<rest.administratorToken from red5-web.properties file>


Stream Manager Publish and Subscribe Examples

Red5 Pro HTML5 SDK Examples:

Publish - Stream Manager

Subscribe - Stream Manager

If you are publishing/subscribing using the Stream Manager SSL Proxy, use:

Publish - Stream Manager Proxy

Subscribe - Stream Manager Proxy

Note: the streaming-html5 examples testbed is included with the Red5 Pro server distribution, and can be accessed via your stream manager at https://your.server.url/webrtcexamples/.

Red5 Pro iOS SDK Examples:

Publish - Stream Manager

Subscribe - Stream Manager

Red5 Pro Android SDK Examples:

Publish - Stream Manager

Subscribe - Stream Manager

Troubleshooting

  1. If you have created a new nodegroup, but adding an Origin server doesn't automatically spin up an accompanying Edge server, then you probably didn't modify red5pro/conf/autoscale.xml in your Red5 Pro server image to point to the IP address of your stream manager (replace 0.0.0.0 in <property name="cloudWatchHost" value="http://0.0.0.0:5080/streammanager/cloudwatch"/>), and/or didn't change the active value from the default false to true (e.g., <property name="active" value="true"/>).
  2. You can see the role of a node in the Custom metadata on an instance's property page custommeta