Deploying Stream Manager and Autoscaling on Amazon Web Services (AWS)

Overview:

This document assumes that you have basic knowledge of AWS EC2 management and basic Linux administration skills. If you need more detailed information, please contact us.

In order to use the Red5 Pro Stream Manager service you will need the following:

  1. Latest Red5 Pro Server build
  2. The aws-cloud-controller.jar, from the Red5 Pro Autoscaling Library Extensions section
  3. An active Red5 Pro license key (needed for clustering) REGISTER HERE
  4. An active AWS account

Before you Begin:

NOTE: Because of the regional structure of AWS EC2, many of the steps in this process must be executed in every region that you choose to include in autoscaling.

You will want to keep a record of the usernames, passwords, IP addresses, and other information generated during the setup process, as you will need the information for stream manager configuration and future operations via the API. Click here to download a handy list for tracking all of your Red5 Pro Autoscaling values

1. Get Security Credentials (IAM)

IAM

Obtain security credentials for stream manager AWS account access:

Give the stream manager account full access to EC2:

2. Create EC2 SSH2-RSA Keypair

This key pair will be used to authorize instance operations across all regions.

Amazon EC2 key pairs are region-specific, so you will need to manually create an SSH2-RSA key pair to be used by the stream manager across all regions.

Generate Your SSH2-RSA Key Pair:

On Mac/Linux:
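A minimal sketch using OpenSSH's ssh-keygen (the key name red5proautoscaling is an example; add a passphrase if you prefer):

```shell
# Generate a 2048-bit SSH2-RSA key pair for the stream manager (key name is an example)
KEY="$HOME/.ssh/red5proautoscaling"
mkdir -p "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t rsa -b 2048 -N "" -f "$KEY" -C "red5pro-stream-manager"
# The public half (.pub) is what you will import into each EC2 region:
cat "$KEY.pub"
```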

On Windows:

Import Public Key to EC2 Regions:

Note: you will need to repeat this process in every region you wish to include in your autoscaling solution.

To Import your public key:

  1. Navigate to the EC2 Dashboard
  2. Copy your public key to a non-hidden directory (hidden directories such as .ssh are not visible in the import dialog's file browser)
  3. In the left-side navigation, under NETWORK & SECURITY click on Key Pairs
  4. Click the Import Key Pair button to open the import window. Browse to and select the public key (.pub) file you just created.
  5. Click the Import button to finish importing the key pair into your current region.
  6. Navigate to each other region that you wish to include in autoscaling and repeat steps 3-5

importkey
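The per-region import can also be scripted with the AWS CLI. This sketch only prints the commands (remove the leading echo to execute); the region list and key name are examples, a configured AWS CLI is assumed, and the fileb:// prefix requires AWS CLI v2:

```shell
# Print an import-key-pair command for each autoscaling region
REGIONS="us-west-1 us-east-1 eu-west-1"
for region in $REGIONS; do
  echo aws ec2 import-key-pair \
    --key-name red5proautoscaling \
    --public-key-material "fileb://$HOME/.ssh/red5proautoscaling.pub" \
    --region "$region"
done
```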

3. Reserve Elastic (Static) IP for Stream Manager

It is critical that the Stream Manager have a static IP address, so that in the event that the instance is rebooted, it will retain the same public IP address.

To reserve an elastic IP address:

4. Create VPCs and Security Groups

To be used for Red5 Pro Autoscaling Node Groups

Note: you will need to repeat this entire four-step process in every region you wish to include in your autoscaling solution.

Create VPC

createVPC

* Click on Create VPC

Modify the Subnet Properties

Add Additional Subnets to the VPC, for Each Zone

Repeat the following for all zones that are available in the selected region:

addzones changeroute

Create Security Group with Red5 Pro Ports

IMPORTANT: The security group name must be unique within an individual region, and the same name must be used in all of the regions you wish to include in your autoscaling solution. The stream manager uses the security group name to identify where to create a new node.

Port Description
5080 default web access of Red5 Pro
1935 default RTMP port
8554 default RTSP port
6262 websockets (for HLS)
22 SSH (optional)
8088 second screen client (optional)
8081 websocket (for RTC subscriber)

inboundrules
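The inbound rules above can be scripted with the AWS CLI. This sketch only prints the commands (remove the leading echo to execute); the group name and region are examples, and note that --group-name works in the default VPC only — use --group-id for a non-default VPC:

```shell
# Print an authorize-security-group-ingress command for each Red5 Pro port
GROUP=red5pro-nodes
REGION=us-west-1
for port in 5080 1935 8554 6262 22 8088 8081; do
  echo aws ec2 authorize-security-group-ingress \
    --group-name "$GROUP" --protocol tcp --port "$port" \
    --cidr 0.0.0.0/0 --region "$REGION"
done
```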

5. Prepare MySQL Database

NOTE: For optimal performance, the Stream Manager and database should be in the same region.

To Create the Database for Autoscaling:

Network & Security:

Database Options:

Configure Stream Manager access to this security group:

6. Connect to MySQL and Add Cluster Schema

First, we need to [set log_bin_trust_function_creators to 1 on RDS](http://getasysadmin.com/2011/06/amazon-rds-super-privileges/).

To connect to the MySQL instance that we just created, we suggest using the free MySQL Workbench tool, which can be downloaded from https://dev.mysql.com/downloads/workbench/.
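If you prefer the command line, the mysql client can connect to the same instance. The endpoint and user below are placeholders from the RDS step; run the printed command from a host allowed by the DB security group:

```shell
# Connection details recorded during the RDS setup (placeholders)
DB_HOST=streammanager-db.abcdefghi.us-west-1.rds.amazonaws.com
DB_USER=streammanager
# Print the connect command (remove the echo to run it interactively)
echo mysql -h "$DB_HOST" -P 3306 -u "$DB_USER" -p
```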

7. Prepare Red5 Pro AMI for Nodes

Note: After creating the server AMI, you will need to copy it to every region you wish to include in your autoscaling solution, per https://aws.amazon.com/blogs/aws/ec2-ami-copy-between-regions/

Note #2: The AWS platform allows multiple images with the same name, so make sure to use a unique name when creating the image to be used for autoscaling. If your region has more than one resource with the same name, the first one returned by the fetch images API operation will be used.

Note #3: If you are going to use load-balanced stream managers, you need to use the load balancer DNS name in the autoscale.xml file of your AMI.

Launch an Instance and Install Red5Pro

Start a Basic Instance:

Install Red5Pro on the Instance:

Install NTP (network time protocol)

NTP is necessary to ensure that the Stream Manager and all nodes are in sync.

sudo apt-get install ntp

The service should start automatically once installed.

Also, make sure that the server time zone is UTC (this should be the default on AWS instances). Type date at the prompt; this will return the date, time, and zone (e.g. Tue Dec 13 20:21:49 UTC 2016). If you need to switch to UTC, execute sudo dpkg-reconfigure tzdata, scroll to the bottom of the Continents list and select None of the above; then, in the second list, select UTC.
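The time-zone check above can be scripted as a quick sanity test before imaging the instance:

```shell
# Verify the server time zone is UTC before imaging the instance
tz=$(date +%Z)
if [ "$tz" = "UTC" ]; then
  echo "Time zone OK: UTC"
else
  echo "Time zone is $tz - run: sudo dpkg-reconfigure tzdata"
fi
```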

Configure Autoscaling on the Instance:

Navigate to the location where you extracted the red5pro files, and edit the file {red5prohome}/conf/autoscale.xml

<bean name="config" class="com.red5pro.clustering.autoscale.Configuration">
    <property name="active" value="false"/>
    <property name="cloudWatchHost" value="http://0.0.0.0:5080/live/cloudwatch"/>
</bean>

Enable Clustering:

Edit the {red5pro}/webapps/red5-default.xml file. Remove the comment lines above and below the clusterServiceResolver bean to uncomment it:

 <!-- Uncomment as part of the clustering configuration 
 <bean id="clusterServiceResolver" class="com.red5pro.cluster.plugin.ClusterServiceResolver"/>
 -->
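After removing those two comment lines, only the bean definition remains active:

```xml
<bean id="clusterServiceResolver" class="com.red5pro.cluster.plugin.ClusterServiceResolver"/>
```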

Remove Unnecessary Webapps:

To ensure a faster startup time for your server, we suggest deleting any webapps that you will not be using from the {red5pro}/webapps directory (for example: secondscreen, template, vod, streammanager). For ease of setup for the Stream Manager, leave a zipped copy of the server distribution on the server.

Verifying Red5 Pro on Instance:

Start the Red5 Pro service: sudo /etc/init.d/red5pro start

Once you have started the Red5 Pro Server, you can verify that it is running and available by opening a web browser and navigating to http://xxx.compute-1.amazonaws.com:5080, where xxx.compute-1.amazonaws.com is the Public DNS value of your instance.
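As a sketch, the same check can be done from a shell; the hostname below is a placeholder for your instance's Public DNS:

```shell
# Build the verification URL from the instance's Public DNS (placeholder value)
HOST=ec2-203-0-113-25.compute-1.amazonaws.com
URL="http://${HOST}:5080/"
echo "Open in a browser, or: curl -sf $URL"
```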

After you've verified, stop the Red5 Pro service: sudo /etc/init.d/red5pro stop

Create AMI Image from Instance

To create an AMI, to be used by autoscaling for node creation:

Copy Red5 Pro AMI to Other Regions:

You will need to repeat the following steps in each region where you wish to run Red5 Pro autoscaling node groups.

8. Configure and Deploy Stream Manager on AWS

Note: after the Stream Manager instance is started up, you will need to add its Private IP address to the Database Security group, per the DB security step above

New Server Instance:

Port Description
22 SSH
5080 default web access of Red5Pro / Stream Manager

Associate the reserved IP address with this instance:

Configure the Stream Manager Instance

Install NTP (network time protocol)

NTP is necessary to ensure that the Stream Manager and all nodes are in sync.

sudo apt-get install ntp

The service should start automatically once installed.

Also, make sure that the server time zone is UTC (this should be the default on AWS instances). Type date at the prompt; this will return the date, time, and zone (e.g. Tue Dec 13 20:21:49 UTC 2016). If you need to switch to UTC, execute sudo dpkg-reconfigure tzdata, scroll to the bottom of the Continents list and select None of the above; then, in the second list, select UTC.

Import and Activate AWS Cloud Controller

Copy the aws-cloud-controller.jar into {red5prohome}/webapps/streammanager/WEB-INF/lib/

Edit the applicationContext.xml file, located at {red5prohome}/webapps/streammanager/WEB-INF/applicationContext.xml per the following:

<!-- AWS CONTROLLER -->
<bean id="apiBridge" class="com.red5pro.services.cloud.aws.component.AWSInstanceController" init-method="initialize">
<property name="accessKey" value="${aws.accessKey}"/>
<property name="accessSecret" value="${aws.accessSecret}"/>
<property name="ec2KeyPairName" value="${aws.ec2KeyPairName}"/>
<property name="ec2SecurityGroup" value="${aws.ec2SecurityGroup}"/>
<property name="defaultZone" value="${aws.defaultzone}"/>
<property name="operationTimeoutMilliseconds" value="${aws.operationTimeoutMilliseconds}"/>
</bean>

Comment out (or delete the entry for) the default controller as shown below to disable it:

<!-- Default CONTROLLER -->
<!--
<bean id="apiBridge" class="com.red5pro.services.cloud.sample.component.DummyCloudController" init-method="initialize">
</bean>
 -->

Modify Stream Manager App Properties (red5-web.properties)

Refer to your AWSAutoScalingChecklist.rtf for the data needed to complete this section. For more details on the red5-web.properties file sections, please read here.

The Stream Manager’s configuration details are stored in the red5-web.properties file, found at {red5prohome}/webapps/streammanager/WEB-INF/red5-web.properties. The stream manager reads all of its settings from this file; each configurable setting is organized into its own section.

You will need to modify the following values:

DATABASE CONFIGURATION SECTION

NODE CONTROLLER CONFIGURATION SECTION - MILLISECONDS

CLUSTER CONFIGURATION INFORMATION

LOADBALANCING CONFIGURATION

AWS CLOUD CONTROLLER CONFIGURATION

You will need to un-comment and configure the following entries:

REST SECURITY SECTION

Please note that if you modify any of the above values after your initial deployment, you will need to restart the Red5 Pro service.

ALARM THRESHOLD (no longer in the properties file)

The autoscaling alarm threshold is no longer set in the red5-web.properties file; instead, it defaults to 60%. If you want to modify this value, do so directly after node group creation using the REST API alarm calls. You can set different thresholds for origins and edges via the REST API.

Sample red5-web.properties file content:

 ## RED5 APP CONFIGURATION SECTION - Do Not Tamper
webapp.contextPath=/streammanager
webapp.virtualHosts=localhost, 127.0.0.1

 ## DATABASE CONFIGURATION SECTION
config.dbHost=streammanager-db.abcdefghi.us-west-1.rds.amazonaws.com
config.dbPort=3306
config.dbUser=streammanager
config.dbPass=123445ffff

 ## NODE CONTROLLER CONFIGURATION SECTION - MILLISECONDS
instancecontroller.newNodePingTimeThreshold=150000
instancecontroller.replaceDeadClusters=true
instancecontroller.deleteDeadGroupNodesOnCleanUp=false
instancecontroller.instanceNamePrefix=node
instancecontroller.nodeGroupStateToleranceTime=180000
instancecontroller.nodeStateToleranceTime=180000
instancecontroller.cloudCleanupInterval=180000
instancecontroller.blackListCleanUpTime=600000

## BEST ORIGIN EVALUATION SECTION
instanceevaluator.streams.metricweight=30
instanceevaluator.connections.metricweight=15
instanceevaluator.subscribers.metricweight=60

## CLUSTER CONFIGURATION INFORMATION
cluster.password=changeme
cluster.publicPort=1935
cluster.accessPort=5080
cluster.reportingSpeed=10000
cluster.retryDuration=30

## LOADBALANCING CONFIGURATION
streammanager.ip=

## CLOUD CONTROLLER CONFIGURATION SECTION  - MILLISECONDS

## AWS CLOUD CONTROLLER CONFIGURATION ##
aws.defaultzone=us-west-1b
aws.operationTimeoutMilliseconds=20000
aws.accessKey=ABCDEFGHIJKLMNOPQ
aws.accessSecret=12344556778abcdeFGHIJKLMNOPabud12345
aws.ec2KeyPairName=red5proautoscaling
aws.ec2SecurityGroup=red5pro-nodes
aws.defaultVPC=false

## GOOGLE COMPUTE CLOUD CONTROLLER CONFIGURATION ##
#compute.project={project-id}
#compute.defaultzone={zone-id}
#compute.defaultdisk=pd-standard
#compute.operationTimeoutMilliseconds={operation-timeout}

## Autoscaler SCALE POLICY SECTION - MINUTES
scalepolicy.store=scalingpolicies
scalepolicy.filescandelay=60

## LAUNCH CONFIGURATION SECTION - MINUTES
launchconfig.store=launchconfigs
launchconfig.filescandelay=60

## AUTOSCALING MANAGEMENT SECTION
autoscale.scaleout.mode=competitive

## REST SECURITY SECTION
rest.administratorToken=xyz123

Edit Launch Configuration File

Launch Configuration JSON files are stored in {red5prohome}/webapps/streammanager/WEB-INF/launchconfigs

NOTE: Not all server types are available in all EC2 regions. You may need to create special launch config files for some regions.

Sample Launch Configuration File, default-v2.json:

{
  "launchconfig": {
    "name": "aws-default-v2",
    "description": "This is a sample version launch configuration for second generation autoscaling for aws",
    "image": "red5pro-server-v110",
    "version": "0.0.2",

    "targets": {
      "target": [
        {
            "role": "origin",
            "instanceType": "c4.xlarge",
            "connectionCapacity": "2000"
        },
        {
            "role": "edge",
            "instanceType": "c4.large",
            "connectionCapacity": "2000"
        }       
     ]
    },

    "properties": {
      "property": [
        {
          "name": "network",
          "value": "default"
        }
      ]
    },
    "metadata": {
      "meta": [
        {
          "key": "role",
          "value": "unassigned"
        }
      ]
    }
  }
}

The only values that you need to edit are:

Edit Scaling Policy File

Scale Policy JSON files are stored in {red5prohome}/webapps/streammanager/WEB-INF/scalingpolicies

Sample Scaling Policy File, default-v2.json

[ Second generation autoscaling scale policy ]

{
    "policy": {
        "name": "default-v2",
        "description": "This is a sample scale policy file",
        "version": "0.0.2",
        "type": "com.red5pro.services.autoscaling.model.ScalePolicySchema",
        "targets": {
            "target": [
                {
                    "role": "edge",
                    "coolDownPeriod": "180000",
                    "esitmatedWarmUpTime": "120000",
                    "minLimit": "2",
                    "maxLimit": "10",
                    "scaleAdjustment": "1"
                },
                {
                    "role": "origin",
                    "coolDownPeriod": "180000",
                    "esitmatedWarmUpTime": "120000",
                    "minLimit": "1",
                    "maxLimit": "2",
                    "scaleAdjustment": "1"
                }
            ]
        }
    }
}

NOTE: The entire origin configuration object can be omitted if you do not wish to configure origin maxLimit. In that case it will default to 1.

The only values that you should edit are:

Start Red5 Pro Service to Use the Stream Manager

sudo /etc/init.d/red5pro start

Optional: Load-Balance Multiple Stream Managers

Prerequisites:

Create a Target Group

Create and Configure Load Balancer

Add Stream Manager Instances to the Target Group

loadbalance01

IMPORTANT: You will need to create a new disk image: launch a new VM from the original AMI, modify {red5prohome}/conf/autoscale.xml to point to the Load Balancer DNS, then create a new AMI from that VM to use for your nodes.


API Basics, to get you started

(for full API, see Red5 Pro Stream Manager API)

Verify Stream Manager is Using the Correct Controller

GET call:

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/debug/cloudcontroller?accessToken=<rest.administratorToken from red5-web.properties file>

should return: Amazon Web Services
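For example, from a shell (the IP and token below are placeholders — substitute your stream manager's address and your rest.administratorToken value):

```shell
SM_IP=203.0.113.10   # placeholder stream manager IP
TOKEN=xyz123         # placeholder rest.administratorToken
URL="http://${SM_IP}:5080/streammanager/api/2.0/admin/debug/cloudcontroller?accessToken=${TOKEN}"
echo "GET $URL"
# On a live deployment: curl -s "$URL"   (should return: Amazon Web Services)
```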

Create a New Node Group (post)

Using a tool like Postman (http://www.getpostman.com/), create a new node group via the API. baseCapacity is the minimum number of subscriber connections that this group should support. (This parameter helps the scale-in process decide when to scale down an edge.)

POST call:
http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup?accessToken=<rest.administratorToken from red5-web.properties file>

Data (make sure to select JSON as the body type):

{
 "regions": [
   "us-west-1"
 ],
 "launchConfig": "default-v2",
 "scalePolicy": "default-v2"
}

postman
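The same call can be made with curl instead of Postman. This is a sketch: the IP and token are placeholders, and the curl line is commented out so nothing is created accidentally:

```shell
SM_IP=203.0.113.10   # placeholder stream manager IP
TOKEN=xyz123         # placeholder rest.administratorToken
URL="http://${SM_IP}:5080/streammanager/api/2.0/admin/nodegroup?accessToken=${TOKEN}"
echo "POST $URL"
# On a live deployment:
# curl -s -X POST "$URL" -H "Content-Type: application/json" \
#   -d '{"regions":["us-west-1"],"launchConfig":"default-v2","scalePolicy":"default-v2"}'
```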

Note the cluster “name” that is returned by the above call. It will be used to create a new Origin server.

Launch New Origin (post)

After you create a node group, create the origin server. Creating an origin server will also generate at least one edge, per scaling policy min limit.

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup/<cluster-name>/node/origin?accessToken=<rest.administratorToken from red5-web.properties file>

NOTE: If you wish to launch more than one origin, you can repeat the call. The maximum origins allowed will depend on the maxLimit attribute of the 'origin' object described in scale policy. If 'origin' is omitted from the scale policy then the default value for maximum origins is 1.

Set Alarm Threshold (POST)

By default, the alarm threshold (the capacity percentage at which the cluster will scale up) is set to 60%. To modify this, POST the following:

FOR EDGE:

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/alarm/scaleout/default?type=edge&threshold=<threshold>&accessToken=<rest.administratorToken from red5-web.properties file>

FOR ORIGIN:

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/alarm/scaleout/default?type=origin&threshold=<threshold>&accessToken=<rest.administratorToken from red5-web.properties file>

LIST GROUPS (get)

http://<streammanager_IP>:5080/streammanager/api/2.0/admin/nodegroup?accessToken=<rest.administratorToken from red5-web.properties file>


Troubleshooting

  1. If you have created a new node group, but adding an origin server doesn't automatically spin up an accompanying edge server, then you probably didn't modify the red5pro/conf/autoscale.xml in your Red5 Pro server image to point to the IP address of your stream manager (replace 0.0.0.0 in <property name="cloudWatchHost" value="http://0.0.0.0:5080/streammanager/cloudwatch"/>), and/or didn't change the active value from the default “false” to “true” (<property name="active" value="true"/>).
  2. You can see the role of a node on the Tags tab in the EC2 console. instancetags