Red5 Pro Stream Manager User Guide

Red5 Pro Stream Manager is a streaming architecture management and information service which helps automate the process of creating and deleting Red5 Pro server instances. Stream Manager also coordinates between broadcasters and subscribers, helping them find the right servers for their broadcast and subscribe actions respectively.

Red5 Pro Stream Manager provides you with accurate stream statistics over simple HTTP-based REST API calls once publishing has started for a stream.

Red5 Pro Stream Manager provides automatic traffic management with the Red5 Pro Autoscaler component. Your server fleet automatically expands and contracts as traffic increases and decreases over time. This reduces unnecessary server usage, thereby reducing your cloud platform bills.


Concepts

Node

A Node refers to a single server instance in the streaming architecture. Each Node may belong to a NodeGroup, with a specific instance role (EDGE, ORIGIN) assigned to it.

A Node is configured using the launch configuration defined in the NodeGroup that it belongs to. The lifecycle of a Node is tied to the NodeGroup that it belongs to. Once the group is set to be deleted, each node belonging to the group is deleted as well.

NodeGroup

A NodeGroup is a virtual grouping of one or more servers. Each NodeGroup is identified uniquely by a group id or name.

Each group can have one or more origin(s) and one or more edge(s) in it. You can push one or more origin(s) into a group while initializing it. No origins can be added once the group has even one edge, so if you want more than one origin in your group, you need to add it before your first origin has completed its instance warm-up time.

Each group also has an associated launch configuration which describes what kind of cloud platform machine to use when launching instances for this group. Each node type in a group can have its own launch configuration definition in the launch configuration file.

Throughout this document we may use the terms NodeGroup or cluster to describe a group of servers with one or more origin(s) and one or more edge(s).

Red5 Pro Stream Manager

Red5 Pro Stream Manager is a Red5 Pro web application which encapsulates and manages various responsibilities of a Red5 Pro streaming architecture such as:

  • Cloud platform instance deployment
  • Red5 Pro server node management
  • Red5 Pro NodeGroup management
  • Red5 Pro Autoscaling
  • Red5 Pro Node monitoring
  • Coordinating publishers and subscribers to appropriate node endpoints for stream requests

Red5CloudWatch

Red5CloudWatch is a subcomponent within Red5 Pro Stream Manager which acts as a communication bridge between an active Red5 Pro node and the Stream Manager.

Technically it is a Java servlet which formats and relays incoming HTTP calls from a Red5 Pro node to the Red5CloudWatchClient subcomponent, which is also located inside the Stream Manager.

The Red5CloudWatch relays important notifications such as:

  • Cluster Report: NodeGroup statistics about load and streams
  • Stream publish notification
  • Stream unpublish notification
  • Node role assignment call

Red5CloudWatchClient

Red5CloudWatchClient is a subcomponent inside Red5 Pro Stream Manager which is responsible for processing notifications relayed by the Red5CloudWatch servlet.

The Red5CloudWatchClient also works as an alarm manager, processing alarm conditions and dispatching appropriate notifications to the Autoscaler component for each NodeGroup.

CloudSweeper

CloudSweeper is an internal cron process that checks for instances running on the cloud platform that were spawned by the current Stream Manager but are not registered in the database. You can configure it via the instancecontroller.cloudCleanupInterval property in the red5-web.properties file.

DeadNodeCleaner

DeadNodeCleaner is an internal cron process that checks for stranded nodes, stranded node groups and unresponsive origin nodes. Stream Manager automatically configures the run-time for this cron process using the reportingSpeed property value from the CLUSTER CONFIGURATION SECTION located in the red5-web.properties file.

NodeSynchronizer

NodeSynchronizer is an internal one-time task which runs when Stream Manager initializes and attempts to sync the IP addresses of nodes in the database with those on the cloud platform.

Autoscaler

The Autoscaler component in Red5 Pro Stream Manager handles autoscaling activities. The Autoscaler receives alarm notifications from the Red5CloudWatchClient accompanied by any necessary data.

The Autoscaler then uses information such as the launch configuration and scale policy for the group to perform the autoscaling operation dictated by the alarm.

Once an autoscale operation is launched successfully, the Autoscaler invokes a cooldown for the current NodeGroup to ensure that it does not react to any more alarms from the NodeGroup until the current autoscale operation has taken effect.

Launch Configuration

A launch configuration is a configuration definition designed to help the Red5 Pro Stream Manager launch instances on the cloud platform.

Configurations are stored on Stream Manager local file system as JSON files. Since each cloud platform has a fixed number of VM machine types, we can have a small fixed number of configuration files deployed on the Stream Manager's file system. Each launch configuration defines an important set of parameters such as Red5 Pro image name, machine type to use, estimated connection capacity of the instance, etc.

UPDATE:

As the streammanager autoscaling feature has undergone changes, the launch configuration format has evolved to better define the launch configuration information. The first generation of autoscaling (older version) was simple but rigid, and the configuration file defined origin and edge information in a fixed, discrete manner. The second generation (current version) makes the configuration information more flexible, future-proof and detailed.

FIRST GENERATION LAUNCH CONFIGURATION

  • Rigid in defining role based configuration, specific to edge and origin
  • Same connection capacity definition for all node types.
  • Same instance type definition for all node types.

A first generation autoscaling launch configuration template looks like this:

File: standard-casual.json

{
  "launchconfig": {
    "name": "standard-casual",
    "description": "First generation launch configuration template sample",
    "image": "<image-name>",
    "instanceType": "<instance-type>",
    "connectionCapacity": "<instance-capacity>",
    "properties": {
      "property": [
        {
          "name": "<property-name>",
          "value": "<property-value>"
        }
      ]
    },
    "metadata": {
      "meta": [
        {
          "key": "<meta-name>",
          "value": "<meta-value>"
        }
      ]
    }
  }
}

SECOND GENERATION LAUNCH CONFIGURATION

Second generation autoscaling launch configuration introduces more control over the autoscaling feature, such as:

  • Different instance types for origins and edges.
  • Different connection capacities for origins and edges.
  • Configuration file versioning for better information
  • Instance role-based configuration section in JSON file

A Stream Manager autoscaling (second generation) launch configuration template looks like this:

{
  "launchconfig": {
    "name": "standard-casual-v2",
    "description": "Second generation launch configuration template sample",
    "image": "<image-name>",
    "version": "0.0.2",

    "targets": {
      "target": [
        {
            "role": "origin",
            "instanceType": "<instance-type>",
            "connectionCapacity": "<instance-capacity>"
        },
        {
            "role": "edge",
            "instanceType": "<instance-type>",
            "connectionCapacity": "<instance-capacity>"
        }
     ]
    },

    "properties": {
      "property": [
        {
          "name": "<property-name>",
          "value": "<property-value>"
        }
      ]
    },
    "metadata": {
      "meta": [
        {
          "key": "<meta-name>",
          "value": "<meta-value>"
        }
      ]
    }
  }
}

NOTE: The latest streammanager build remains backward compatible with the first generation format. However, the first generation format is deprecated, and it is recommended that going forward you use the second generation format only.

Scale Policy

A Scale Policy defines simple rules for scaling a NodeGroup. Generally we have two types of scaling activities: scale-in (contracting a group) and scale-out (expanding a group). A policy directs how to scale-in / scale-out a group. Policies are stored on Stream Manager’s local file system as JSON files.

UPDATE

As the streammanager autoscaling feature has undergone changes, the scale policy has also evolved to better define scaling rules. The first generation of autoscaling was simple but rigid and focused on defining origin and edge information in a specific manner. Additionally, no scaling information could be defined for origin nodes.

The second generation (current version) provides role-targeted scaling information and allows you to completely omit the origin entry if you don't want to specify any information regarding it. In addition, the second generation allows each nodegroup to follow a different scale policy.

FIRST GENERATION SCALE POLICY

  • Rigid in defining configuration, specific to edge and origin
  • No support for origin information
  • Mandatory inclusion of origin config object.

The first generation autoscaling Stream Manager build allowed only one scale policy file (default) for all nodegroups in the system.

A first generation scale policy template looks like this:

File: default.json

{
    "policy": {
        "name": "default>",
        "description": "<policy-description>",
        "version": "0.0.1",
        "type": "com.red5pro.autoscaling.policy.SimpleScaling",
        "origin": {
            "_required": "false"
        },
        "edge": {
            "coolDownPeriod": "<cooldown-period-in-milliseconds>",
            "estimatedWarmUpTime": "<warmup-time-in-milliseconds>",
            "minLimit": "<minimum-edges>",
            "maxLimit": "<maximum-edges>",
            "scaleAdjustment": "<scale-step>",
            "_required": "true"
        }
    }
}

SECOND GENERATION SCALE POLICY

  • Role targeted scale configuration data
  • Supports origin configuration data
  • Origin config object can be excluded if it is not needed. Edge config data is mandatory.

The second generation autoscaling Stream Manager build allows different scale policy files for different nodegroups in the system.

A second generation scale policy template looks like this:

File: default-v2.json

{
    "policy": {
        "name": "default-v2",
        "description": "<policy-description",
        "version": "0.0.2",
        "type": "com.red5pro.services.autoscaling.model.ScalePolicySchema",
        "targets": {
            "target": [
                {
                    "role": "edge",
                    "coolDownPeriod": "<cooldown-period-in-milliseconds>",
                    "estimatedWarmUpTime": "<warmup-time-in-milliseconds>",
                    "minLimit": "<minimum-edges>",
                    "maxLimit": "<maximum-edges>",
                    "scaleAdjustment": "<scale-step>",
                },
                 {
                    "role": "origin",
                    "coolDownPeriod": "<cooldown-period-in-milliseconds>",
                    "estimatedWarmUpTime": "<warmup-time-in-milliseconds>",
                    "minLimit": "<minimum-edges>",
                    "maxLimit": "<maximum-edges>",
                    "scaleAdjustment": "<scale-step>",
                }
            ]
        }
    }
}

Cool Down Period

Cool Down Period is a time span during which the autoscaling mechanism relaxes after an autoscaling operation is performed for the NodeGroup. This ensures that once autoscaling starts a new instance, it waits until the new node effectively becomes part of the intended NodeGroup and the load is balanced before a new Alarm is considered.

Generally the Cool Down Period should be estimated using the following equation: instance warm up time + approximate clustering time, where clustering time is an additional time buffer covering the induction of the new node into the node group.

The current release of Autoscaler is designed to work with Google Cloud Platform and Amazon Web Services.

On Google Cloud Platform the Cool Down Period is estimated by the following equation: warm up time + induction time, where induction time is the time taken for a node to be inducted into a group after it is ready.

As an example:

105 seconds + 15 seconds tolerance = 120 seconds

This can be broken down as:

  • 45 seconds for machine startup
  • 45 seconds for Red5 Pro service to become active
  • 15 seconds for new node induction into group

NOTE: Stream Manager honors coolDownPeriod only when it runs autoscaling in rigid mode. By default the autoscaling mode is set to competitive.

Instance Warm Up Time

Instance Warm Up Time refers to the time required by a new Red5 Pro instance to reach a stable INSERVICE state from the time it was launched.

This time needs to be evaluated after careful testing. It may very well depend on your choice of cloud platform's instance type offering and any special settings that your instance configuration may contain.

With Google Cloud Platform, the Warm Up Time for instances is estimated to be 105 seconds.

This can be broken down as:

  • 45 seconds for machine startup
  • 45 seconds for Red5 Pro service to become active

Tested with Google Compute machine types g1-small and n1-standard-1.

Metric

Metrics are system attributes which influence scaling decisions - e.g., CPU usage, RAM usage, etc. In the context of Red5 and Red5 Pro, important metrics would be CPU usage, JVM memory, connections and streams.

Metrics are used to set Alarms internally by defining Thresholds. When a Metric value violates the defined threshold value, this is termed a "Threshold violation".

The current version of Autoscaler concerns itself only with CONNECTION LOAD PERCENTAGE, and hence the only Metric it deals with is CONNECTIONS.

Threshold

Thresholds are bounds for metric values. For example, an upper threshold for connections may be defined as 80%, which is the trigger point of an Alarm.

Thus if the connection load goes over 80%, for example, it would indicate a threshold violation and would result in the respective Alarm being triggered. A Threshold is encapsulated within an Alarm as part of the Alarm definition.

Alarm

Alarms are objects that define a condition for the Autoscaler. Each alarm monitors a metric for violations of a defined threshold condition (upper or lower).

When a metric value breaches its defined threshold boundary, the system will trigger an event causing a notification to be sent from the alarm evaluator Red5CloudWatchClient to the Autoscaler component. The alarm may carry additional data about the reporting Node or NodeGroup and an actionTag implying what action should be taken by the Autoscaler.

The current version of Autoscaler defines a single alarm condition targeting the GroupEdgeScaleOut action, which is applied to every NodeGroup in the system; the alarm threshold value can be configured in the Red5 Pro Stream Manager configuration file.

Action

Actions are responses triggered by the Autoscaler due to an Alarm. Actions carry the actual autoscaling logic.

Stream Manager represents Actions using tags, and later maps these tags to appropriate action logic implementations. Each Action is associated with one Alarm at a time.

Sample Action Tags

  • GROUPEDGESCALEOUT: Implies that an edge scale-out operation is required
  • NEWCLUSTERINITIALIZE: Implies that a cluster initialization operation is required
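
The following minimal sketch illustrates how these concepts fit together: a threshold violation on the CONNECTIONS metric triggers an alarm carrying the GROUPEDGESCALEOUT action tag. All class and method names here are illustrative assumptions and do not reflect the actual Red5CloudWatchClient or Autoscaler internals.

// Illustrative sketch only: names and structure are assumptions, not the
// actual Red5CloudWatchClient / Autoscaler implementation.
public class AlarmFlowSketch {

    /** An alarm watching one metric against an upper threshold. */
    static class Alarm {
        final String metric = "CONNECTIONS";     // the only metric the current Autoscaler uses
        final double upperThresholdPercent;      // e.g. 80.0
        final String actionTag = "GROUPEDGESCALEOUT";

        Alarm(double upperThresholdPercent) {
            this.upperThresholdPercent = upperThresholdPercent;
        }
    }

    /** Evaluate a cluster report's connection load against the alarm threshold. */
    static void evaluate(Alarm alarm, String nodeGroup, double connectionLoadPercent) {
        if (connectionLoadPercent > alarm.upperThresholdPercent) {
            // Threshold violation: notify the Autoscaler with the action tag,
            // which it later maps to the scale-out logic for this NodeGroup.
            System.out.printf("ALARM %s on group %s: load %.1f%% > %.1f%%%n",
                    alarm.actionTag, nodeGroup, connectionLoadPercent,
                    alarm.upperThresholdPercent);
        }
    }

    public static void main(String[] args) {
        Alarm alarm = new Alarm(80.0);
        evaluate(alarm, "group-1", 72.5); // below threshold: nothing happens
        evaluate(alarm, "group-1", 85.0); // violation: GROUPEDGESCALEOUT dispatched
    }
}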

BEST ORIGIN SELECTION

Second generation autoscaling also adds an exciting new feature which enables Stream Manager to always estimate the best origin for a broadcast request.

Best origin selection logic uses a node-metric-weight evaluation mechanism to select the best origin. The InstanceMetricsRuleManager component uses predefined metric rules where selected metrics are assigned predefined weights that reflect their importance relative to other metrics. A current weight is evaluated for all origins using their current metric value and overall metric weight. The origin score is the sum of all metric weights, and the origin with the highest score is selected for a broadcast request.

For metric-weight calculations Stream Manager defines certain dynamic metrics such as clientCount, publisherCount and edgeSubscriberCount for an origin. The weight of these dynamic metrics is evaluated at runtime using live statistics received in the cluster report.
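
As a rough illustration of this scoring, the sketch below computes an origin score from the three dynamic metrics using the default weights from the BEST ORIGIN EVALUATION SECTION of red5-web.properties (streams=30, connections=15, subscribers=60), paired with the metrics as in the applicationContext.xml sample in the next subsection. The class and method names are illustrative assumptions, not the actual InstanceMetricsRuleManager implementation.

// Illustrative sketch of the weighted origin scoring described above.
import java.util.List;
import java.util.Map;

public class OriginScoreSketch {

    /** A metric rule: a weight attached to the 100% mark of a LOWERBETTER percentage metric. */
    static class MetricRule {
        final String metricName;
        final double metricWeight; // weight corresponding to full (100%) headroom

        MetricRule(String metricName, double metricWeight) {
            this.metricName = metricName;
            this.metricWeight = metricWeight;
        }
    }

    /**
     * Score one origin: for each LOWERBETTER percentage metric, the remaining
     * weight is metricWeight * (1 - load/100). The origin score is the sum of
     * all remaining weights; the origin with the highest score is selected.
     */
    static double score(List<MetricRule> rules, Map<String, Double> liveMetricsPercent) {
        double total = 0.0;
        for (MetricRule rule : rules) {
            double loadPercent = liveMetricsPercent.getOrDefault(rule.metricName, 0.0);
            total += rule.metricWeight * (1.0 - loadPercent / 100.0);
        }
        return total;
    }

    public static void main(String[] args) {
        // Default weights from the BEST ORIGIN EVALUATION SECTION.
        List<MetricRule> rules = List.of(
                new MetricRule("clientCount", 15),          // instanceevaluator.connections.metricweight
                new MetricRule("publisherCount", 30),       // instanceevaluator.streams.metricweight
                new MetricRule("edgeSubscriberCount", 60)); // instanceevaluator.subscribers.metricweight

        // Live percentages from a cluster report: 40% connection load,
        // 20% publisher load, 50% edge subscriber load.
        double s = score(rules, Map.of(
                "clientCount", 40.0, "publisherCount", 20.0, "edgeSubscriberCount", 50.0));
        System.out.println("origin score = " + s); // 15*0.6 + 30*0.8 + 60*0.5 = 63.0
    }
}

Note how a lightly loaded origin retains more of its total weight: the scoring favors the origin with the most remaining headroom across all three metrics.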

DYNAMIC METRICS EVALUATION

Dynamic metrics are registered in the applicationContext.xml file (RED5_HOME/webapps/streammanager/WEB-INF/applicationContext.xml).

<!-- origin score evaluation metrics and components start -->

<bean id="freeConnectionSlotsMetric" class="com.red5pro.services.streammanager.nodes.metrics.MetricRule">
    <property name="metricName"><value>clientCount</value></property>
    <property name="unit"><value>PERCENTAGE</value></property>
    <property name="minValue"><value>0</value></property>
    <property name="maxValue"><value>100</value></property>
    <property name="direction"><value>LOWERBETTER</value></property>
    <property name="metricWeight"><value>${instanceevaluator.connections.metricweight}</value></property>
</bean>

<bean id="streamCountMetric" class="com.red5pro.services.streammanager.nodes.metrics.MetricRule">
    <property name="metricName"><value>publisherCount</value></property>
    <property name="unit"><value>PERCENTAGE</value></property>
    <property name="minValue"><value>0</value></property>
    <property name="maxValue"><value>100</value></property>
    <property name="direction"><value>LOWERBETTER</value></property>
    <property name="metricWeight"><value>${instanceevaluator.streams.metricweight}</value></property>
</bean>

<bean id="subscriberCountMetric" class="com.red5pro.services.streammanager.nodes.metrics.MetricRule">
    <property name="metricName"><value>edgeSubscriberCount</value></property>
    <property name="unit"><value>PERCENTAGE</value></property>
    <property name="minValue"><value>0</value></property>
    <property name="maxValue"><value>100</value></property>
    <property name="direction"><value>LOWERBETTER</value></property>
    <property name="metricWeight"><value>${instanceevaluator.subscribers.metricweight}</value></property>
</bean>

<bean id="serverMetricsEvaluator" class="com.red5pro.services.streammanager.nodes.component.InstanceMetricsRuleManager">
    <property name="originMetricRules">
        <list value-type="com.red5pro.services.streammanager.nodes.metrics.MetricRule">
            <ref bean="freeConnectionSlotsMetric"></ref>  <!-- dynamically injected metric -->
            <ref bean="streamCountMetric"></ref>          <!-- dynamically injected metric -->
            <ref bean="subscriberCountMetric"></ref>      <!-- dynamically injected metric -->
        </list>
    </property>
</bean>

<!-- origin score evaluation metrics and components end -->

Each of the three dynamic metrics is expressed as a percentage. The weight value supplied during initialization corresponds to the maximum (100%). As the load of the origin or its edges increases, the overall weight decreases. This is then compared with the overall weight of other origins to select the best origin. To exclude a dynamic metric from the origin score calculation, omit the metric from the originMetricRules list property in the serverMetricsEvaluator bean.

Generally you do not need to edit anything in the applicationContext file. You can control most of the required settings from the red5-web.properties file.

Refer to the BEST ORIGIN EVALUATION SECTION in red5-web.properties ({red5prohome}/webapps/streammanager/WEB-INF/red5-web.properties).
The different properties of the BEST ORIGIN EVALUATION SECTION are explained in detail later in this document.

LOAD BALANCED STREAMMANAGER

The Stream Manager application now supports load balancing. This means that in anticipation of heavy traffic, you can set up more than one Stream Manager behind the cloud platform's load balancer service. This ensures that traffic requests (broadcast / subscribe) are evenly distributed between multiple Stream Manager instances to prevent flooding of requests on one instance.

Multiple Stream Managers still interact with a single shared database. So if your traffic needs are high, make sure to set up a higher-configuration database instance to host the RDS.

Since the time synchronization between multiple Stream Managers is based on UTC, it is important that the system clock of each Stream Manager instance be accurate and managed by a reliable NTP service. The administrator must set up the NTP service on the VM instance prior to setting up the streammanager application.


Locating The Stream Manager Application

The Stream Manager application comes packaged with your Red5 Pro distribution. You can locate streammanager in the webapps directory of your Red5 Pro installation directory: {red5prohome}/webapps/streammanager. Configurable files can be located inside the WEB-INF folder: {red5prohome}/webapps/streammanager/WEB-INF.

NOTE: Any changes made to Stream Manager configuration files will require restarting the Red5 Pro server service.

Stream Manager Configurable Properties

red5-web.properties Configuration File

Properties given below can be configured in the Stream Manager configuration file red5-web.properties located at: {streammanager}/WEB-INF/red5-web.properties.

DATABASE CONFIGURATION SECTION

config.dbHost={host}
config.dbPort=3306
config.dbUser={username}
config.dbPass={password}

config.dbHost

Configures the database host IP Address for Red5 Pro Stream Manager. This property is required for proper functioning of the Stream Manager's database operations.

config.dbPort

Configures the database port for Red5 Pro Stream Manager. This property is required for proper functioning of Stream Manager's database operations. The default value for this property is 3306 since MySQL servers are configured to run on 3306.

config.dbUser

Configures the database username for Red5 Pro Stream Manager. This property is required for proper functioning of the Stream Manager's database operations. This must correspond to your database account access credentials.

config.dbPass

Configures the database password for Red5 Pro Stream Manager. This property is required for proper functioning of Stream Manager's database operations. This must correspond to your database account access credentials.

NODE CONTROLLER CONFIGURATION SECTION

instancecontroller.newNodePingTimeThreshold=150000
instancecontroller.replaceDeadClusters=true
instancecontroller.deleteDeadGroupNodesOnCleanUp=true
instancecontroller.instanceNamePrefix=node
instancecontroller.nodeGroupStateToleranceTime=180000
instancecontroller.nodeStateToleranceTime=180000
instancecontroller.cloudCleanupInterval=180000
instancecontroller.blackListCleanUpTime=600000

instancecontroller.newNodePingTimeThreshold

Configures the maximum expected ping time of a newly launched Red5 Pro instance, in milliseconds. This time takes into account the time required for instance startup and Red5 service boot-up. New nodes that are unable to ping the Stream Manager within the expected time (newNodePingTimeThreshold) are assumed to be out of service or dead.

instancecontroller.replaceDeadClusters

Configures whether to replace a dead Node Group with a new one or not. A Node Group is considered dead by the Stream Manager if its origin hasn't pinged Stream Manager for a long time (governed by instancecontroller.nodePingTimeThreshold).

  • If this property is set to true, Stream Manager replaces a dead Node Group with a new one with the same group configuration.
  • Setting this property to false ensures that dead clusters are cleaned up from the system without any replacement.

instancecontroller.deleteDeadGroupNodesOnCleanUp

Configures whether termination of nodes belonging to a dead Node Group implies permanently deleting the instances on the cloud platform or merely stopping them. A node group is considered dead by Stream Manager if its origin hasn't pinged Stream Manager for a long time (governed by instancecontroller.nodePingTimeThreshold).

  • If this property is set to true, Stream Manager ensures that instances of the dead Node Group are permanently deleted from the cloud platform.
  • Setting this property to false means that cloud instances of a dead Node Group will be stopped and not permanently deleted from the cloud platform.

instancecontroller.instanceNamePrefix

Configures the name that will be prepended to any automatically created nodes. The Stream Manager uses this to search for and remove any nodes that have been stopped or are not communicating with the Stream Manager and need to be removed and/or replaced. If you are hosting multiple autoscaling environments within a hosting platform (a single Google Compute Engine project, for example), it is critical that this name be unique across solutions: if a Stream Manager detects a host with the correct naming prefix that is not in its database, it will shut down that node.

instancecontroller.nodeGroupStateToleranceTime

Configures the net time (in milliseconds) to wait before declaring a node group as stranded. A stranded node group is commonly a group which is in a TERMINATING state for longer than instancecontroller.nodeGroupStateToleranceTime milliseconds.

instancecontroller.nodeStateToleranceTime

Configures the net time (in milliseconds) to wait before declaring a node as stranded. A stranded node is commonly a node which is in a state other than INSERVICE state for longer than instancecontroller.nodeStateToleranceTime milliseconds.

instancecontroller.cloudCleanupInterval

Configures the net time (in milliseconds) between runs of the CloudSweeper job. This is a process that checks for nodes running on the cloud platform that were spawned by the current Stream Manager but are not registered in the database.

instancecontroller.blackListToleranceTime

Configures the net time (in milliseconds) to tolerate a CloudSweeper-detected unwanted instance in a temporary blacklist buffer before it is terminated.

instancecontroller.blackListCleanUpTime

Configures the net time (in milliseconds) after which the CloudSweeper blacklist is cleaned up. Blacklist cleanup accounts for any instance that was detected once but never detected again. This may happen if a user manually deletes the instance on the cloud platform.

BEST ORIGIN EVALUATION SECTION

instanceevaluator.streams.metricweight=30
instanceevaluator.connections.metricweight=15
instanceevaluator.subscribers.metricweight=60

instanceevaluator.streams.metricweight

Refers to a weight value attached to the maximum stream count percentage on an origin (100). The weight of the current metric value is evaluated dynamically at runtime. The stream count metric is internally evaluated as a percentage. This helps select an origin with the minimum number of broadcast streams.

instanceevaluator.connections.metricweight

Refers to a weight value attached to the maximum connection count percentage on an origin (100). The weight of the current metric value is evaluated dynamically at runtime. This helps select an origin with the minimum number of connections.

instanceevaluator.subscribers.metricweight

Refers to a weight value attached to the net subscriber percentage on all the edges combined for an origin (100). The weight of the current metric value is evaluated dynamically at runtime. This helps select an origin with the minimum number of subscribers on its edges.

CLUSTER CONFIGURATION SECTION

cluster.password=changeme
cluster.publicPort=1935
cluster.accessPort=5080
cluster.reportingSpeed=10000
cluster.retryDuration=30

cluster.password

The cluster password required for dynamic clustering of nodes. This property can also be found in the {RED5_HOME}/conf/cluster.xml file and the values must match. The cluster password value defaults to changeme. Stream Manager uses the password to authenticate before making cluster API calls to nodes.

cluster.publicPort

The public RTMP port used by the cluster nodes to communicate with each other internally. This property can also be found in the {RED5_HOME}/conf/cluster.xml file and the values must match.

cluster.accessPort

The public HTTP port over which the Red5 Pro node can be accessed publicly. This property can also be found in the {RED5_HOME}/conf/cluster.xml file and the values must match.

cluster.reportingSpeed

The time period at which a clustered node repeatedly dispatches a statistics report to the Stream Manager. The report contains the clustering relationship and load statistics information. The reportingSpeed parameter is internally used by Stream Manager to evaluate values for other properties. The value is expressed in milliseconds. This property can also be found in the {RED5_HOME}/conf/cluster.xml file and the values must match.

cluster.retryDuration

The time period for which a clustered child node tries to reconnect with its parent (if connectivity is lost). This property can also be found in the {RED5_HOME}/conf/cluster.xml file and the values must match.


LOADBALANCING CONFIGURATION

streammanager.ip=

streammanager.ip

The IP address of the current Stream Manager instance. When deploying multiple Stream Managers with a load balancer, each instance should define its own IP here. This is an optional parameter for a single Stream Manager deployment.

CLOUD CONTROLLER CONFIGURATION SECTION

GOOGLE COMPUTE CONFIGURATION SECTION

compute.project={project-id}
compute.defaultzone={zone-id}
compute.defaultdisk=pd-standard
compute.operationTimeoutMilliseconds=200000

compute.project

Configures the Google Compute project id under your Google Cloud Platform account. Your Google Compute resources are managed within the project scope. To learn more about Google Cloud projects, see the official documentation online at: https://cloud.google.com/compute/docs/projects.

compute.defaultzone

Configures the default zone of your Google Cloud project. Every project on Google Cloud Platform is associated with a default zone. For more information on the default zone, see the official Google page: https://cloud.google.com/compute/docs/projects#default_region_and_zone.

compute.defaultdisk

Configures the default diskType to use for the google compute instances. The value for this property must always remain as pd-standard.

compute.operationTimeoutMilliseconds

Configures the default timeout in milliseconds for cloud instance operations. A new cloud instance startup or termination failing to complete within the given period is considered as a failure.

AWS CLOUD CONTROLLER CONFIGURATION SECTION

aws.defaultzone={default-region}
aws.operationTimeoutMilliseconds={operation-timeout}
aws.accessKey={account-accessKey}
aws.accessSecret={account-accessSecret}
aws.ec2KeyPairName={keyPairName}
aws.ec2SecurityGroup={securityGroupName}
aws.defaultVPC={boolean}

aws.defaultzone

A default availability zone in a preferred region. This works as a fallback launch location for your instances if automatic availability zone evaluation fails.

aws.operationTimeoutMilliseconds

Maximum time allowed for completing a cloud instance operation before the operation times out and is assumed to have failed. The recommended value is 120000. The unit is milliseconds.

aws.accessKey

Your accessKey from the AWS account credentials that you created earlier. [It is recommended to use IAM credentials instead of root ones.]

aws.accessSecret

Your accessSecret from the AWS account credentials that you created earlier. [It is recommended to use IAM credentials instead of root ones.]

aws.ec2KeyPairName

Name of the public key you imported into your AWS EC2 dashboard under "Key Pairs". [Your key should be imported into every region that you wish to use.]

aws.ec2SecurityGroup

Name of the security group you created earlier in your EC2 dashboard under Security Groups. [Your security group should be available in every region that you wish to use for launching an instance.]

aws.defaultVPC

Indicates whether the security group name mentioned in the "aws.ec2SecurityGroup" parameter is associated with a default (true) or non-default (false) VPC. The AWS platform has different requirements for launching an instance in a default VPC vs a non-default VPC, hence it is important to indicate what type of VPC you are using.

Autoscaler SCALE POLICY SECTION - MINUTES

scalepolicy.store=scalingpolicies
scalepolicy.filescandelay=60

scalepolicy.store

Configures the directory name from which Stream Manager loads scale policy files. This folder must exist inside the {streammanager}/WEB-INF/ directory. Stream Manager loads policy files that are found inside this directory.

scalepolicy.filescandelay

Configures the time period - in minutes - at which Stream Manager reloads scale policy files. This can be useful if you wish to make changes to your scale policy file (default.json) located inside the {streammanager}/WEB-INF/{scalepolicy.store} directory without restarting the streammanager application.

NOTE: Do not edit the scale policy once your NodeGroups have one or more active servers. Changing scale policy after servers are active may lead to undesirable behavior.

LAUNCH CONFIGURATION SECTION - MINUTES

launchconfig.store=launchconfigs
launchconfig.filescandelay=60

launchconfig.store

Configures the directory name for storing launch configuration files. This folder must exist inside the {streammanager}/WEB-INF/ directory. Stream Manager loads configuration files that are found inside this directory.

launchconfig.filescandelay

Configures the time period - in minutes - at which Stream Manager reloads configuration files from the file system. This can be useful if you wish to make changes to your launch configuration file(s) located inside the {streammanager}/WEB-INF/{launchconfig.store} directory without restarting the streammanager application.

NOTE: Do not edit the launch configurations once your NodeGroups have one or more active servers. Changing launch configuration after servers are active may lead to undesirable behavior.

AUTOSCALING MANAGEMENT SECTION

# Time needed in milliseconds for a scaled node to be effective at cluster - 3 x clusterPlugin Reporting Time

autoscale.scaleout.mode=competitive

autoscale.scaleout.mode

Configures the operation mode for autoscaling in scale-out operations. Autoscaler supports two modes of operation: the traditional (rigid) mode and the flexible competitive mode.

REST SECURITY SECTION

rest.administratorToken=xyz123

rest.administratorToken

Configures the administrator's security token for making administrative REST API calls. This protects your REST gateway from unauthorized use. The default value of this token is set to xyz123. Make sure to change this to a more secure alphanumeric string before using Red5 Pro Stream Manager in production.
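
For example, an administrative call would carry the token as a query parameter. The endpoint below is illustrative only; refer to the Red5 Pro Stream Manager API document for the exact administrative endpoints and their parameters.

http://<streammanager-ip>:5080/streammanager/api/1.0/admin/nodegroup?accessToken=xyz123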


Launch Configuration Properties

Properties given below can be configured in each launch configuration JSON file of Stream Manager. The default location for launch configuration files is: {streammanager}/WEB-INF/launchconfigs

sample launch configuration, default.json

{
  "launchconfig": {
    "name": "default",
    "description": "This is a default google compute instance configuration",
    "image": "red5pro",
    "instanceType": "n1-standard-2",
    "connectionCapacity": "1000",
    "properties": {
      "property": [
        {
          "name": "network",
          "value": "default"
        }
      ]
    },
    "metadata": {
      "meta": [
        {
          "key": "role",
          "value": "unassigned"
        }
      ]
    }
  }
}

name

Configures the name of a launch configuration. This should be the same as the filename that the configuration is stored in.

description

Configures a brief description of your configuration.

image

Configures the Red5 Pro image name for a launch configuration. This is the image that will be used to launch a new Red5 Pro instance.

instanceType

Configures the cloud platform machine type for a launch configuration. This dictates the machine configuration to use for launching a new Red5 Pro instance. You can get more information about Google Compute machine types and their hardware configurations from https://cloud.google.com/compute/docs/machine-types.

connectionCapacity

Configures the maximum estimated connection capacity of a Red5 Pro instance. This directly conveys the maximum number of users that can be connected to the instance at any given time. NOTE: Once the number of connections reaches the configured maximum, the server will no longer allow additional connections until some connections have dropped.

properties

Configures an arbitrary number of properties for this launch configuration. This is not to be edited as it is reserved for development use.

metadata

Configures an arbitrary number of metadata items for this launch configuration. This is not to be edited as it is reserved for development use.


Scale Policy Properties

Properties given below can be configured in the default policy file - default.json - of Stream Manager. The default location for scale policy files is: {streammanager}/WEB-INF/scalepolicies

sample scale policy default.json

{
    "policy": {
        "name": "default",
        "description": "default scale policy",
        "version": "0.0.1",
        "type": "com.red5pro.autoscaling.policy.SimpleScaling",
        "origin": {
            "_required": "false"
        },
        "edge": {
            "coolDownPeriod": "180000",
            "estimatedWarmUpTime": "120000",
            "minLimit": "1",
            "maxLimit": "10",
            "scaleAdjustment": "1",
            "_required": "true"
        }
    }
}

name

Configures the name of a scale policy. This should be the same as the filename that the policy is stored in.

NOTE: For the current version of Stream Manager there is only one scale policy file called default.json. This filename should not be changed.

description

Configures a brief description of your policy file.

version

Not to be edited. Reserved for development use only.

type

Not to be edited. Reserved for development use only.

coolDownPeriod

Configures the cooldown period - in milliseconds - for a scale policy. This is how long autoscaling ignores cluster alarms after an autoscale scale-out operation. It is evaluated as:

instance warm up time + time taken to attach an edge to an origin

For example, the sample policy above uses an estimated warm-up time of 120000 ms, leaving 60000 ms for edge attachment within its 180000 ms cooldown.

estimatedWarmUpTime

Configures the estimated time taken by an instance to start up (hardware & OS) plus the Red5 Pro service boot-up time.

minLimit

Not to be edited. Reserved for development use only.

maxLimit

Configures the maximum number of instances that Autoscaler is allowed to add to a cluster. This is useful if staying within a budget is more important than being able to provide service to every subscriber request.

scaleAdjustment

Not to be edited. Reserved for development use only.

_required

Not to be edited. Reserved for development use only.


Setting Up A New Cluster for Streaming

To start streaming operations you need to have a minimum of one active cluster (NodeGroup). As described before, a cluster will have a minimum of one edge and a maximum of one origin at any given time for conducting a successful streaming event.

To create a new cluster you need to have access to Stream Manager’s REST API gateway and the Red5 Pro Stream Manager API.

Following are the steps to setup a new cluster (NodeGroup) of Red5 Pro instances.

1. Create a new NodeGroup

Use the CREATE GROUP REST API call to create a new virtual group name (placeholder) which will define a collection of one origin and N edges. Note the group name from the REST response.

2. Launch a new origin in the Node Group

Use the LAUNCH NEW ORIGIN REST API call to start a new origin instance in the group created in step 1. (Illustrative request shapes for both administrative calls appear after this walkthrough.)

  • The instance normally takes around 105 seconds (close to 2 minutes) to be initialized and active. This includes the time needed for the machine to start up and the time required for the Red5 Pro service to be running.

After the origin is ready it contacts Stream Manager over HTTP to discover its role (origin). At this point Stream Manager checks how many edges are in the group. If there are no edges, the Autoscaler component will initialize a new edge node for the group. The newly launched edge follows the same node lifecycle as the origin and requires about 2 minutes to be ready for use. After the edge is ready it contacts Stream Manager over HTTP to discover its role (edge). At this point Stream Manager groups the edge with the origin so that streams published to the origin can reach each edge for subscription.

Your cluster is now ready for use with one edge and one origin in it. Stream Manager will add more edges to the group using Autoscaler when it sees the traffic load increasing beyond the configured Threshold.

Once your NodeGroup is ready you can use it for publish and subscribe operations.
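
For illustration only, the two administrative calls from steps 1 and 2 take roughly the following shape, written in the request-format convention used later in this guide. The paths shown here are assumptions; consult the Red5 Pro Stream Manager API document for the authoritative endpoint definitions.

POST http://<streammanager-ip>:5080/streammanager/api/1.0/admin/nodegroup?accessToken=<rest.administratorToken>

POST http://<streammanager-ip>:5080/streammanager/api/1.0/admin/nodegroup/<group-name>/node/origin?accessToken=<rest.administratorToken>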


Consuming Services As a Broadcaster

Once you have an active cluster for streaming the next step is for a client publisher to start broadcasting to it.

The problem here is that the broadcaster may not know the IP address of the origin server that they need to broadcast their stream to. This is where Stream Manager comes into play again.

Broadcaster Stream Request

Stream Manager provides a public REST endpoint that provides stream information to broadcasters and subscribers based on the requester's role. The broadcaster client will need to make a REST call to Stream Manager using a scope name, stream name and the action parameter (broadcast).

  • Scope name is the Red5 application / context name that a client connects to. The default scope name to be used is live
  • Stream name is the name of the publishing stream
  • Action is a query string parameter which defines the request type: broadcast or subscribe

Requesting Stream Broadcast Information

Use Stream Manager's READ STREAM REST API to request an origin server.

The combination of scope name and stream name should be unique for each streaming event.

Request Format

http://<streammanager-ip>:5080/streammanager/api/1.0/event/live/<stream-name>?action=broadcast

Response Format

{
  "name": "<stream-name>",
  "scope": "/live",
  "serverAddress": "<origin-host>",
  "availabilityZone": "<availability-zone-code>"
}
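
As a minimal sketch of how a broadcaster client might consume this endpoint, the Java fragment below issues the READ STREAM request and extracts the serverAddress field from the response. The host, stream name, and the naive string-based JSON handling are illustrative assumptions; production code should use a proper JSON parser.

// Minimal sketch: fetch broadcast endpoint info from Stream Manager.
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BroadcastLookupSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Assumptions: replace with your Stream Manager host and stream name.
        String streamManager = "sm.example.com";
        String streamName = "mystream";
        URI uri = URI.create("http://" + streamManager
                + ":5080/streammanager/api/1.0/event/live/" + streamName + "?action=broadcast");

        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(uri).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Expected body (see Response Format above):
        // {"name":"...","scope":"/live","serverAddress":"...","availabilityZone":"..."}
        String body = response.body();
        String marker = "\"serverAddress\":\"";
        int start = body.indexOf(marker) + marker.length();
        String originHost = body.substring(start, body.indexOf('"', start));
        System.out.println("Publish to origin: " + originHost);
    }
}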

Connecting To Server To Broadcast

Having received a successful response from Stream Manager with stream broadcast information, we can now publish the stream using a publisher client.

The stream publish information received from the REST response can be used by different types of publisher clients as shown below.


Android Client

Property Name    Value
host             <origin-host> (the serverAddress from the REST response)
context          live
port             8554
stream           <stream-name>

iOS Client

Property Name    Value
host             <origin-host> (the serverAddress from the REST response)
context          live
port             8554
stream           <stream-name>

Flash Client

Property Name    Value
Connection URL   rtmp://<origin-host>:1935/live
stream           <stream-name>

Consuming Services As a Subscriber

If you are coming from the previous section then you have a working Red5 Pro cluster in place with at least one stream publishing to it.

Now that the stream is publishing and available, subscriber clients may want to subscribe to it. However, they don't know the host address of the edge that they should subscribe to.

Subscriber Stream Request

Once again Stream Manager comes to the rescue by providing stream information to the subscriber client via a REST API call. The subscriber client will need to make a READ STREAM REST call to Stream Manager using a scope name, stream name and the action parameter (subscribe).

  • Scope name is the Red5 application / context name that a client connects to. The default scope name to be used is live
  • Stream name is the name of the publishing stream
  • Action is a query string parameter which defines the request type: broadcast or subscribe

Requesting Stream Subscribe Information

Use Stream Manager's READ STREAM REST API to request an edge server.

The combination of scope name and stream name should be unique for each streaming event.

Request Format

http://<streammanager-ip>:5080/streammanager/api/1.0/event/live/<stream-name>?action=subscribe

Response Format

{
  "name": "<stream-name>",
  "scope": "/live",
  "serverAddress": "<edge-host>",
  "region": "<origin-compute-region-code>"
}

The origin server hosts a Java servlet providing cluster information, which is exposed at the URI http://<origin-ip>:5080/cluster. Stream Manager internally requests clustering info from the origin and provides the requester with the edge address to use in the REST response.

Additional Information on Red5 Pro Clusters: https://www.red5pro.com/docs/server/clusters/


Connecting To Server To Subscribe

Having received a successful response from Stream Manager with stream subscribe information we can now consume the stream using a subscriber client.

The stream subscribe information received from the REST response can be used by different types of subscriber clients as shown below.


Android Client

Property Name    Value
host             <edge-host> (the serverAddress from the REST response)
context          live
port             8554
stream           <stream-name>

iOS Client

Property Name    Value
host             <edge-host> (the serverAddress from the REST response)
context          live
port             8554
stream           <stream-name>

Flash Client

Property Name    Value
Connection URL   rtmp://<edge-host>:1935/live
stream           <stream-name>

Notes

  • You cannot make a subscribe request for a stream if it is not publishing
  • If a stream stops publishing the subscribers must re-approach Stream Manager for stream details

Additional Documentation

The following documents also refer to Autoscaling and the Red5 Pro Stream Manager:

> Autoscaling Streams with Red5 Pro

> Red5 Pro Stream Manager API

> Deploying Stream Manager and Autoscaling on Google Cloud Compute

> Deploying Stream Manager and Autoscaling on Amazon Web Services (AWS)