
Troubleshooting


  1. Simulated Cloud uses a local SQLite database to store managed instance provisions. This file is generated in the WEB-INF folder, under a directory named for the controller, e.g.: RED5_HOME/webapps/streammanager/WEB-INF/Simulated-Cloud-Platform/hostednodes.sqlite. If needed, this file can be edited with any SQLite editor, such as the SQLite Manager Firefox add-on.
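If you prefer the command line to a GUI editor, a short script can inspect the file. The sketch below uses Python's standard sqlite3 module and makes no assumptions about the (undocumented) table layout of hostednodes.sqlite; it simply lists every table and its rows.

```python
import sqlite3

def dump_provision_db(path):
    """List every table in the controller's SQLite file and its rows.

    Schema-agnostic on purpose: the internal layout of hostednodes.sqlite
    is not documented, so we enumerate tables via sqlite_master.
    """
    conn = sqlite3.connect(path)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        return {t: conn.execute("SELECT * FROM %s" % t).fetchall()
                for t in tables}
    finally:
        conn.close()

# Example (path as described above):
# dump_provision_db("RED5_HOME/webapps/streammanager/WEB-INF/"
#                   "Simulated-Cloud-Platform/hostednodes.sqlite")
```

Prefer read-only inspection; editing rows by hand can desynchronize the controller's view of the provision pool.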
  2. Instances need to have the Red5 Pro service running before they can be added to the simulated cloud environment via the Provisioning API.
  3. By default when a node is added it is considered to be in an available state in the provision pool. When Stream Manager requests a new instance, the controller will provide a node from this list.
  4. When Stream Manager scales down an instance, it is automatically reset and added back to the provision list for reuse.
  5. When Stream Manager requests deletion of a node, the controller checks the managed.recycleDeadNode flag. If it is set to true, the controller tries to reprovision the instance, provided the node service is reachable. If the node cannot be reprovisioned, it is set to a TERMINATED state.
  6. To reuse a TERMINATED instance, you must manually reboot the remote node server and then use the RESET API call to return the TERMINATED node to an available RUNNING state.
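Items 3 through 6 describe a small state machine. The sketch below is an illustrative model of that lifecycle, not the controller's actual code; the state names mirror the ones used above.

```python
# Illustrative model of the node lifecycle from items 3-6 above.
AVAILABLE, IN_USE, TERMINATED = "AVAILABLE", "IN_USE", "TERMINATED"

class SimulatedNode:
    def __init__(self, host):
        self.host = host
        self.state = AVAILABLE          # item 3: new nodes join the pool as available

    def assign(self):                   # Stream Manager requests an instance
        assert self.state == AVAILABLE
        self.state = IN_USE

    def scale_down(self):               # item 4: node is reset and returned to the pool
        self.state = AVAILABLE

    def delete(self, recycle_dead_node, reachable):
        # item 5: with managed.recycleDeadNode=true, a reachable node is
        # reprovisioned; otherwise it is marked TERMINATED
        if recycle_dead_node and reachable:
            self.state = AVAILABLE
        else:
            self.state = TERMINATED

    def reset(self):                    # item 6: manual reboot + RESET API call
        assert self.state == TERMINATED
        self.state = AVAILABLE
```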
  7. Stream Manager logs a warning message when a node pings it but does not yet exist in the system. With the Simulated Cloud environment you may see many of these warnings, because the instances are already running but may not yet be in use by Stream Manager.
  8. VERY IMPORTANT: If you have more than one autoscaling environment running, the instancecontroller.instanceNamePrefix= property must be set to a completely unique value for each environment; otherwise, the first Stream Manager may terminate the nodes belonging to the second group. For example, if one environment has instancecontroller.instanceNamePrefix=node and another has instancecontroller.instanceNamePrefix=node123, the node123 instances will be terminated by the first Stream Manager: it identifies them as starting with node, and therefore as instances it should be managing, which are not present in its database. Find the instancecontroller.instanceNamePrefix= entry in {red5pro}/webapps/streammanager/WEB-INF/red5-web.properties and edit it accordingly.
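The collision above comes down to a plain name-prefix match. The snippet below is an illustrative reconstruction of that check, not Stream Manager's actual code; it shows why "node" also claims every "node123" instance, while disjoint prefixes stay safe.

```python
def claimed_by(instance_name, name_prefix):
    """Illustrative reconstruction: a Stream Manager treats any instance
    whose name starts with its configured
    instancecontroller.instanceNamePrefix as one of its own."""
    return instance_name.startswith(name_prefix)

print(claimed_by("node123-origin-1", "node"))     # True  -> conflict: wrong owner matches
print(claimed_by("node123-origin-1", "node123"))  # True  -> the intended owner
print(claimed_by("stream5-edge-2", "node"))       # False -> disjoint prefixes are safe
```

The practical rule: no environment's prefix may be a leading substring of another environment's prefix.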
  9. If you have created a new nodegroup, but adding an Origin server doesn't automatically initiate an accompanying Edge server, check the following on your server instance:
  10. Verify that red5pro/conf/autoscale.xml in your Red5 Pro server image points to the IP address of your Stream Manager (replace 0.0.0.0 in <property name="cloudWatchHost" value="http://0.0.0.0:5080/streammanager/cloudwatch"/>); also make sure that you change the active value from the default “false” to “true” (<property name="active" value="true"/>).
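Putting the two edits from this step together, the relevant lines of conf/autoscale.xml should end up looking like this (10.0.0.1 stands in for your Stream Manager's address; the surrounding bean definition is omitted):

```xml
<!-- conf/autoscale.xml: the two values to change -->
<property name="active" value="true"/>
<property name="cloudWatchHost" value="http://10.0.0.1:5080/streammanager/cloudwatch"/>
```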
  11. Check {red5pro}/conf/cluster.xml - make sure that the password set for <!-- edge/origin link cluster password --> matches the cluster.password= value in the Stream Manager's red5-web.properties file.
  12. Make sure that you have your LICENSE.KEY file in the {red5pro} root directory; clustering (and in turn, autoscaling) will not work without a valid license key.
  13. If all of these check out, start up the Red5 Pro service on your AMI instance and tail the Stream Manager's {red5pro}/log/red5.log file - you should see messages about an instance pinging the Stream Manager. If you don't, check for inbound port access to your Stream Manager (or outbound port access from your node security group).
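Before digging into security-group rules, a quick TCP check from the node can confirm whether the Stream Manager port is reachable at all. This is a generic connectivity probe using Python's standard socket module; the host and port below are placeholders for your own Stream Manager address.

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds -- a quick stand-in for verifying that traffic
    from the node can reach the Stream Manager."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from a node; 10.0.0.1 is a placeholder for your Stream Manager:
# port_reachable("10.0.0.1", 5080)
```

If this returns False from the node but the service is up, the block is in the network path (node outbound rules or Stream Manager inbound rules), not in Red5 Pro itself.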
  14. If you have any issues with, or are updating anything in, your environment, restart your instances in the following order:
      1. Restart the ORIGIN server(s) first.
      2. Then restart the EDGE server(s).
      3. Finally, restart the STREAM MANAGER last.