Why You Should Use a Blue-Green Deployment Strategy

What is the best approach to setting up your testing environment? You want to conduct a full test of all your features, confirm expected behavior, and identify any bugs before releasing your application or updates live to your consumers. Poor testing can lead to a poor user experience, which can have a disastrous impact on the longevity and profitability of your application.

However, if you already have an application running, you can’t just shut it off every time you want to add a new feature. This is where a Blue-Green deployment comes into play.

The structure for a Blue-Green deployment employs two identical environments: Production and QA (quality assurance). Production and QA are just the terms we use here, and there are plenty of perfectly acceptable variations on these names. Using “Staging” as the name for your QA environment, for example, is quite common.

The Production environment serves the users of your system; it is the server infrastructure that your customers and clients connect to. Separate from Production is the QA environment, which gives you the flexibility to develop new versions of your software, optimize or streamline code, or add new features. Since the QA environment is not connected to your users in any way, you can fully test, break your application, and find the edges of its capabilities without any service interruption. After the initial round of development, the new update is deployed in the QA environment and undergoes testing until the QA process is complete. It is then ready for a public release.

At this point, the traffic can be routed from the Production environment to the QA environment, and the two environments eventually swap roles. The former Production environment becomes the new QA environment and vice-versa.

Let’s dig deeper into how that transition works using a live video streaming example leveraging autoscaled Red5 Pro clusters. In this case, we’ve set up two nearly identical environments, with DNS configured to point a different domain name at each. Let’s call those domains production.red5.org and qa.red5.org. To be specific to Red5 Pro, these two domains point to the Stream Managers in their respective environments.
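
To make that setup concrete, here is a minimal sketch of how the two-environment configuration might look in your tooling code. The two domains come from the example above; the structure, function name, and role descriptions are purely illustrative.

```python
# A minimal sketch of the two-environment setup described above.
# The domain names come from the example; everything else is illustrative.

ENVIRONMENTS = {
    "production": {
        "stream_manager": "https://production.red5.org",
        "role": "serving live traffic",
    },
    "qa": {
        "stream_manager": "https://qa.red5.org",
        "role": "testing the next release",
    },
}

def stream_manager_url(environment: str) -> str:
    """Return the Stream Manager base URL for the given environment."""
    return ENVIRONMENTS[environment]["stream_manager"]

# Client and deployment tooling should read the environment name from
# configuration so that a cutover only requires changing a single value.
print(stream_manager_url("qa"))  # -> https://qa.red5.org
```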

Let’s now say that you’ve fully tested your changes to the QA environment, and you are ready to deploy them to production. Upon switching environments all incoming traffic must be rerouted to the QA environment. We do this by flipping where the production DNS points. Once the DNS has propagated to point to the updated environment, new connections will point to the updated code, and the existing sessions will eventually terminate on the old production environment. As this happens, the front end will update so that new publish and subscribe requests are dispatched to the QA environment. Eventually, only the QA environment will have ongoing sessions, thus making it the new production environment. At this point, it is safe to take down the old Production environment so it may be used as a new QA environment for future development.
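
The DNS flip itself is just a record update with your DNS provider. As a hedged sketch, assuming the records live in AWS Route 53 (other providers expose equivalent APIs), the cutover might look like the following; the hosted zone ID and target hostname are placeholders.

```python
# A sketch of the production DNS flip, assuming AWS Route 53 hosts the records.
# The hosted zone ID and target hostname below are placeholders.
import boto3

route53 = boto3.client("route53")

def point_production_at(target_hostname: str, hosted_zone_id: str) -> None:
    """Repoint production.red5.org at the Stream Manager of the updated environment."""
    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            "Comment": "Blue-Green cutover: route production traffic to the updated cluster",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "production.red5.org.",
                    "Type": "CNAME",
                    "TTL": 60,  # a short TTL keeps propagation time low during cutovers
                    "ResourceRecords": [{"Value": target_hostname}],
                },
            }],
        },
    )

# Example: send production traffic to the Stream Manager that was serving QA.
point_production_at("qa.red5.org", hosted_zone_id="ZEXAMPLE123")
```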

Figure 1: Simplified diagram showing the transition from the current production environment to the QA environment which will become the new production environment.


This seamless transition of a Blue-Green deployment means your users will switch over to the new version without any interruptions. Besides the actual updates and new features themselves, they won’t really see a difference.

Blue-Green deployment has other advantages as well: greater reliability and less downtime. Moreover, when the QA environment is not being used to test new functionality, it can serve as an exact backup copy of the production environment. That way, if anything happens to the production environment, users can be rerouted to the QA environment.

Now that we have a fundamental understanding of the Blue-Green deployment strategy, we can dive deeper into how it applies to Red5 Pro installations.

Let’s examine a common setup for Red5 Pro: an autoscale setup with a load balancer in front of the Stream Manager(s). When installing the latest upgrade, a secondary autoscale cluster is created that might also include all of the application’s custom code. The secondary cluster will be the Blue environment and the production environment will be the Green environment. Blue will have a dedicated load balancer with a unique domain name, with its DNS configured to point to its own dedicated Stream Manager. The update is then installed on the Blue environment, updating the Stream Manager(s), Origin(s), Relay(s), and Edge(s).

Once that is completed, the client-side code is pointed to the Blue domain, and it will automatically start using the new system. Of course, if the Red5 Pro Mobile SDK has also released an update, then the corresponding mobile app may need to be separately updated as well.
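
Whatever language the client is written in, the practical takeaway is to keep the Stream Manager host in configuration rather than hard-coded, so pointing clients at the Blue domain is a one-value change. Below is a rough sketch of that idea; the environment variable name, default host, and endpoint path are all assumptions made for illustration.

```python
# Illustrative only: read the Stream Manager host from configuration (here an
# environment variable) so switching from the Green domain to the Blue domain
# is a configuration change rather than a code change. The variable name,
# default host, and endpoint path are assumptions.
import os

STREAM_MANAGER_HOST = os.environ.get(
    "STREAM_MANAGER_HOST",
    "green-streammanager.example.com",  # current production (Green) by default
)

def connection_endpoint(stream_name: str) -> str:
    """Build the endpoint the client connects to for a given stream (illustrative path)."""
    return f"https://{STREAM_MANAGER_HOST}/live/{stream_name}"

# Cutover: set STREAM_MANAGER_HOST=blue-streammanager.example.com and redeploy
# the client configuration; no application code changes are required.
print(connection_endpoint("demo"))
```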

The duplicate server-side environments ensure that the original system (Green) continues to run untouched, while the secondary system (Blue) is tested for any irregularities. Once any bugs have been addressed and everything is working as expected, the primary DNS is pointed to the Blue load balancer, which in turn points to the updated Stream Manager. At that stage, it is safe to shut down the now-deprecated system. The old Blue environment then becomes the new Green.
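
Before shutting down the deprecated system, it is worth confirming that its remaining sessions have actually drained. The sketch below shows one way to guard the teardown; the /connections endpoint and response shape are hypothetical stand-ins for whatever statistics your Stream Manager or monitoring stack exposes.

```python
# A hedged sketch of a teardown guard: keep the old environment alive until its
# existing sessions have drained. The /connections endpoint and the "active"
# field are hypothetical; substitute your own statistics or monitoring API.
import json
import time
import urllib.request

def active_sessions(stream_manager_host: str) -> int:
    """Return the number of live sessions reported by the old environment (hypothetical API)."""
    with urllib.request.urlopen(f"https://{stream_manager_host}/connections") as resp:
        return json.load(resp)["active"]

def wait_for_drain(stream_manager_host: str, poll_seconds: int = 60) -> None:
    """Block until the old environment has no remaining sessions."""
    while active_sessions(stream_manager_host) > 0:
        time.sleep(poll_seconds)

wait_for_drain("green-streammanager.example.com")
print("No active sessions left; the old environment can be shut down.")
```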

The process is even easier for apps without autoscaling, as there is only a single server deployed. The new version of Red5 Pro is installed on a separate single instance, leaving the production version unaffected. Then the client-side code is changed to point to the updated server for testing. Once everything looks right, the production server is updated with the latest code.
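
A quick smoke test before repointing the test client can save you from switching to a server that is not actually up. The sketch below simply confirms the upgraded instance accepts connections on Red5 Pro’s usual HTTP port (5080); the hostname is a placeholder and the port may differ in your setup.

```python
# A minimal smoke test for the single-server flow: confirm the upgraded
# instance answers on its HTTP port before pointing any client code at it.
# The hostname is a placeholder; adjust host and port for your deployment.
import socket

def server_is_reachable(host: str, port: int = 5080, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the upgraded server succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if server_is_reachable("upgraded-server.example.com"):
    print("Upgraded server is up; point the test client at it.")
else:
    print("Upgraded server is not reachable yet; do not switch traffic.")
```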

For more on upgrading with specific cloud providers, and upgrading autoscaled Red5 Pro with the Stream Manager, please follow our documentation:

A Note on Stream Managers and Load Balancers:

As you can see from the Blue-Green deployment strategy outlined above, using a load balancer with an application is extremely useful. If you don’t currently use one, you may want to, as it will make the updating process much easier. Since the load balancer sits in front of the Stream Manager, as soon as you turn off your old production environment, your clients will automatically be connected to the new Stream Manager of the upgraded system.

The Blue-Green deployment strategy is thus a very useful way to develop and update an application. If you have any questions about setting one up for yourself, please send a message to info@red5.net or schedule a call.