Cloud disaster recovery: Can you trust your provider?

A cloud application stack can be assembled from thousands of service-layer products, each with its own set of major features. How can you be sure it will recover from a business interruption or disaster?

As an information security executive, what are your concerns about disaster recovery and business continuity for your cloud applications? In Organizing sensitive data in the cloud, I mentioned that configuration information for each cloud service layer (software, platform, infrastructure, and security) needs to be kept in a directory. I have a significant concern, though. Today, there are hundreds to thousands of permutations of vendor product configurations that may be deployed in the cloud. The sheer number of features supported by each product is mind-numbing.

See also: Small clouds: Security selection criteria

This makes disaster recovery and business continuity a nightmare. Only financial services companies invest the money necessary to replicate their applications and core infrastructure so that a disaster can be handled effectively; this is too expensive for many small and medium-sized corporations. What is the key to disaster recovery success? The cloud provider needs to minimize the number of product vendors, and the corresponding features, that it deploys. This reduces the number of permutations that must be tested, so a cloud user can have assurance that the cloud provider's solution will work for them.
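A back-of-the-envelope calculation shows why limiting choices matters: the test matrix is the product of the options available at each service layer. The numbers below are hypothetical, chosen only to illustrate the combinatorics.

```python
# Hypothetical sketch: the DR test matrix for a four-layer stack
# (software, platform, infrastructure, security) is the product of
# the configuration options offered at each layer.
def deployment_permutations(options_per_layer):
    """Total distinct stack configurations that would need DR testing."""
    total = 1
    for n in options_per_layer:
        total *= n
    return total

# An unconstrained provider: 20 vendor/feature configurations per layer.
unconstrained = deployment_permutations([20, 20, 20, 20])  # 160,000 stacks

# A provider that caps each layer at five options.
constrained = deployment_permutations([5, 5, 5, 5])        # 625 stacks
```

Capping each layer at five options shrinks the matrix from 160,000 configurations to 625, which is what makes thorough pre-disaster testing affordable.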

I'll examine a cloud application scenario. How should the directory be designed to assist in deploying a cloud-based application? A cloud application is supported by a web server that interfaces with a database, which runs on an operating system contained within a virtual machine. The virtual machine acquires the network and storage resources it needs to support the application. The various virtualized networking products and storage components also need templates associated with them.

It should be noted that the use of virtualization layer services will greatly increase. More applications can share ever-growing processor capacity, slices of network bandwidth (10 megabit to 40 megabit), and a growing quantity and quality of storage (high-performance solid-state drives, or hard disk drives larger than 1 terabyte). So the virtualization layer will be an ever more prominent piece of cloud computing as time goes on.

Back to directory service modeling of a cloud application. I'll model web servers using templates in the LDAP directory. Web server vendors can be compared to different makes of car, each with its own vendor template. Then I'll model supporting web server features (like a given car model), such as web session (HTTPS) encryption techniques, the necessary public key infrastructure, and the maximum number of concurrent users that the web server needs to support. Common collections of critical web server features (like the car's engine) may also need grouping.
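To make this concrete, here is a minimal sketch of how a vendor template and a feature entry beneath it might sit in the LDAP tree. All distinguished names, object classes, and attribute names are hypothetical, and the tree is modeled as a plain dictionary rather than a live directory.

```python
# Hypothetical LDAP layout: a vendor template (the "car") with a feature
# entry beneath it (the "car model"): TLS settings, PKI anchor, capacity.
web_server_templates = {
    "cn=apache-httpd,ou=webservers,ou=templates,dc=provider,dc=example": {
        "objectClass": ["top", "webServerTemplate"],
        "vendor": "Apache",
        "product": "HTTP Server",
    },
    "cn=standard-tls,cn=apache-httpd,ou=webservers,ou=templates,dc=provider,dc=example": {
        "objectClass": ["top", "webServerFeatureSet"],
        "tlsProtocol": "TLSv1.2",
        "pkiTrustAnchor": "cn=provider-root-ca,ou=pki,dc=provider,dc=example",
        "maxConcurrentUsers": "5000",
    },
}

def children_of(tree, parent_dn):
    """Return entries directly beneath parent_dn (one level down)."""
    return {dn: attrs for dn, attrs in tree.items()
            if dn.endswith("," + parent_dn)
            and dn[:-len(parent_dn) - 1].count(",") == 0}
```

A provisioning tool would walk from the vendor template down to the chosen feature set, exactly as `children_of` does here for the feature entries under `cn=apache-httpd`.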

The database, operating system, and virtual machines should be modeled in a manner similar to the web server, with common prominent features grouped in the same way. This grouping is effective only if a limited number of permutations are necessary (such as small, medium, and large). A variety of templates also needs to be created for each type of storage and network device the cloud provider supports.

Where do all of these templates come from? Each product vendor should submit XML definitions describing its service-layer products and their major features to a national or international cloud product repository. The cloud provider cherry-picks the product vendors and the corresponding features it wants to deploy. A configuration program can then parse the XML definitions in the provider's menu of solutions for a given service layer; only five or fewer options should be supported for each product within a service layer. The parser then populates the LDAP directory with all the templates and their relevant options.
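A parser of that kind could look like the following sketch. The XML schema, element names, and the `parse_product` helper are all hypothetical; the point is only that a vendor submission can be turned mechanically into a flat template record, with the "five or fewer options" rule enforced at parse time.

```python
import xml.etree.ElementTree as ET

# Hypothetical vendor submission: the product and its major features.
VENDOR_XML = """
<product layer="software" vendor="ExampleCo" name="ExampleWeb">
  <feature name="tlsProtocol" value="TLSv1.2"/>
  <feature name="maxConcurrentUsers" value="5000"/>
</product>
"""

MAX_OPTIONS_PER_PRODUCT = 5  # the "five or fewer options" rule

def parse_product(xml_text):
    """Turn a vendor XML definition into a flat template record."""
    root = ET.fromstring(xml_text)
    features = {f.get("name"): f.get("value")
                for f in root.findall("feature")}
    if len(features) > MAX_OPTIONS_PER_PRODUCT:
        raise ValueError("too many options; trim the vendor submission")
    return {
        "layer": root.get("layer"),
        "vendor": root.get("vendor"),
        "product": root.get("name"),
        "features": features,
    }
```

The resulting records are what the configuration program would write into the LDAP directory as templates.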

A hierarchy is created from the LDAP directory showing the vendor and product features chosen for each service layer (software as a service, platform as a service, infrastructure as a service, and security as a service). This hierarchy is used to provision a given cloud application and its supporting functions. Each service-layer component reads its configuration from the directory and self-initiates. Initial testing then occurs to determine whether the application satisfactorily meets customer expectations. If not, another permutation is selected, provisioned, and tested. This continues until the customer is satisfied with both price and product performance.
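That provision-and-test cycle can be sketched as a simple loop. Everything here is hypothetical: `provision` and `test` stand in for the directory-driven deployment and the initial performance testing described above.

```python
# Sketch of the provision-and-test loop: walk the limited set of supported
# permutations until one meets the customer's price and performance bar.
def right_size(permutations, provision, test, budget):
    for perm in permutations:       # limited, pre-tested configurations
        app = provision(perm)       # each layer reads its directory config
        result = test(app)          # initial testing against expectations
        if result["meets_sla"] and result["monthly_cost"] <= budget:
            return perm             # customer satisfied: price + performance
    return None                     # no supported permutation fits
```

Because the permutation list is short by design, the loop terminates quickly, and every configuration it can ever return is one the provider has already tested for recovery.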

In summary, disaster recovery and business continuity are difficult to address because of the ever-expanding choice of service-layer products and features, so product choices and features need to be limited. A limited number of XML schemas associated with vendor products and features is selected to serve the cloud user, and the directory is populated by parsing the XML data. The provider then uses the directory templates to provision the software as a service, platform as a service, infrastructure as a service, and security as a service products. Additional provisioning using other directory models may need testing to right-size the application. The result is a scenario where disaster recovery is reasonable: a limited number of deployment permutations are supported, enabling thorough testing and future recovery for cloud users.
