Server Virtualization and Control Contexts

Traditional database servers are relatively easy to track: you stand up a physical box and place the database on it.  Because of the costs and other constraints involved, each new physical system is monitored closely by business and change managers.  That constraint is typically missing from virtualized environments.  Because network infrastructure engineers can bring up a virtual server with little effort, they tend to respond quickly to business or IS requests for additional server resources.  Risk due to virtualization is easily managed with a little planning, a few processes and policies, and a network segmentation plan that lets engineers ensure data security without introducing another layer of complexity.  The outcome is a set of control contexts into which database servers are placed based on the classification of the data they store or process.

Control Context Defined

The term “security context” is typically used to describe the framework governing user or application authentication and authorization.  It is closely related to the framework of controls used to secure data in a datacenter, but the two are not the same.  This is where a control context fills the gap.  A control context is a collection of infrastructure controls that both harden and monitor critical resources and the paths leading to and from them.  To better understand this concept, let’s look at Figure 1. 

Figure 1

VLAN 1 in this example has no special controls in place.  Standard application and operating system access controls apply.  Server hardening is accomplished manually during setup.  No VLAN Access Control List (VACL) exists for that segment.  This set of conditions, and those existing in the wider context of the enterprise network, constitutes the control context for VLAN 1. 

The control context for VLAN 2 is much different, and consists of the following:

  1. A VACL is configured in the Enterprise Switch which allows passage of packets only from approved servers or end-user devices. 
  2. An inline threat management device acts as a gateway, monitoring for network or packet anomalies.  It also provides extrusion prevention capabilities.
  3. All unstructured data is filtered and indexed to easily comply with discovery requests or place documents on legal hold.
  4. Each server in this network segment must be placed into an Active Directory OU designed for the server category into which it falls.  For example, database servers are placed into a DB server OU, and application servers are placed into an application server OU.  A group policy object (GPO), which applies appropriate security templates, is attached to each OU.  This enforces secure configuration rules.
  5. Finally, each database server is monitored to ensure proper access controls are maintained for direct access.  Any change to security configurations is reported and reviewed by security analysts.
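The VACL in item 1 is essentially a source allowlist enforced at the switch.  A minimal Python sketch of the idea (the addresses and function names are hypothetical; a real VACL lives in switch configuration, not in software):

```python
from ipaddress import ip_address, ip_network

# Hypothetical allowlist: only these sources may send packets into VLAN 2.
APPROVED_SOURCES = [
    ip_network("10.10.20.0/24"),   # approved application servers
    ip_network("10.10.30.15/32"),  # a single approved admin workstation
]

def vacl_permits(src: str) -> bool:
    """Return True if a packet from src would pass the VLAN 2 allowlist."""
    addr = ip_address(src)
    return any(addr in net for net in APPROVED_SOURCES)
```

Anything not explicitly approved is dropped, which is exactly the default-deny posture that distinguishes VLAN 2's control context from VLAN 1's.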

There are other controls which might be appropriate.  But this list is “good enough,” providing insight into the differences between a weak control context and one designed to protect sensitive data.  The concept of control context is a good fit for virtualized data centers.
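The monitoring in item 5 can be approximated with a configuration fingerprint: hash the security-relevant settings, and flag any drift from the approved baseline for analyst review.  A sketch under assumed names (the settings dictionary and the review hand-off are illustrative, not a specific product's API):

```python
import hashlib
import json

def fingerprint(security_config: dict) -> str:
    """Stable hash of a server's security-relevant settings."""
    canonical = json.dumps(security_config, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def config_changed(baseline: str, current_config: dict) -> bool:
    """True if the config drifted from its approved baseline (report for review)."""
    return fingerprint(current_config) != baseline
```

Serializing with sorted keys makes the hash deterministic, so only a genuine settings change, not a reordering, triggers a report.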

Control Context and Virtualization

One of the advantages of virtualization is the ability to quickly bring up a copy of a production, QA, or development server.  Security and change management policies and processes shouldn’t be so strict that this benefit all but disappears.  This is where control contexts can help.

In a properly segmented datacenter, some segments will be more secure than others, depending on the data processed or stored there.  In other words, some segments will have stronger control contexts than others.  Any servers placed into a secure segment in accordance with documented implementation and change management processes should be as trustworthy as any other server in the segment.  This includes virtual servers.  So the key is to stop worrying about security on individual servers and focus on the control context in which a specific class of server will operate.

When using control contexts instead of individual server security configurations, some training is necessary.  Engineers must understand where a specific server type can or cannot be implemented.  In many cases, however, they will be spinning up a copy of an existing server for QA, development, or service recovery testing.  When these servers contain PII or ePHI, the engineers can simply bring them up within the same VLAN as the original server.  Because both servers will operate within the same control context, the level of risk will remain commensurate with management’s expectations. 
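A placement decision like this reduces to a lookup from data classification to the segment, and therefore the control context, in which the server may run.  A sketch using hypothetical classification labels and VLAN names:

```python
# Hypothetical policy: data classification -> VLAN / control context.
PLACEMENT_POLICY = {
    "public": "VLAN1",     # baseline controls only
    "internal": "VLAN1",
    "pii": "VLAN2",        # hardened control context
    "ephi": "VLAN2",
}

def required_segment(data_classification: str) -> str:
    """Return the segment whose control context matches the data's classification."""
    try:
        return PLACEMENT_POLICY[data_classification.lower()]
    except KeyError:
        raise ValueError(f"no placement rule for classification: {data_classification}")
```

An unrecognized classification fails loudly rather than defaulting to a weak segment, which keeps the burden of proof on the requester, not the control context.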

The Final Word

One of the biggest concerns facing security managers is controlling the spread of virtual servers.  We can suit up and engage in battle, performing a plethora of server-level vulnerability tests whenever engineers submit a change request to bring up a new server; or we can make everyone’s job easier with a virtualization-friendly environment using control contexts.  I vote for easier…

Copyright © 2009 IDG Communications, Inc.
