Containers are a small, fast, and easy-to-set-up way to deploy and run software across different computing environments. Because a container holds an application's complete runtime environment, including libraries, binaries, and configuration files, it abstracts away the underlying platform and infrastructure, allowing the application to run more or less anywhere. Containers are available from all the major cloud providers as well as in on-premises data centers and hybrid clouds. Plus, they can save companies a lot of money.

Using containers, developers can create "microservices," which are essentially small, reusable components of an application. Because they are reusable, microservices can save developers time, and they are deployable across different platforms.

It's no surprise, then, that container adoption is high. Unfortunately, security teams are still learning how containers work and how best to lock them down. Around 80 percent of organizations with more than 500 employees now use containers, according to a recent McAfee survey of 1,500 global IT professionals, but only 66 percent have a security strategy for those containers. In fact, containers are now tied with mobile devices as the biggest security challenge for organizations, according to a March survey of 1,200 IT decision makers by CyberEdge.

There are multiple reasons why security is a challenge in the container universe. One is the speed at which containers are deployed. Another is that containers typically require applications to be broken into smaller services, resulting in increased data traffic and complex access control rules. Finally, containers often run in cloud-based environments, such as Amazon's, with new kinds of security controls.

The ecosystem of container security tools is not yet mature, according to Ali Golshan, cofounder and CTO at StackRox, a Mountain View-based cloud security vendor. "It's like the early days of virtual machines and cloud," he says.
"Organizations need to build proprietary tools and infrastructure to make it work, and it needs a lot of resources to implement. There are not a lot of ready-made solutions out there, and not enough solutions to cover all the use cases."

The life of a container is poorly managed and short

The traditional software development process (build, test, deploy) quickly becomes irrelevant in the age of containers. In fact, developers often grab ready-to-use images from public repositories and throw them up into the cloud.

"There's some implicit level of trust there that may or may not be warranted," says Robert Huber, chief security and strategy officer at Eastwind Networks. A container image is a convenient packaging of ready-to-go code, but providers might not have the time or interest to monitor for security issues or publish release notes, he says.

"Ideally, you have a process to check the versioning, but I haven't seen any organization that does that," Huber says. "Companies should continuously check that the latest versions of the containers are the ones that are being used, and that all the code is patched and up to date. But right now, it comes down to the developer, and a manual check. I do believe that organizations will move to some process that's more automated, but right now there's a gap. It's fire-and-forget. You pull a container, run it, and you're done."

It's not much better when developers build their own containers. The speed of development means that there's no time for quality assurance or security testing. By the time someone notices that the containers are there, they've done their job and are gone.

"The lifecycle might be over by the time the security team can go in," says Bo Lane, head of solution architecture at Kudelski Security. "That's the challenge, and it requires a different mindset for security." Security awareness needs to be built in early in the development process, he says, and automated as much as possible.
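The automated version check Huber calls for can be sketched in a few lines. This is a minimal illustration with hypothetical image names and digests; a real pipeline would fetch both lists from the container registry's API rather than hardcoding them.

```python
# Sketch of an automated image-freshness check (hypothetical data):
# compare the image digests actually running against the latest
# published digests, and flag anything stale or unknown.

def find_stale_images(running: dict[str, str], latest: dict[str, str]) -> list[str]:
    """Return names of images whose running digest differs from the
    latest published digest, or which the registry no longer knows."""
    stale = []
    for name, digest in running.items():
        if latest.get(name) != digest:
            stale.append(name)
    return sorted(stale)

if __name__ == "__main__":
    running = {
        "web/frontend": "sha256:aaa111",
        "web/api": "sha256:bbb222",
        "jobs/worker": "sha256:ccc333",
    }
    latest = {
        "web/frontend": "sha256:aaa111",  # up to date
        "web/api": "sha256:bbb999",       # newer build published
        "jobs/worker": "sha256:ccc333",
    }
    print(find_stale_images(running, latest))  # ['web/api']
```

Run on a schedule rather than at deploy time only, a check like this closes the "fire-and-forget" gap Huber describes: a pulled image keeps getting compared against what the registry currently publishes.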
For example, if developers are downloading an image from an external source, it needs to be scanned for vulnerabilities, unpatched code, and other potential issues before the container goes live. "And once that container goes live, how do they maintain and monitor the state of its security for something that's potentially very short-lived, and interacts with other components?" he asks.

Take, for example, Skyhigh Networks. The cloud security vendor has its own cloud services offerings, so it is dealing with all these challenges, says Sekhar Sarukkai, co-founder of Skyhigh Networks and VP of engineering for McAfee Cloud, which acquired Skyhigh earlier this year.

"We are deploying the latest architecture stacks, we have microservices," he says. "In fact, we can deploy into production multiple times a day. Traditionally, you'd have security testing or penetration testing. That doesn't work in a DevOps environment."

Enterprises have to find ways to automate a lot of these functions, he says. That means being able to identify all the containers that are being deployed, make sure all their elements are safe and that they're being deployed into a secure environment with application controls or application whitelisting, and then follow up with continuous monitoring.

McAfee now has a product that does just that, announced in April at the RSA Conference: the McAfee Cloud Workload Security platform. "It secures Docker containers and workloads in those containers in both public and private cloud environments," says Sarukkai. That includes AWS, Azure, and VMware. "It's the first, I think, cloud workload solution that can quarantine infected workloads and containers," he says.

The product can also reduce configuration risks by checking for, say, unnecessary administrator privileges, unmet encryption requirements, or even AWS buckets that are set to be publicly readable. "It also increases the speed at which you can remediate," he says.
"It can improve it by as much as 90 percent, from the studies that we've done with our customers."

Almost all of the container security issues he's seen so far, he says, arose because containers weren't configured correctly. "I think that's where the biggest risk lies," he says.

A massive web of services

Configuration management and patch management are difficult to do and easy for attackers to exploit, but they are solvable issues. A more daunting challenge is the complexity created by breaking an application into a large number of smaller, interconnected services.

With traditional, monolithic applications, there's one service and just a couple of ports. "You know exactly where the bad guys are going to try and get in," says Antony Edwards, CTO at Eggplant.

That makes it easier to secure, he says. "However, with microservices, you have lots of services and often many ports, so that means there are many more doors to secure. Plus, each door has less information about what's going on, so it's harder to identify if someone is a bad guy."

That puts the burden on ensuring that the security of the individual services is as tight as it can be, he says, with principles such as least privilege, tight access controls, isolation, and auditing. "All this stuff has been around since the 1970s; we now just need to do it," Edwards says.

That's easier said than done. "Organizations are breaking their monoliths into smaller and smaller chunks, and the data flows get so much more complex within the application that it gets hard to tell what every microservice does," says Manish Gupta, co-founder and CEO at ShiftLeft.

If there's a hard-coded access credential in the mix, or an authentication token that's being leaked, the entire system becomes vulnerable. "This is a really big issue, and people don't recognize how big of a problem this is," says Gupta.

The problem is only getting bigger, he adds, as more critical systems are moved to a software-as-a-service delivery model.
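The hard-coded credential problem Gupta raises is one of the few in this list that lends itself to a fully automated check. The sketch below is illustrative only: the patterns are hypothetical and far from exhaustive, and real secret scanners also use techniques such as entropy analysis.

```python
import re

# Minimal sketch of a pre-deploy secret scan: flag lines of source
# text that look like hard-coded credentials. Patterns here are
# illustrative, not a complete rule set.

SECRET_PATTERNS = [
    # keyword = "some literal value" (password, secret, api key, token)
    re.compile(r"""(?i)(password|passwd|secret|api[_-]?key|token)\s*[:=]\s*['"][^'"]{4,}['"]"""),
    # the characteristic shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_hardcoded_secrets(source: str) -> list[int]:
    """Return 1-based line numbers that look like hard-coded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

if __name__ == "__main__":
    sample = 'db_host = "db.internal"\npassword = "hunter2-prod"\ntimeout = 30\n'
    print(find_hardcoded_secrets(sample))  # [2]
```

Wired into a CI step that fails the build on any hit, even a crude scan like this catches the cheapest version of the leak before the credential starts flowing between microservices.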
"That means you are concentrating a lot of your data in your apps. Equifax is a great example; Uber is a great example," he says. "Now this very sensitive, important data is flowing between microservices, and few people have good visibility into it."

Leaky containers create vulnerabilities

There's another potential security challenge with containers: they run in a shared environment, which is particularly worrisome in public clouds, where customers don't know who their neighbors are. In fact, vulnerabilities in the Docker and Kubernetes container management systems have been discovered over the past couple of years.

Companies running containers in a public cloud are starting to recognize this issue. "With most of the customers I speak with, they ask directly about what are the tools available to isolate the host from container escape and isolate containers from each other," says Kirsten Newcomer, senior principal product manager for Red Hat's container platform, OpenShift.

More than 70 percent of respondents run their containers on Linux, according to Portworx's 2017 container adoption survey. Features that administrators can use to make sure that containers stay isolated include Linux namespaces and Security-Enhanced Linux (SELinux) for an additional layer of mandatory access controls, says Newcomer. "And then there's something called Linux capabilities, which allows you to limit the different kinds of privileges within a Linux system that a process has access to."

These may be familiar concepts to Linux security experts, but they might be new to teams deploying containers, or to organizations that recently moved over from Windows. At least companies running their own container environments, whether on public or private clouds, have full control over these security settings.
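The Linux capabilities Newcomer mentions are concrete and inspectable: each process carries a bitmask of allowed privileges, visible as the CapEff field in /proc/&lt;pid&gt;/status. The sketch below decodes such a mask; the table covers only a handful of well-known capability bits, not all of them.

```python
# Decode a CapEff hex bitmask (as found in /proc/<pid>/status) into
# capability names, to show what a containerized process may do.
# Bit numbers match linux/capability.h; only a few are listed here.

CAP_NAMES = {
    0: "CAP_CHOWN",
    1: "CAP_DAC_OVERRIDE",
    2: "CAP_DAC_READ_SEARCH",
    5: "CAP_KILL",
    10: "CAP_NET_BIND_SERVICE",
    12: "CAP_NET_ADMIN",
    21: "CAP_SYS_ADMIN",
}

def decode_capabilities(capeff_hex: str) -> list[str]:
    """Translate a CapEff hex bitmask into the capability names we know."""
    mask = int(capeff_hex, 16)
    return [name for bit, name in sorted(CAP_NAMES.items()) if mask & (1 << bit)]

if __name__ == "__main__":
    # 0x401 sets bits 0 and 10: a process allowed to chown files and
    # bind low ports, and nothing else in our table.
    print(decode_capabilities("0000000000000401"))  # ['CAP_CHOWN', 'CAP_NET_BIND_SERVICE']
```

Dropping everything a container does not need (for instance, a web frontend rarely needs more than binding a low port) is the practical form of the least-privilege principle in a container context.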
When they're using off-the-shelf containers, they have to trust the cloud provider to get the underlying security infrastructure right.

So far, none of the vulnerabilities that allow processes to escape containers has resulted in a major public breach. But the fact that the space is dominated by just a handful of platforms, with Docker and Kubernetes the big names, means that a single vulnerability can have very broad impact if attackers exploit it quickly, so it pays to be prepared.