Their granularity, deployment speed, and data traffic volume require new approaches to securing container environments.

Containers are a small, fast, and easy-to-set-up way to deploy and run software across different computing environments. Because a container holds an application's complete runtime environment, including libraries, binaries, and configuration files, it abstracts away the underlying platform and infrastructure, allowing the application to run more or less anywhere. Containers are available from all the major cloud providers as well as in on-premises data centers and hybrid clouds. Plus, they can save companies a lot of money.

Using containers, developers can create "microservices," which are essentially small, reusable components of an application. Because they are reusable, microservices can save developers time, and they are deployable across different platforms.

It's no surprise, then, that container adoption is high. Unfortunately, security teams are still learning how containers work and how best to lock them down. Around 80 percent of organizations with more than 500 employees now use containers, according to a recent McAfee survey of 1,500 global IT professionals, yet only 66 percent have a security strategy for those containers. In fact, containers are now tied with mobile devices as the biggest security challenge for organizations, according to a March survey of 1,200 IT decision makers by CyberEdge.

There are multiple reasons why security is a challenge in the container universe. One is the speed at which containers are deployed. Another is that containers typically require applications to be broken into smaller services, resulting in increased data traffic and complex access control rules. Finally, containers often run in cloud-based environments, such as Amazon's, which come with new kinds of security controls.

The ecosystem of container security tools is not yet mature, according to Ali Golshan, cofounder and CTO at StackRox, a Mountain View-based cloud security vendor. "It's like the early days of virtual machines and cloud," he says. "Organizations need to build proprietary tools and infrastructure to make it work, and it needs a lot of resources to implement. There are not a lot of ready-made solutions out there, and not enough solutions to cover all the use cases."

The life of a container is poorly managed and short

The traditional software development process (build, test, deploy) quickly becomes irrelevant in the age of containers. In fact, developers often grab ready-to-use images from public repositories and throw them up into the cloud. "There's some implicit level of trust there that may or may not be warranted," says Robert Huber, chief security and strategy officer at Eastwind Networks. A container image is a convenient package of ready-to-go code, but its providers might not have the time or interest to monitor for security issues or publish release notes, he says.

"Ideally, you have a process to check the versioning, but I haven't seen any organization that does that," Huber says. "Companies should continuously check that the latest versions of the containers are the ones that are being used, and that all the code is patched and up to date. But right now, it comes down to the developer, and a manual check. I do believe that organizations will move to some process that's more automated, but right now there's a gap. It's fire and forget it. You pull a container, run it, and you're done."
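Closing that gap is not complicated in principle. What follows is a minimal sketch in Python, assuming the Docker CLI is installed and the image was originally pulled from a registry, that flags a locally cached image whose tag no longer matches what the registry currently serves. The image name is a hypothetical example.

import subprocess

def repo_digest(image: str) -> str:
    """Return the registry digest recorded for a locally cached image.

    Assumes the image was pulled from a registry, so RepoDigests is set.
    """
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{index .RepoDigests 0}}", image],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def image_is_current(image: str) -> bool:
    """Compare the cached image's digest with what the registry now serves."""
    cached = repo_digest(image)
    # Re-pull the tag; Docker only downloads layers if the registry copy changed.
    subprocess.run(["docker", "pull", image], capture_output=True, check=True)
    return cached == repo_digest(image)

if __name__ == "__main__":
    image = "nginx:1.25"  # hypothetical example image
    if not image_is_current(image):
        print(f"{image} was stale; a newer build was pulled from the registry.")

In a real pipeline a comparison like this would run on a schedule or as a deploy gate, which is the kind of automation Huber expects organizations to move toward.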
It's not much better when developers build their own containers. The speed of development means that there's no time for quality assurance or security testing. By the time someone notices that the containers are there, they've done their job and are gone.

"The lifecycle might be over by the time the security team can go in," says Bo Lane, head of solution architecture at Kudelski Security. "That's the challenge, and it requires a different mindset for security."

Security awareness needs to be built in early in the development process, he says, and automated as much as possible. For example, if developers are downloading an image from an external source, it needs to be scanned for vulnerabilities, unpatched code, and other potential issues before the container goes live. "And once that container goes live, how do they maintain and monitor the state of its security for something that's potentially very short lived, and interacts with other components?" he asks.

Take, for example, Skyhigh Networks. The cloud security vendor has its own cloud services offerings, so it is dealing with all these challenges, says Sekhar Sarukkai, co-founder of Skyhigh Networks and VP of engineering for McAfee Cloud, which acquired Skyhigh earlier this year. "We are deploying the latest architecture stacks, we have microservices," he says. "In fact, we can deploy into production multiple times a day. Traditionally, you'd have security testing or penetration testing. That doesn't work in a DevOps environment."

Enterprises have to find ways to automate many of these functions, he says. That means being able to identify all the containers that are being deployed, make sure all their elements are safe, confirm they're being deployed into a secure environment with application controls or application whitelisting, and then follow up with continuous monitoring.

McAfee now has a product that does just that, announced in April at the RSA Conference: the McAfee Cloud Workload Security platform. "It secures Docker containers and workloads in those containers in both public and private cloud environments," says Sarukkai. That includes AWS, Azure, and VMware. "It's the first, I think, cloud workload solution that can quarantine infected workloads and containers," he says.

The product can also reduce configuration risks by checking for, say, unnecessary administrator privileges, unmet encryption requirements, or even AWS buckets that are set to be publicly readable. "It also increases the speed at which you can remediate," he says. "It can improve it by as much as 90 percent, from the studies that we've done with our customers." Almost all of the container security issues he has seen so far, he says, stem from containers that weren't configured correctly. "I think that's where the biggest risk lies," he says.
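McAfee hasn't published how its checks work, but the publicly readable bucket case illustrates the class of problem. Here is a minimal sketch using Python and the boto3 AWS SDK that lists S3 buckets whose ACLs grant access to everyone; it's an illustration of this kind of configuration check, not the product's code.

import boto3

# Grantee URI that S3 uses for "everyone" in bucket ACLs.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable_buckets():
    """Yield (name, permission) for S3 buckets whose ACL grants all users access."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") == ALL_USERS:
                yield name, grant["Permission"]
                break

if __name__ == "__main__":
    for name, permission in publicly_readable_buckets():
        print(f"Bucket {name} grants {permission} to everyone")

Running a check like this on every deploy, rather than in a quarterly audit, is what makes the difference in an environment that ships to production several times a day.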
A massive web of services

Configuration management and patch management are difficult to do, and easy for attackers to exploit, but they are solvable issues. A more daunting challenge is the complexity created by breaking an application into a large number of smaller, interconnected services.

With traditional, monolithic applications, there's one service and just a couple of ports. "You know exactly where the bad guys are going to try and get in," says Antony Edwards, CTO at Eggplant. That makes such applications easier to secure, he says. "However, with microservices, you have lots of services and often many ports, so that means there are many more doors to secure. Plus, each door has less information about what's going on, so it's harder to identify if someone is a bad guy."

That puts the burden on ensuring that the security of the individual services is as tight as it can be, he says, with principles such as least privilege, tight access controls, isolation, and auditing. "All this stuff has been around since the 1970s; we now just need to do it," Edwards says.

That's easier said than done. "Organizations are breaking their monoliths into smaller and smaller chunks, and the data flows get so much more complex within the application that it gets hard to tell what every microservice does," says Manish Gupta, co-founder and CEO at ShiftLeft.

If there's a hard-coded access credential in the mix, or an authentication token that's being leaked, the entire system becomes vulnerable. "This is a really big issue, and people don't recognize how big of a problem this is," says Gupta.

The problem is only getting bigger, he adds, as more critical systems move to a software-as-a-service delivery model. "That means you are concentrating a lot of your data in your apps. Equifax is a great example; Uber is a great example," he says. "Now this very sensitive, important data is flowing between microservices, and few people have good visibility into it."

Leaky containers create vulnerabilities

There's another potential security challenge with containers: they run in a shared environment, which is particularly worrisome in public clouds, where customers don't know who their neighbors are. In fact, vulnerabilities in the Docker and Kubernetes container management systems have been discovered over the past couple of years.

Companies running containers in a public cloud are starting to recognize this issue. "With most of the customers I speak with, they ask directly about what are the tools available to isolate the host from container escape and isolate containers from each other," says Kirsten Newcomer, senior principal product manager for Red Hat's container platform, OpenShift.

More than 70 percent of respondents run their containers on Linux, according to Portworx's 2017 container adoption survey. Features that administrators can use to keep containers isolated include Linux namespaces and Security-Enhanced Linux (SELinux), which adds a layer of mandatory access controls, says Newcomer. "And then there's something called Linux Capabilities, which allows you to limit the different kinds of privileges within a Linux system that a process has access to."

These may be familiar concepts to Linux security experts, but they might be new to teams deploying containers, or to organizations that recently moved over from Windows. At least companies running their own container environments, whether on public or private clouds, have full control over these security settings. When they're using off-the-shelf containers, they have to trust the cloud provider to get the underlying security infrastructure right.

So far, none of the vulnerabilities that allow processes to escape containers have resulted in a major public breach. But the fact that the space is dominated by just a handful of platforms, with Docker and Kubernetes the big names, means that a single vulnerability can have very broad impact if attackers exploit it quickly, so it pays to be prepared.
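Being prepared can start small. The sketch below, again a Python wrapper around the Docker CLI, flags settings on a running container that weaken the isolation Newcomer describes: privileged mode, added Linux capabilities, and mounts of sensitive host paths. The container name is a hypothetical example, and a toy audit like this is no substitute for SELinux or a hardened platform.

import json
import subprocess

# Host paths that should rarely, if ever, be mounted into a container.
SENSITIVE_PATHS = ("/", "/etc", "/var/run/docker.sock")

def audit_container(container: str) -> list[str]:
    """Flag settings on a running container that weaken host isolation."""
    out = subprocess.run(
        ["docker", "inspect", container],
        capture_output=True, text=True, check=True,
    )
    host_config = json.loads(out.stdout)[0]["HostConfig"]
    findings = []
    if host_config.get("Privileged"):
        findings.append("runs in privileged mode")
    for cap in host_config.get("CapAdd") or []:
        findings.append(f"adds Linux capability {cap}")
    for bind in host_config.get("Binds") or []:
        host_path = bind.split(":", 1)[0]
        if host_path in SENSITIVE_PATHS:
            findings.append(f"mounts sensitive host path {host_path}")
    return findings

if __name__ == "__main__":
    for finding in audit_container("web-frontend"):  # hypothetical container name
        print(finding)

Checks like this won't stop a determined attacker, but they catch exactly the misconfigurations that, as Sarukkai notes, account for most container security issues today.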