I get it. I really do. Containers let data-center and cloud administrators pack two to three times as many server instances onto a given piece of hardware as they can with virtual machines. That means fewer servers, which means less power usage, which equals -- Ka-ching! -- less spending on your IT budget. What's not to like?
Well, ahem, you see there's this little, tiny problem. It’s unclear just how secure containers are, and there is certainly not much agreement on how to secure them or who will take that on.
As Libvirt lead developer Daniel Berrange said in 2011 about the popular LXC container technology:
"The DAC (discretionary access control) system on which LXC currently relies for all security is known to be incomplete and so it is entirely possible to accidentally/intentionally break out of the container and/or impose a DOS attack on the host OS. Repeat after me "LXC is not yet secure. If I want real security I will use KVM."
That was then. This is now.
Things are better. For example, most modern container technologies can make use of Linux's built-in security mechanisms, such as AppArmor, SELinux, and seccomp policies; control groups (cgroups); and kernel namespaces; along with hardening patch sets such as grsecurity.
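To make that less abstract, here's a minimal sketch -- my illustration, using the Docker SDK for Python on an AppArmor-enabled host, not anything from a vendor's docs -- of a container launch that opts in to several of those mechanisms at once:

```python
import docker

client = docker.from_env()

# Launch a throwaway container with every Linux capability dropped,
# Docker's stock AppArmor profile applied, and a cgroup cap on how many
# processes it may spawn. (Docker applies a default seccomp profile too.)
output = client.containers.run(
    "alpine:latest",
    "id",
    cap_drop=["ALL"],                          # no Linux capabilities at all
    security_opt=["apparmor=docker-default"],  # stock AppArmor confinement
    pids_limit=100,                            # cgroup limit on process count
    remove=True,
)
print(output.decode())
```

The point is that every one of those knobs is opt-in; nothing forces a deployment to turn them.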
Wow! With all those security acronyms, don't you feel safe? You shouldn't. Here's why.
Docker and other container technologies run as root
I quote directly from the security documentation of Docker, the most popular of all container technologies: "Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges."
Yes, that's right: Docker, and many other container technologies, need root access to do their magic. And, as with any other program that needs root access, great power comes with great opportunities to wreak havoc.
As Aaron Cois, a researcher at Carnegie Mellon University's CERT Division, recently told a DevOps publication, "One of the biggest threats I see with Docker is its positioning and the implied security in the language. The reality is that these containers don’t contain anything." With root access in play, that's indeed the case.
Sure, you can, as Docker suggests, restrict the ability to use Docker to "trusted users," but even trusted users have a terrible habit of using bad passwords. It's all too easy for trusted users to do untrustworthy things.
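If that sounds abstract, consider what any member of the docker group can do. Here's a minimal sketch -- my illustration, using the Docker SDK for Python, not Docker's documentation -- that bind-mounts the host's root filesystem into a throwaway container, at which point a "trusted user" is reading files only root should see:

```python
import docker

client = docker.from_env()  # any user in the "docker" group can do this

# Mount the host's root filesystem into the container, then read a
# root-only file. No sudo, no password prompt required.
shadow = client.containers.run(
    "alpine:latest",
    "cat /host/etc/shadow",                          # host password hashes
    volumes={"/": {"bind": "/host", "mode": "ro"}},  # host root, read-only
    remove=True,
)
print(shadow.decode())
```

That's why "access to the Docker daemon" and "root on the host" are, in practice, the same privilege.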
To add insult to injury, containers themselves don't need root access. Sure, servers and virtual machines (VMs) need root for SSH, cron, syslogd, and so on. Containers don't; those jobs can be handled by the operating system hosting the container. So, if you lock down the container, simply getting root inside it shouldn't be that big a deal.
Or, is it?
As Lenny Zeltser, an NCR security expert, recently wrote, "since Docker doesn't provide each container with its own user namespace, there's no user ID isolation." A process running as root (UID 0) in a container has root-level privileges on the underlying host when interacting with the kernel.
On second thought, never, ever run anything as root within a container. If you do, you'll just be asking for trouble.
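In practice, that means telling Docker to start the container's process as an unprivileged UID. Here's a minimal sketch, again with the Docker SDK for Python, and assuming the image in question can run as a non-root user:

```python
import docker

client = docker.from_env()

output = client.containers.run(
    "alpine:latest",
    "id -u",
    user="1000:1000",   # an unprivileged UID:GID, not root's UID 0
    cap_drop=["ALL"],   # belt and suspenders: no capabilities either
    read_only=True,     # the container's root filesystem is immutable
    remove=True,
)
print(output.decode())  # prints "1000", not "0"
```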
Zeltser added, "Docker isolates many aspects of the underlying host from an application running in a container without root privileges. However, this separation is not as strong as that of virtual machines, which run independent OS instances on top of a hypervisor without sharing the kernel with the underlying OS."
And, of course, if a hacker can access the underlying operating system, there's that darn root-privileged Docker daemon waiting to cause trouble.
In addition, there are more ways to reach the daemon than from inside a container. Since Docker, and other container systems, are typically set up using Representational State Transfer (REST) application programming interfaces (APIs), that leaves a lot of potentially vulnerable attack surface for hackers. Docker itself suggests that if you provision containers using web services via an API, you should be extra careful about parameter checking. If you elect to do this, you must use Secure Sockets Layer (SSL) web connections, and making the connection over a virtual private network (VPN) wouldn't be amiss either.
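For what it's worth, the Docker daemon can be told to require mutually authenticated TLS on its TCP socket. The sketch below, using the Docker SDK for Python, shows the client side; the host name and certificate paths are placeholders of mine, not values from anyone's documentation:

```python
import docker

# Client certificate, key, and CA file are placeholders; generate and
# distribute your own. Port 2376 is the conventional TLS port for the
# Docker daemon.
tls_config = docker.tls.TLSConfig(
    client_cert=("/certs/client-cert.pem", "/certs/client-key.pem"),
    ca_cert="/certs/ca.pem",
    verify=True,  # reject daemons whose certificates this CA didn't sign
)

client = docker.DockerClient(
    base_url="tcp://docker-host.example.com:2376",
    tls=tls_config,
)
print(client.version())  # the handshake fails unless both sides authenticate
```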
Given the Docker daemon's potential to be used for harm, it's also a good idea to run Docker instances on their own servers. The only other software that should run on those servers is an SSH server plus monitoring and logging programs such as the Nagios Remote Plugin Executor (NRPE).
What software are you using in that container?
If you build all the software you use in your container, you'll be as safe as your own security skill set allows. Unfortunately, many folks already get their containers from public container repositories. The trouble here is that you can't know exactly what you're getting. Is it really a MySQL container, or is it a MySQL container with an SSH server waiting for orders from its black-hat creator?
You don't know. These containers need to be vetted. You can't simply grab a container off GitHub and run it without taking terrible risks. Using unknown containers is like… well, you can use your imagination about risky behaviors and the infections that might result.
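Vetting doesn't have to be exotic, either. As a bare minimum -- and this is my suggestion, not anyone's official procedure, again using the Docker SDK for Python -- you can pull an unknown image and read its build history before you ever run it:

```python
import docker

client = docker.from_env()

# Pull the image, but do NOT run it yet.
image = client.images.pull("mysql", tag="latest")

# Print the command that built each layer. An sshd install or a
# curl-piped-to-shell line in a "MySQL" image is a red flag.
for layer in image.history():
    print(layer.get("CreatedBy", ""))
```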
Say you do get the container from a "trusted" source. Can you actually trust it then? Maybe. Maybe not.
Docker, for example, introduced digital signatures in Docker 1.3 to automatically verify the provenance and integrity of all Official Docker Repos. But don't get too excited. The company went on to say that "this feature is still [a] work in progress: for now, if an official image is corrupted or tampered with, Docker will issue a warning but will not prevent it from running. And non-official images are not verified either. This will change in future versions as we harden the code and iron out the inevitable usability quirks. Until then, please don’t rely on this feature for serious security." Today, it's still a work in progress.
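Until signature enforcement arrives, one hedge -- my workaround, not Docker's recommendation -- is to pin images by their immutable content digest rather than by a mutable tag, so at least you keep getting the exact bytes you vetted:

```python
import docker

client = docker.from_env()

# Pull once by tag, then record the immutable content digest Docker reports.
image = client.images.pull("mysql", tag="latest")
digest = image.attrs["RepoDigests"][0]  # e.g. "mysql@sha256:..."
print("pin this reference:", digest)

# Deploy by digest from now on: a tag can be silently repointed to new
# (or tampered) content; a sha256 digest cannot.
pinned = client.images.pull(digest)
assert pinned.id == image.id
```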
Don't think I'm picking on Docker. With the exception of Solaris Zones, all container technologies share these problems. And unless you're running Solaris, Zones isn't going to help you migrate your server apps to containers.
Looking ahead
There's an old project-management saying that you can have only two out of three: fast, good, or cheap. With containers, says Matthew Garrett, Principal Security Engineer at CoreOS, the trade-off is convenient, cheap, or secure: choose two. Everyone who's not besotted by containers' charms knows this.
The question is "Who will do the work of making containers more secure?"
Garrett thinks that while modern container deployment tools make use of a number of kernel security features ... there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:
- Strong auditing and aggressive fuzzing of containers under realistic configurations
- Support for meaningful nesting of Linux Security Modules in namespaces
- Introspection of container state and (more difficult) the host OS itself in order to identify compromises
Since Garrett was recently hired by CoreOS, he'll be tackling these problems. As he went on to say:
"These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a 'convenient, cheap, secure: choose two' trade-off. That's not a winning strategy."
He's right; it's not easy. And, as he wisely observed on another occasion, "It has been 0 days since the last significant security failure. It always will be."
True, but containers will be made safer. They have to be.
It may, I'm sorry to say, take a disaster or two before security becomes job one for the fast-evolving container technologies, but it will. Then, once the lessons of doing security right sink in, containers will finally be ready for production.
This story, "For containers, security is problem #1," was originally published by ITworld.