Since its announcement in 2013, Docker has taken the IT world by storm. Major cloud providers, from Google to Amazon -- even Microsoft -- have announced and implemented support for container technologies. Like any other software, containers have their risks and benefits. In this blog we'll investigate the benefits of containers as well as related open source vulnerabilities and how to mitigate them.
The Rise of Containers
Containers have conceptually been around for a long time -- dating back to logical partitions (LPAR) on mainframes, Solaris Containers, and FreeBSD jails. Linux Containers (LXC) were first released in 2008; their popularity has grown with the development of tools, such as Docker, which use them as a foundation.
Containers differ from virtualization schemes, such as VMware, in that they have a considerably lighter footprint. All of the containers running on a host share the same system kernel. This enables shorter start-up times -- often in milliseconds -- as well as smaller memory requirements. Containers are segregated from each other, yet each running instance uses a copy-on-write layered file system, so numerous instances of the same container require little additional disk space after the first instance.
Containers have become especially popular in two areas: development environments and running applications as services. It is easy to create a standard environment in a container with the tools and libraries needed. Instead of spending days configuring their systems during on-boarding, new developers can be productive in minutes. Applications can be compiled into a container, and that same container used in QA and production, eliminating the "but it worked in my environment" defense. Preloaded database containers ensure that a known state exists for repeatable and reproducible testing. Evaluating new tools is easy, too: spin up a container and, once finished, no artifacts are left on the file system. Running applications within containers isolates them, eliminating issues of conflicting library versions, port conflicts, and operating system versions.
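As a minimal sketch of such a standard environment -- the base image tag, package names, and application layout here are illustrative assumptions, not a prescription -- a Dockerfile might look like:

```dockerfile
# Hypothetical build environment; image tag and package names are illustrative.
FROM ubuntu:14.04

# Install the toolchain every developer needs, in a single layer.
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
 && rm -rf /var/lib/apt/lists/*

# Copy the application in and build it the same way in dev, QA, and production.
WORKDIR /app
COPY . /app
RUN make

CMD ["./run-app"]
```

Because every developer builds from the same definition, the resulting environment is identical everywhere the image runs.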
The Docker Hub is a repository of container images -- both official and contributed. The official images are authoritative, blessed by the individual projects. There are official OS images, such as Ubuntu or CentOS, as well as images for applications such as MySQL, WordPress, Redis, or Java. Users and developers can upload images they have created; once an image is on the Docker Hub, it can be used by anyone. Companies can also run their own private registries -- these can store images produced by continuous integration for deployment, or act as a local cache.
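A private registry can be run from the official registry image, driven by a small YAML configuration file. A minimal sketch (the storage path and port are illustrative choices):

```yaml
# Minimal private-registry configuration sketch; paths and address are illustrative.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
```

Images are then pushed to it after being tagged with the registry's host and port (for example, an image tagged `localhost:5000/myapp` would be pushed to a registry listening locally on port 5000).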
Container Security Risks
In 2014, the discovery of Heartbleed and Shellshock brought to light vulnerabilities in core functionality, and all sorts of assumptions needed to be re-evaluated as servers were patched across the Internet. Containers, like any other system, can be vulnerable when a key strength -- the unchanging layered file system -- turns into a liability: inherited layers can contain these or other vulnerabilities.
In March of 2015, Banyan, an IT operations company, issued a report claiming that over 30% of the "official" images in the Docker Hub contained high-priority security vulnerabilities. Shortly thereafter, Jérôme Petazzoni, a Docker employee, wrote an insightful article about analyzing vulnerabilities in Docker images. While the claim, in and of itself, was valid, it did not tell the full story: a large percentage were old images, and some images were deliberately left unpatched. Docker has identified images with known security vulnerabilities but feels that they may still have value -- whether frozen for compatibility, for repeatable builds, or to reproduce bugs within a sandboxed environment. Moreover, if an application image is cryptographically signed for PCI or other compliance, modifying the build will necessitate a new cycle of testing and validation.
Any software can contain risks; it is the responsibility of organizations to analyze whether the risks outweigh the benefits. Open source can make this easier; however, it is important to know what is running on your systems and to stay abreast of vulnerability reports.
"Doveryai no proveryai" (trust, but verify) -- a Russian Proverb
- The Dockerfile, which defines a repeatable and reproducible method for creating images, can provide a measure of protection -- provided that proper discipline exists within the organization. However, it is not the source of truth -- it is possible for an image to be created without using a Dockerfile or to manually modify and commit an existing image.
- Using recent official images also helps to mitigate risk -- they are released and blessed by their respective projects. However, this can reduce the repeatability and reproducibility of your applications, requiring another cycle of testing.
- Running Docker containers on an isolated network might be an option for some organizations, but this can increase complexity and overhead.
- Scanning and identifying software within images can provide the most authoritative view of an image's vulnerability, but requires a commitment to perform the audit any time one of the image's component layers changes.
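The scanning approach above can be automated so the audit happens on every build. As one hedged sketch -- the pipeline syntax is generic, and the choice of the open source scanner Trivy and the image name are assumptions, not recommendations from this post -- a CI job might rebuild and scan the image whenever a layer changes:

```yaml
# Hypothetical CI job: audit the image whenever one of its layers changes.
scan-image:
  script:
    - docker build -t myapp:candidate .
    # Fail the pipeline on high-severity findings so the image is never deployed.
    - trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:candidate
```

Failing the build on findings, rather than merely reporting them, turns the audit commitment into an enforced gate.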
"Know Thy System; Prevention is Ideal, but Detection is a Must" -- Eric Cole
Containers are here to stay -- at least for the foreseeable future. While there are vulnerabilities to which they are susceptible, training and discipline go a long way towards mitigating them -- as they do for any other use of open source software. Their popularity, as well as support from analysts such as Gartner, demonstrates that the benefits far outweigh the risks.
Learn best practices for securing containers at www.securecontainers.com.