Linux container technology has evolved rapidly over the past year as adoption expands beyond large web companies to become the de facto way organizations are building distributed applications today. The technology has become more sophisticated to support multi-container, multi-host applications, and has even expanded beyond Linux to the Windows architecture, says Marianna Tessel, Senior Vice President of Engineering at Docker.
Docker, too, has evolved to meet its customers' needs through both its commercial and open source projects.
“Docker containers initially started out as a developer tool and have evolved to incorporate the features and capabilities users need to deploy container technology in production,” Tessel said.
Docker is also now participating in the Open Container Project, a Linux Foundation Collaborative Project to create open industry standards around container formats and runtimes.
What’s next for container technology? Tessel will present her view in a keynote session at LinuxCon, CloudOpen and ContainerCon North America, Aug. 17-19, 2015. Here she discusses container technology as it exists today, how it has changed, and the role that the Open Container Project will play in advancing container technology in the coming months and years.
Linux.com: What is the state of container technology today? Where is it succeeding and what are its challenges?
Marianna Tessel: In a short period of time, container technology has rapidly evolved to affect the way users and companies build, ship and run distributed applications. Containers have transformed the capabilities of developers and the companies that they work for – increasing productivity while reducing cost.
To give a couple of examples: Companies like ING are able to move faster through their development pipelines using Docker. In ING's case, it went from a monolithic application, with code changes reaching production on a timescale of months, to 300 changes a day that go from code commit to production in 15 minutes. Other organizations are using container technology to streamline their legacy application architectures into a more agile microservices environment. Booz Allen is working with a large federal agency to create a secure DevOps framework for application development teams as they evolve legacy applications into distributed applications running in the cloud. These applications are used in managing the government-wide systems for those who award, administer, or receive federal financial assistance contracts and intergovernmental transactions. To create a unified developer experience and provide a uniform set of tooling and shared content, this large government agency is using container technology to break up these applications into microservices.
The biggest challenge with container technology is probably the rapid rate of adoption. Uptake is faster than anyone could have imagined, so it has required Docker's ecosystem to evolve rapidly. Users and organizations want a way to maintain a seamless experience through the development lifecycle. As applications become more sophisticated and containers more widely adopted, the ecosystem is evolving as well, offering more tooling and options such as networking, storage, and monitoring.
Linux.com: Is security still an issue for containers? Why or why not?
Tessel: It is not about securing the container, it is about securing the application. Container technology actually provides another layer of protection for applications by isolating the application from the host, and applications from one another, without consuming additional resources from the underlying infrastructure, and by reducing the attack surface area of the host itself. Docker, for example, does this by leveraging and providing a usable interface to numerous security features in the Linux kernel. The security attributes of containers are well recognized, and even banking institutions such as Capital One are containerizing some of their critical applications.
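As a concrete illustration of narrowing that attack surface, the Docker CLI exposes several of the kernel features Tessel mentions as runtime flags. The sketch below is illustrative, not an official hardening recipe; the image and command are placeholders.

```shell
# Kernel namespaces and cgroups already isolate the containerized
# process from the host. These flags tighten things further:
#   --read-only   : mount the container's root filesystem read-only
#   --cap-drop ALL: drop all Linux capabilities the process would
#                   otherwise inherit
docker run --rm --read-only --cap-drop ALL alpine \
    echo "least-privilege container"
```

Dropping capabilities and mounting the filesystem read-only means a compromised process inside the container has far fewer kernel interfaces to abuse.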
Security will continue to be a topic of innovation. As applications are continually changing, the best methods for securing the application will need to evolve as well. Docker is continuing to hone its security capabilities and techniques to evolve from developer tooling to more sophisticated solutions that operations teams use in production. Docker Notary is designed to serve as a filter for the distribution of containers and Docker-related content in a project, including and especially in the production phase. This way, only digitally signed content that has been entered into Notary's registration system gets passed into production. Organizations using containers also need to ensure that they are developing in accordance with industry best practice recommendations. The Docker Bench for Security tool is a helpful utility that automates validating a host's configuration against the CIS Benchmark recommendations.
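From the command line, the two mechanisms above look roughly like this. This is a hedged sketch: the image name `myorg/myapp` is a placeholder, and the Docker Bench invocation is abbreviated; the project's README lists the full set of volume mounts for the current release.

```shell
# Enable Docker Content Trust (backed by Notary) for this shell
# session: pulls, pushes, and runs now require signed images.
export DOCKER_CONTENT_TRUST=1

# With trust enabled, pulling an unsigned image fails, while a
# signed image is verified before use. ("myorg/myapp" is hypothetical.)
docker pull myorg/myapp:1.0

# Run the Docker Bench for Security checks against this host's
# configuration, comparing it to the CIS Benchmark recommendations.
docker run --rm --net host --pid host --cap-add audit_control \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --label docker_bench_security \
    docker/docker-bench-security
```

The Bench tool runs as a container itself, which is why it needs host-level namespaces and the Docker socket mounted in.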
Linux.com: How has container technology changed over the past year?
Tessel: Container technology has evolved in both breadth and depth over the last year, becoming the de facto standard for organizations to build, ship and run distributed applications. Docker containers initially started out as a developer tool and have evolved to incorporate the features and capabilities users need to deploy container technology in production. Containers have become more sophisticated and widely deployed, expanding from a technology capable of managing single-container applications to one that handles multi-container, multi-host distributed applications. As a result, the type of organizations using container technology has expanded beyond the bleeding-edge web companies. We continue to see new use cases and usages, such as "Containers as a Service" and big data analysis applications. Finally, one of the most significant changes in container technology is the multi-architecture expansion of containers beyond Linux and Solaris to also include Windows.
Linux.com: What role do you see the new Open Container Project playing in advancing container technology?
Tessel: Users can fully commit to container technologies today without worrying that their current choice of any particular infrastructure, cloud provider, DevOps tool, etc. will lock them into any technology vendor for the long run. With one common standard, users can focus on choosing the best tools to build the best applications they can. Equally important, they will benefit by having the industry focus on innovating and competing at the levels that truly make a difference. Ultimately, the OCP will ensure that the original promise of containerization (portability, interoperability, and agility) isn't lost as we move to a world of applications built from multiple containers, run using a diverse set of tools across a diverse set of infrastructures.
Linux.com: How will Docker contribute to the new collaborative project?
Tessel: Docker is donating to the OCP both a draft specification for the base format and runtime and the code for a reference implementation of that specification. Docker has taken the entire contents of the libcontainer project (github/docker/libcontainer), including nsinit, along with all modifications needed to make it run independently of Docker, and donated it to this effort. This codebase, called runC, can be found at github/opencontainers/runc. libcontainer will cease to operate as a separate project. Docker will also contribute maintainers to the effort alongside CoreOS, Red Hat, and Google, as well as two independent developers.
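To make the donation concrete, runC can launch a container directly from a filesystem bundle, with no Docker daemon involved. The sketch below assumes a modern runC release; subcommand names have shifted across versions (early releases used `runc start` rather than `runc run`), and the directory names are illustrative.

```shell
# An OCI bundle is just a rootfs directory plus a JSON config file.
mkdir -p mycontainer/rootfs

# Populate the rootfs, for example by exporting an existing Docker image.
docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -

cd mycontainer

# Generate a default runtime spec (config.json), then run the
# container under the identifier "demo".
runc spec
runc run demo
```

Because runC consumes the same bundle format the specification describes, any tool that produces a conforming bundle can hand it to any conforming runtime, which is the interoperability the OCP is after.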
Marianna Tessel has over 20 years of experience in engineering and leadership, having worked for both large organizations and startups. She now runs the engineering organization at Docker, which actively contributes to the open source project and is also responsible for Docker’s commercial offerings. Before joining Docker, she was VP of engineering at VMware, having led a team of hundreds of engineers and was responsible for developing various VMware vSphere subsystems. She is known for catalyzing tremendous technology ecosystem growth and was included on the 2013 Business Insider Top 25 Most Powerful Women Engineers in Tech list.
Register now for LinuxCon North America, to be held Aug. 17-19, 2015 at the Sheraton Seattle.