Containers are amazing things — seriously. They speed up software deployment, which is critical in an era where good software makes winners and bad software makes losers.
Every business is a software business.
That’s an absolute claim, but it’s not a crazy one. Over the past decade, software has rapidly become the primary driver of innovation and disruption across industries. And it’s not just the Ubers and Netflixes of the world driving software innovation as a differentiator. For example, the modern car has around 300 million lines of code in it. For comparison, Microsoft Office has about 35 million lines of code. Even in an industry as “old school” as automotive, software is a defining driver of who gets ahead and who falls behind.
So, we understand that delivering better software is the key to separating high-performing businesses from low-performing ones. Yet a recent report by Atlassian and xMatters noted that while 41 percent of its respondents say they do DevOps, and 65 percent of that group say it produces benefits, a surprising 59 percent either didn’t know what DevOps was or whether they were doing it. Not only are businesses unsure of how to achieve continuous software delivery, they’re actually anxious in the pursuit. Unsurprisingly, the degree to which releases cause anxiety is a key indicator of deployment success rates.
It turns out containers are a great way to ease that anxiety. So, what are containers? What are the primary benefits of containers? And how can they fit into the pipeline?
Containers are essentially a scaled-down, executable piece of software that’s fully functional on its own. The code and its dependencies are packaged together and can run on a single host operating system. For developers, this ensures their software will work no matter where it’s deployed. A useful analogy, and an almost literal one, is shipping containers on a cargo ship. Imagine the difficulty of transporting materials if every item came in a differently sized and shaped container. With standardized containers, items are easier to move, can be stacked on top of one another and so on. Software containers apply the same principle.
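To sketch what “code and dependencies packaged together” looks like in practice, here is a minimal Dockerfile. The Python base image and the `app.py` and `requirements.txt` filenames are illustrative assumptions, not something from this article:

```dockerfile
# Pin the runtime so it is identical on every machine the image runs on
FROM python:3.12-slim

WORKDIR /app

# Dependencies are declared and installed inside the image,
# so the host needs nothing but a container runtime
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application code itself travels with the image
COPY app.py .

CMD ["python", "app.py"]
```

With a file like this, `docker build -t myapp .` produces an image that behaves the same wherever it’s run.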
What does this have to do with Agile, DevOps and Continuous Delivery?
How does this idea help a company build better software? Containers are a uniform means of distributing an application together with all of its dependencies and environment configuration. In the past, building a product without containers could be frustrating. If developers wanted to build something on their machines, it meant a visit to the wiki. Maybe the wiki was up to date, maybe it wasn’t. They might even have to chase down the last person who performed the process to verify the steps.
With containers, developers can build locally inside the container without worrying about environmental differences, such as OS patch levels. Ultimately, this creates a low-friction, efficient way of distributing software.
When considering the many benefits of containers, the comparison is often to a virtual machine (VM). In that context, speed at startup and shutdown is an advantage for containers. That’s because containers don’t have to boot up or shut down an entire OS; they only start or stop a process. While insignificant for a single instance, multiplied across hundreds or thousands of cycles that means valuable time savings. How long the process itself takes, though, is still up to the developer. If the app bootstraps in five seconds, then the instance will be up and running in five or six seconds. If the application takes two minutes to bootstrap, containers aren’t going to speed that up.
Other advantages of containers center on size and portability. Container images tend to be much smaller than VM images, so they’re quicker to transfer and there’s less time spent waiting for images to move around. Similarly, container portability is often better than that of other packaging methods: for example, environment variables and port mappings can be declared entirely within the container configuration.
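As one way to see that portability, a Docker Compose file can carry the environment variables and port mappings alongside the image itself, so they travel with the container rather than living on the host. The service, image and variable names below are hypothetical:

```yaml
# docker-compose.yml: environment and port mapping are part of
# the container definition, not host-specific setup
services:
  web:
    image: myapp:latest      # hypothetical image name
    ports:
      - "8080:80"            # host port 8080 -> container port 80
    environment:
      LOG_LEVEL: info        # example environment variable
```

Running `docker compose up` on any machine with a container runtime reproduces the same configuration.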
Ultimately, all of these things improve environment fidelity throughout the whole pipeline. Other benefits of containers include cost savings on OS licenses and security advantages.
Implementing container technologies
One of the best ways to use containers at scale, with redundancy and high availability, is with a container orchestration platform. A container orchestration platform makes configuration much easier. It can help auto scale as needed, set up a registry and, essentially, act as lifecycle management for the containers.
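As one concrete example of that lifecycle management, a Kubernetes Deployment declares how many replicas of a container should exist, and the platform keeps that many running. The application and image names here are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # hypothetical application name
spec:
  replicas: 3                  # the platform maintains three running copies
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest  # hypothetical image
          ports:
            - containerPort: 80
```

If a container crashes, the platform replaces it automatically to get back to the declared replica count.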
Where are data and state stored in a container orchestration platform, though? A SQL database can run in a container, but by default it writes to the file system inside that container. Shut the container down and the data is gone. To fix this, one must mount an external volume, use iSCSI, or use one of the persistent storage mechanisms the orchestration tools provide. So, the orchestration platforms are key when it comes to actually moving at scale, and they embody the fundamental idea behind infrastructure as code.
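In Kubernetes, for example, that persistence takes the form of a volume claim mounted into the container, so the data outlives any single container. The names, image and storage size below are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi            # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:16       # example database image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data  # data written here survives container restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```

The container can come and go; the volume, and the data on it, stays.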
Having outstanding software products and continuous software delivery is essential to competing in today’s business environment. Using containers is a sound strategy to move with confident velocity. The only way to win big is to move faster — faster to production, faster to fix issues and faster to innovation.
This article originally appeared on DevOps Agenda.