DockerCon Hackathon: Continuous Dockery

Last year, team Electric Cloud participated in the first annual DockerCon Hackathon and won as one of the top three submissions. This year, Nikhil and I returned to a bigger and badder hackathon event – evidence of Docker’s massive growth.

How it works

40+ teams of 1-10 hackers spent 24 hours working on a project from scratch. The categories for submission were:

  1. Build, Ship and Run Cool Apps with Docker
  2. Management & Operations: Logging, Monitoring, UI / Kitematic, Developer Tools, Deployment, CI / CD, Stats, etc.
  3. Orchestration: Composition, Scheduling, Clustering, Service Discovery, High Availability, Load Balancing, etc.
  4. Security, Compliance & Governance: Authorization, Provenance, Distribution, etc.
  5. Resources: Networking, Storage API, etc.

Everyone submitted a 2-minute video, and 10 teams were selected to present. Of those presenting, the judges selected the top three as winners.

Our plan

Electric Cloud exists to help people deliver better software faster.  We wanted to show how Docker fits in with other tools in the software delivery ecosystem.  Being experts in our own software, we decided to use Electric Cloud products to tie everything together – accelerating end-to-end Continuous Delivery, using:

  • ElectricFlow – an orchestration tool that acts as the single pane of glass from commit through production
  • ElectricAccelerator – an acceleration tool that dramatically speeds up builds and tests by distributing them across a cluster of CPUs

Last year’s entry focused on the Build stage of a continuous delivery pipeline.  This year, we focused on the Integration stage.

We built a deployment process – sketched in code after the diagram below – that:

  1. Dynamically spins up a VM on either EC2 or OpenStack
  2. Runs Docker Bench for security tests
  3. Retrieves artifacts from Bintray and Docker Hub
  4. Stands up linked MySQL and Wildfly containers running the application
  5. Runs Selenium tests distributed across a cluster
  6. Pushes some statistics to a Dashing dashboard
  7. Tears down the VM automatically if the tests are successful

The deployment process and the various technologies involved
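
To make steps 2 through 4 concrete, here’s a minimal sketch of the container-level work using the Docker SDK for Python (docker-py). The image tags, container names, and credentials are illustrative assumptions, not the exact configuration from our ElectricFlow procedures:

    # Sketch of steps 2-4: run Docker Bench, then stand up linked MySQL and
    # Wildfly containers. Names, tags, and passwords are placeholders.
    import docker

    client = docker.from_env()

    # Step 2: Docker Bench for Security audits the host's Docker setup.
    # It needs the Docker socket mounted (read-only) and host namespaces.
    bench_report = client.containers.run(
        "docker/docker-bench-security",
        network_mode="host",
        pid_mode="host",
        volumes={"/var/run/docker.sock":
                 {"bind": "/var/run/docker.sock", "mode": "ro"}},
        remove=True,
    )
    print(bench_report.decode())

    # Steps 3-4: pull images from Docker Hub, start the database first, then
    # link the application server to it so Wildfly can reach MySQL as "db".
    client.containers.run(
        "mysql:5.7",
        name="heatclinic-db",
        environment={"MYSQL_ROOT_PASSWORD": "secret",
                     "MYSQL_DATABASE": "heatclinic"},
        detach=True,
    )
    client.containers.run(
        "jboss/wildfly",
        name="heatclinic-app",
        links={"heatclinic-db": "db"},  # legacy --link-style networking
        ports={"8080/tcp": 8080},
        detach=True,
    )

In our entry these were ElectricFlow procedure steps rather than a standalone script, which is part of what makes them reusable across pipeline stages.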

A LOT to accomplish in 24 hours! But we were up for the task – and with a less-than-pretty version of this diagram chicken-scratched on a piece of paper, we got to work!

What we built

We chose a sample web application called The Heat Clinic because it has a couple of moving parts (an application server and a database), making it a somewhat realistic example. We started by building the Continuous Delivery pipeline.

The continuous delivery pipeline defined in ElectricFlow

For this hackathon, we focused on the Integration stage. Still, it’s important to understand the pipeline as a whole: to make the automation pieces reusable, you have to know how they’ll be reused. With that in mind, everything we built can be plugged into Production (or any other stage) with minimal effort.

The next step was modeling the application.  The Heat Clinic application has two tiers, one for the web application and one for the database.  Each of those tiers has a few different components (artifacts) – the Wildfly/MySQL containers from Docker Hub, the WAR file for the web application, configuration files, SQL initialization scripts, etc.  We defined the tiers, the components, and the processes to deploy or undeploy each of those components.

The application model defined in ElectricFlow
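
ElectricFlow captures this model in its own UI and DSL; purely to illustrate the shape of the model (this is not ElectricFlow’s actual syntax, and the mapping of artifacts to sources is approximated), it boils down to something like:

    # Illustrative shape of the Heat Clinic application model -- a plain data
    # sketch, NOT ElectricFlow's DSL. Artifact-to-source mapping approximated.
    heat_clinic = {
        "tiers": {
            "web": [
                {"component": "wildfly-container", "source": "Docker Hub"},
                {"component": "heatclinic.war",    "source": "Bintray"},
                {"component": "config-files",      "source": "Bintray"},
            ],
            "database": [
                {"component": "mysql-container",  "source": "Docker Hub"},
                {"component": "sql-init-scripts", "source": "Bintray"},
            ],
        },
        # Each component defines both a deploy and an undeploy process, so
        # the same model can drive Integration today and Production later.
        "component_processes": ["deploy", "undeploy"],
    }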

Next, we defined the deployment process that coordinates everything.  This process is closely aligned with the diagram shown earlier: spin up the dynamic environment, run the security tests, retrieve all the artifacts, stand up the containers (in the right order), run the Selenium tests, and tear down the environment if everything is successful.

The deployment process defined in ElectricFlow
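
For the EC2 flavor of the dynamic environment, the spin-up and conditional teardown bookends of this process look roughly like the boto3 sketch below. The AMI ID, instance type, key name, and the run_deploy_and_tests helper are all hypothetical placeholders for what ElectricFlow actually orchestrated:

    # Sketch of the environment bookends on EC2: create a VM, run everything
    # against it, and reclaim it only if the tests pass. All identifiers are
    # placeholders; the real work was driven by ElectricFlow procedures.
    import boto3

    ec2 = boto3.resource("ec2")

    def run_deploy_and_tests(host):
        # Stand-in for the middle of the process: deploy the containers to
        # `host`, run the Selenium suite, return True if everything passed.
        return True

    instance = ec2.create_instances(
        ImageId="ami-0abcdef1234567890",  # placeholder Docker-ready image
        InstanceType="m4.xlarge",
        KeyName="hackathon-key",
        MinCount=1,
        MaxCount=1,
    )[0]
    instance.wait_until_running()
    instance.reload()  # refresh metadata to pick up the public IP

    tests_passed = False
    try:
        tests_passed = run_deploy_and_tests(instance.public_ip_address)
    finally:
        # Mirror the pipeline's behavior: keep the VM around for debugging
        # on failure, tear it down automatically on success.
        if tests_passed:
            instance.terminate()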

The Selenium suite we put together took a long time to run, and we realized this is not uncommon for Selenium.  So we sped up the Selenium test suite by using ElectricAccelerator.  By distributing the 101 tests across just two 4-core VMs, Accelerator used its patented secret sauce to parallelize and run the tests on the individual cores, bringing the overall time down from >27 minutes to <4 minutes.  That’s 7 times faster with just 2 machines!  If we were to add more VMs to our cluster, we could bring that time down to <30 seconds.  That’s a whopping 60 times faster!

Visualizing how ElectricAccelerator distributed the Selenium tests across a cluster
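
ElectricAccelerator’s scheduling is proprietary, but the arithmetic behind the speedup is straightforward: 101 tests in a bit over 27 minutes works out to roughly 16 seconds per test, so 8 cores running tests in parallel lands in the 3-4 minute range. Here’s a toy illustration of the fan-out (the per-test timing is simulated; a real runner would drive actual browser sessions):

    # Toy illustration of distributing a serial Selenium suite across cores.
    # This is NOT how ElectricAccelerator works internally; it just shows why
    # the math holds: ~16s per test, 8 workers => roughly 8x less wall-clock.
    import time
    from concurrent.futures import ProcessPoolExecutor

    def run_selenium_test(test_name):
        # Stand-in for a real browser session against the Heat Clinic app.
        time.sleep(16)  # avg per-test time implied by 101 tests in ~27 min
        return (test_name, "PASS")

    if __name__ == "__main__":
        tests = [f"test_{i:03d}" for i in range(101)]
        start = time.time()
        with ProcessPoolExecutor(max_workers=8) as pool:  # two 4-core VMs
            results = list(pool.map(run_selenium_test, tests))
        failures = [name for name, status in results if status != "PASS"]
        print(f"{len(results)} tests in {time.time() - start:.0f}s, "
              f"{len(failures)} failures")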

Finally, we put a pretty face on our work by pushing some key stats to a Dashing dashboard – typically displayed on a TV screen so everyone has an “at a glance” view of the health of the system.

Dashing dashboard showing key statistics
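
Dashing widgets are updated over a simple HTTP API: you POST JSON, including the dashboard’s auth token, to /widgets/<widget_id>. Here’s a sketch using the requests library; the host, token, widget IDs, and values are hypothetical:

    # Push key stats to Dashing. Dashing's widget API takes a JSON POST
    # (including the dashboard's auth_token) at /widgets/<widget_id>.
    # Host, token, widget IDs, and values below are placeholders.
    import requests

    DASHING_URL = "http://dashboard.example.com:3030"
    AUTH_TOKEN = "YOUR_AUTH_TOKEN"

    def push_stat(widget_id, payload):
        payload["auth_token"] = AUTH_TOKEN
        resp = requests.post(f"{DASHING_URL}/widgets/{widget_id}",
                             json=payload)
        resp.raise_for_status()

    # After a pipeline run, e.g.:
    push_stat("test_time", {"current": 228})      # seconds, with Accelerator
    push_stat("tests_passed", {"current": 101})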

Our submission

While we did not win this time around, we did come out with a very cool story and a working set of integrations highlighting Docker in the context of Continuous Delivery.  Here are the pain points we looked to address:

  • You’re looking at Docker but need to tie it together with a bunch of existing tools
  • You’re looking to increase your velocity by implementing Continuous Delivery & Continuous Testing
  • You need to gather and surface critical stats for your applications
  • You want to make sure you’re auditing for security at the earliest possible stage
  • You want to run your long-running integration tests early and often

Check out the entire flow in this short 3-minute video we included in our submission:

We’re already looking forward to the DockerCon Hackathon next year.  It will be interesting to see what the rapidly changing Docker landscape looks like by then!


How to integrate Docker as part of your CD pipeline

Container technology like Docker promises to provide versionable, environment-independent application services in a snap. However, the tasks and tools involved in creating, validating, promoting, and delivering Docker containers into production environments are many, complex, and time-consuming.

To learn more about how to successfully incorporate Docker into your end-to-end Continuous Delivery pipeline, I invite you to join my colleague Nikhil Vaze and me for an upcoming webinar, where we’ll be discussing:

  • How you can tie together all of your existing tools to repeatedly deploy high quality applications using Docker
  • Common use cases and patterns for incorporating Docker in your software delivery pipeline
  • How you can eliminate confusion and ensure auditability by centrally managing multiple containers across environments
  • How to enable tracking and reporting on container build, test and runtime stats
  • How to accelerate lead time and feedback loops by crushing build and test times by up to 60X

Register for the webinar »

Tanay Nagjee

As a Software Engineer for five years, Tanay Nagjee developed several core features of a highly scalable orchestration/automation platform. A generalist at heart, he assumed various responsibilities and especially enjoyed working with customers on real-world use cases. Now a Solutions Architect and manager of the Solutions Engineering team, he tackles complex software delivery problems every day.
