I recently came back from Portland, OR, where I got to speak at HashiConf. I had a great time at the conference, and also got to tour a little bit of Portland and taste the famous Voodoo donuts:
My presentation at HashiConf was about how we at Electric Cloud have used several of HashiCorp's tools, orchestrated by our own ElectricFlow, to create and share a reusable demo library across the organization, from Dev and QA all the way to Sales. The overall pattern I spoke about was: find the business problem, determine the requirements, and then find a solution that works. For each of the tools we used, I explained how it addressed the business problem we had at the time, how we got buy-in from stakeholders across the organization, and how we configured it as part of the overall solution.
HashiConf will publish the video recording of my talk soon (for now, help yourself to the slides on Speaker Deck, embedded below). In this post I'll explain a bit about how we used Terraform for infrastructure as code (a similar outcome could be achieved with Chef or Puppet).
Using Terraform to Bootstrap a WildFly Cluster:
A WildFly cluster in domain mode requires some coordination between the domain controller and the host controllers. Our model is described in the picture below: we deploy to the domain controller, and users are served by the individual hosts.
Terraform allowed us to write a definition of the infrastructure we wanted, check it into source control, and share it with all interested parties. (Check out our code on GitHub: https://gist.github.com/nikhilv/74a107af7e44866e3f14)
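To make this concrete, here is a minimal, hypothetical sketch of what such a definition might look like for one domain controller and two host controllers on AWS. This is not our actual configuration (see the gist for that); the AMI ID, instance type, key name, and resource names are all placeholders:

```hcl
provider "aws" {
  region = "us-west-2"
}

# Placeholder: the WildFly domain controller, which we deploy to
resource "aws_instance" "domain_controller" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.medium"
  key_name      = "demo-key"       # placeholder key pair

  tags {
    Name = "wildfly-domain-controller"
  }
}

# Placeholder: the host controllers that serve user traffic
resource "aws_instance" "host_controller" {
  count         = 2
  ami           = "ami-12345678"
  instance_type = "t2.medium"
  key_name      = "demo-key"

  tags {
    Name = "wildfly-host-${count.index}"
  }
}

output "domain_controller_ip" {
  value = "${aws_instance.domain_controller.public_ip}"
}
```

Because the whole topology lives in a few text files like this, reviewing a change to the cluster is just a code review.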
Once the Terraform files are written, we can see a visual representation of the execution plan.
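For example, assuming the Terraform CLI and Graphviz's `dot` are installed, the plan and a rendered dependency graph can be produced like this:

```shell
# Preview the execution plan without changing any infrastructure
terraform plan

# Render the resource dependency graph as an image (requires Graphviz)
terraform graph | dot -Tpng > graph.png
```
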
After using Terraform, we came to appreciate having reproducible infrastructure: it allowed for fast experimentation while still preserving building blocks that we could reuse for other demos in the future.
To keep a record of the changes that Terraform makes, an operator should not run Terraform directly against production. Instead, it should be executed from within a tool that tracks history and lets users accept or reject the proposed infrastructure changes.
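One way such a gate can be built is around Terraform's saved plan files. A sketch, assuming the orchestration tool wraps invocations like these:

```shell
# Capture the proposed changes in a plan file
terraform plan -out=proposed.tfplan

# ...the orchestration tool records the plan file and asks a
# human to approve or reject the proposed changes...

# On approval, apply exactly the reviewed plan and nothing else
terraform apply proposed.tfplan
```
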
Once Terraform had created the infrastructure, we used ElectricFlow to deploy applications to the WildFly cluster. One statement I made during the presentation that got a lot of responses was to be mindful of how much of your process you build through the UI, since that work usually cannot be reused (with the exception of cloning). For example, Terraform lets you define as code much of the AWS configuration you'd normally enter through the AWS console, which makes your configuration documented, checked into source control, and repeatable. For the same benefits, we used the ElectricFlow DSL to define our process as code, so that our deployment and application processes are also versioned in source control and reusable. It also lets us clone and scale more quickly, since we only need to copy code rather than repeat work in the UI.
Thank you HashiConf for inviting me to speak, and thanks for the great conference and for this AWESOME jacket!
Latest posts by Nikhil Vaze (see all)