In a recent episode of the Continuous Discussions (#c9d9) podcast, a panel of industry experts talked about Continuous Delivery and Deployment Pipelines, and how abstracting and modeling those pipelines can help spur innovation and creativity.
The panel included: Helen Beal, DevOpsologist at Ranger4 and DevOps editor at InfoQ; Matthew Skelton, head of consulting at Conflux; and Electric Cloud’s Sam Fell and Anders Wallgren.
Continue reading for some of their top takeaways!
The Key Takeaways
Wallgren discussed the importance of the pipeline for software: “It becomes a very important feature of the product because ultimately you don’t ship if you don’t have one. Having the ability to make it adaptable, to make it grow along with you and change as you change your tooling and your architecture is key. Otherwise, you’ll find yourself bound into a particular architecture just because of your pipeline.”
Beal talked about the importance of value stream mapping. She believes it is a powerful tool to get started with any kind of DevOps evolution: “It usually gets a bunch of people in the room that hardly ever get in the same room together and they start looking at things and understanding things in a way that they just haven’t understood before.”
There are some important benefits that have resulted from the evolution of the pipeline and automation, but it requires a new mindset according to Skelton: “Getting fast feedback from being able to automate and fix problems really quickly. That kind of mindset is very different from tinkering away with code until it might be ready and perhaps it will work out and come together…We’re proving ourselves all the time, and it’s a much more scientific way of doing it. Let’s try and falsify this hypothesis by running a lot of tests against it and we’ll get the results. And if it fails, it’s not a problem.”
In agreement with Skelton, Fell emphasized the dependability of an automated pipeline, which comes from its repeatability: “If you put the exact same things in you’ll get the exact same stuff out. Once you get to that point anything that you put in that’s different becomes a hypothesis that you’re testing, and whether it’s a failure or a success – it’s still a success as long as you’re able to see it.”
Beal touched on the continuous learning piece of the Three Ways of DevOps: “It’s quite telling when we go into an organization to do some discovery and ask people how they feel about failure. You can immediately tell a lot about how forward-thinking that organization is. And it’s quite fun to have conversations with them about how to embrace failure a bit more.”
Wallgren related the fear of failure to the education system: “I’m wondering how much of this fear of failure is part of the educational model. You only take the test once, you only turn in the paper once, you get one grade and then you’re done – you move on to the next thing. In industry, that’s not how it works: you ship 1.0, you ship 1.01, and you build on failure and hopefully learn from it and get better.”
Skelton also weighed in on the elements that he believes encompass the Three Ways: “It means that deployment pipelines are absolutely fundamental, telemetry is absolutely fundamental and making both of those things happen across all the environments that we want to have in play is absolutely essential. For me that’s what gives us the Three Ways of DevOps.”
Fell offered his insight on DevOps tools: “Folks are out there now using these different tools to get different outcomes. Let’s focus on what the outcomes need to be and then as an enterprise, let’s focus on what’s the best way for us to get those outcomes. Is it to have 50,000 different products that help us with that, or is it to have one product, one enterprise license?”
Wallgren sees value in self-service models: “Service models are becoming more prevalent in terms of if I need an environment to do some testing, should I have to go through the architecture committee again? Should I have to go through SecOps again, even though it’s the same as last time? Providing all of the pipeline components that are blessed and well-known in a self-service delivery mechanism is very powerful.”
Beal talked about common gaps in the pipeline around security: “There’s an exercise we do in our DevOps Foundation course where we get people to architect their current pipeline and then we pick holes in it and look for gaps in it. And security is a classic one – a lot of people aren’t doing security very well early on in the pipeline, so those areas miss out on that kind of software hygiene.”
Skelton added to Wallgren’s thoughts on self-service models: “I think that there are a couple of things that can be real enablers. One, is this self-service pipelines-as-a-service. That needs some greater maturity in some of the platform teams. We actually need to really treat this platform as a product, and need to understand our users, our customers, our development teams, testers, BAs, software developers.”
Adding more from the tools perspective, Fell believes you can think about existing scripts like a tool: “If you can abstract that script out, you can use the same script that you have. But if you make it part of an orchestrated pipeline and you’re feeding it parameters, then you get a lot more consistency. Once you’ve got it as part of your pipeline, as a service, or part of your product, you can start refactoring it.”
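To make Fell’s point concrete, here is a minimal sketch of wrapping an existing shell script as a parameterized pipeline step. The function name and parameters are hypothetical, not taken from any particular pipeline tool; the idea is simply that the pipeline feeds values in as arguments instead of the script hard-coding them.

```python
import subprocess

def run_pipeline_step(script, environment, version):
    """Invoke an existing shell script as a parameterized pipeline step.

    The pipeline passes the target environment and version as arguments,
    so the same unmodified script runs consistently everywhere.
    """
    result = subprocess.run(
        ["sh", script, environment, version],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Step failed in {environment}: {result.stderr}")
    return result.stdout
```

Once the script is called this way, every invocation is recorded by the orchestrator with its exact inputs, which is what makes the step repeatable and safe to refactor later.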
Watch the full episode: