We’ve recently welcomed two new additions to our Advisory Board: Nicole Forsgren and John Willis join Gene Kim and Gary Gruver as Electric Cloud’s strategic advisors.
As we set to work with each of the advisors, we also took the opportunity to pick their brains about where DevOps is heading, what key things we should know as we set out on this journey, and which emerging technologies and patterns they have their eye on. We’re excited to share the tips and insights from these DevOps luminaries in this short Q&A series – starting off with Dr. Nicole Forsgren!
DevOps Q&A – with Nicole Forsgren:
Dr. Nicole Forsgren is an IT impacts expert who is best known for her work with tech professionals and as the lead investigator on the largest DevOps studies to date. She is a consultant, expert, and researcher in knowledge management, IT adoption and impacts, and DevOps. In a previous life, she was a professor, sysadmin, and hardware performance analyst. She has been awarded public and private research grants (funders include NASA and the NSF), and her work has been featured in various media outlets and several peer-reviewed journals and conferences. She holds a PhD in Management Information Systems and a Master’s in Accounting.
@nicolefv | Website
Q: In your experience, what is the biggest challenge for adopting and scaling DevOps in the enterprise?
Right now, I think the biggest challenge for organizations is focusing on prioritization and doing the right things to accelerate their technology transformations. So often, companies and organizations want to take the easy way out and just “buy” their DevOps solution – which usually means buying a technology or automation tool. At the same time, the DevOps crowd sings from the rooftops that DevOps is all about culture. And then the agile and lean practitioners chime in that process is important.
And here’s the thing: in a way, everyone is right. The research I’ve conducted with DORA (Jez Humble and Gene Kim) and the team at Puppet over the past four years, which draws on 25,000+ respondents from thousands of organizations across all industries, shows us that successful technology transformations need technology, process AND culture. We need all three.
Making this even more complicated is the fact that there are over twenty key capabilities that we know drive improvements in the ability to develop and deliver quality software quickly and reliably – and this software delivery performance contributes to an organization’s bottom line, as measured by profitability, productivity, and market share.
But we can’t tell our teams to work on 20 things at once. People suck at multi-tasking. In the past, and even today, companies took a scattershot approach to improvement: guessing about what they should do. In the beginning of the DevOps movement, this was good enough. But today, the best are getting better and it’s a competitive market. To accelerate your transformation in a world of limited resources, organizations need to be strategic about where they devote their resources. (And by resources I mean both time and money.) Cost of delay is a very real thing: delay of getting features to market, delay of responding to compliance and regulatory changes, delay of getting your transformation underway, and delay of accelerating your transformation.
Our research shows that the High Performers are getting better every year, so maximizing your transformation in smart, strategic ways should be high priority for any organization that leverages software and technology to deliver products or services to customers.
Q: If you could leave us with just 3 takeaways for large-scale DevOps – what do you think we simply MUST know when we set off on this journey?
- The first pattern for large-scale DevOps would be to choose the right project to start with. It needs to be big enough and meaningful enough to “count.” It should be important enough to get resources and impact real customers, and when you deliver, it should catch the attention of those whose opinion matters. It should also be small enough that it can be turned around in a relatively short amount of time (about eight weeks) by a special team – possibly a team allowed to break some rules so they can move fast. This project should also be small enough that if it fails – because we must be able to learn from our mistakes and failures – the business will not fail. After all, this is a grand experiment. For this reason, greenfield applications are often good candidates.
- The second tip for large-scale DevOps would be to allow teams to select their own tools and not get stuck in the standardization trap. This actually hearkens back to some of my dissertation research, and we’re seeing more and more patterns of this emerge as we work with more customers in industry. Yes, there will be benefits from having a similar set of tools. But your teams are the experts. Trust them to make smart and wise decisions. You will run the risk of them doing things like resume building – but they, in turn, will run the risk of making their lives infinitely more difficult by having to support their entire development, test, and deployment pipeline, plus support work, by going out on their own. If they truly believe a wholly different tech stack (or, sure, resume building) is worth that level of effort, TRUST THEM. They may back out of that experiment. Let them also learn from that failure and don’t blame them. They were making the best decision they could with the information at hand, and they are the experts. You could be surprised at the results, and your teams will be happier, more productive, and your work will scale.
- My final tip is near and dear to me: never underestimate the importance of good measurement. Embrace the importance of a real baseline, even if it is bad. Measure regularly, because this will allow you to capitalize on the good things happening and let you stop doing the things that aren’t working. (Remember this mantra: small improvements are the key to large gains.) Measure both outcomes and inputs, working in a hypothesis-driven way. Have your metrics roll up throughout the organization, letting each team define and control their own destiny. If you don’t have any metrics right now, don’t be discouraged! Most organizations have few or no metrics in place. It’s the tech organization’s dirty little secret and we just don’t talk about it. Get started with a few metrics, keep what works, toss what doesn’t, and rotate them out when they’re no longer useful. Finally, capture metrics in groups of two or three, to avoid gaming of measures: by capturing metrics that are in tension, you’ll help capture the full picture and keep teams and management from being myopic.
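The idea of capturing metrics in tension can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the metric names (deployment frequency, change failure rate) and the thresholds are assumptions for the example, not prescriptions from the research – but it shows why pairing a speed metric with a stability metric keeps either one from being gamed in isolation:

```python
# Hypothetical sketch: evaluate a speed metric and a stability metric
# together, so improving one at the expense of the other is visible.
# Thresholds below are illustrative assumptions, not research findings.

def evaluate_pair(deploys_per_week, change_failure_rate):
    """Return signals from a pair of metrics held in tension."""
    signals = []
    if deploys_per_week >= 5 and change_failure_rate > 0.15:
        # Fast but fragile: speed metric alone would look great.
        signals.append("shipping fast, but too many changes fail")
    if deploys_per_week < 1 and change_failure_rate <= 0.15:
        # Stable but slow: stability metric alone would look great.
        signals.append("stable, but delivery is slow")
    if not signals:
        signals.append("speed and stability are in balance")
    return signals
```

Tracking either number on its own invites gaming; reviewing the pair together surfaces the trade-off each team is actually making.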
Q: What emerging DevOps technologies or patterns are you most excited by now, and why?
I’m excited about the growing understanding among many leaders in measurement about different types of metrics, what they are great for, what they are not-so-good for, the importance of complementary metrics, and the role they play in signaling to organizations along their transformation journey. For the last several years, there has been an assumption that only certain types of metrics were valuable, and this has dominated the discussion, even though several of us have been very aware that system data is an incomplete view of the system and not appropriate for all levels and types of measurement. I’ll be working with a handful of people on a whitepaper to help explain what we know, as well as another type of benchmarking. Stay tuned!
Q: What IT Ops skills are most important for the future?
Not just IT Ops skills – across the board, I think critical thinking, problem solving, computational thinking, and the ability to decompose a problem are essential for anyone working in industry and business today and into the future. I’ve seen them separate the excellent from the good. Add to that the ability to communicate – because it does you no good to have solved a problem if you can’t tell anyone about it.
Q: What is the most revealing DevOps stat you’ve heard recently, and why?
HA! I can’t say… My favorite recent stat is from the 2017 State of DevOps Report, which was just released! Keep an eye out for some stats around organizational performance as well as automation. There’s some great stuff in there!
More from Nicole Forsgren:
Nicole has shared her experience with us in the past on several of our #c9d9 video podcasts. See her in action, to learn how she “rubs science on things” (as she puts it) to help us make software better with metrics:
- #c9d9 Episode 60: Leading Change – Tips shared at DOES (see also Part 1 and Part 2 of the recap).
- #c9d9 Episode 59: DevOps Trends, Predictions & New Year Resolutions