As the founding partner of the DevOps Enterprise Summit with Gene Kim and IT Revolution, we’ve programmed several leadership, technical and training sessions to advance DevOps in the enterprise.
Electric Cloud’s conference talks and training sessions are focused on the conference themes of: Getting Business Buy-In; Security & Compliance; Ops & Next-Generation Leadership; Architecture; Technology for Technology Leaders; and Experience Reports.
Electric Cloud Customer and
Partner Presentations at DOES17
DevOps Transformation 2.0: From Ancestry.com to AdvancedMD – applying strategies for leading DevOps innovation
There are times when we get a second chance at doing something big. This is a report of lessons learned leading a DevOps transformation for the second time.
Having managed IT and DevOps at Ancestry.com, I now have the opportunity to build on these roots (pun intended!) leading a large-scale DevOps transformation at AdvancedMD – transforming the way healthcare works, with software.
I will share critical aspects of leading a DevOps transformation in the enterprise, tailored to the specific nature and workings of the organization. Will the transformation be big or small? Linear or non-linear? Will it require a big leap, or will it be adopted organically? I will discuss key frameworks for analyzing your situation, and proven patterns for scaling DevOps in the organization and overcoming some common challenges we all face along the journey.
From Mainframe to Microservices: How Somos Keeps Telcos and Us All Connected
Somos is a neutral and trusted agent responsible for managing and administering over 41 million Toll-Free Numbers for an entire industry in both the US and Canada. For over 30 years, this was done on a mainframe, written in PL/I and running against an IMS database. We decided to move off this platform due to cost and the difficulty of adding new features.
In early 2015, we embarked on a path to modernize our mainframe application and migrate to a distributed, microservices architecture. One of many challenges was how to modernize and build a DevOps culture and environment for Somos, moving to microservices applications while still running the legacy mainframe that “keeps the lights on”. We’ll share our challenges and patterns for supporting both containerized and mainframe workloads – evolving our processes, tooling and mindset to support modern delivery practices, accelerate our releases and scale DevOps across the company.
Betting on DevOps: How NetEnt Transforms Online Gaming Delivery
Aloisio Rocha, Agile Product Owner, NetEnt
NetEnt (NASDAQ OMX Net-B) is a pioneer in online gaming, entertaining the world for the past 20 years. Our innovative platform provides premium online and mobile gaming solutions to some of the world’s largest online casino operators in over 40 countries, handling more than 35 billion gaming transactions per year.
Our Dev organization started on a journey of Continuous Delivery, which created strain on our Ops teams, who had difficulty keeping up with the pace of updates. As the number of games and operators in our portfolio grew, we had to rethink our delivery pipeline to accelerate our releases and scale our operations to support the increased load and complexity of our backend apps.
Not to be outdone by Dev, our Ops teams decided to race them to the middle! (We’re big into gaming here.) We announced “Project TTM” – taking an “Ops-first” approach to our DevOps transformation, with an aggressive goal of cutting deployment time of new apps across the whole customer base from 6 months to 2 weeks, as well as accelerating and streamlining the onboarding of new customers – all within less than a year. And we did even better than we expected!
In this talk, I’ll share the 4 key tenets and critical paths of our “Ops-first” approach to DevOps, and some of the patterns we used to scale our adoption so quickly – across QA, deployments, app management, and more.
At NetEnt we show that when you bet on DevOps, Red (Dev) and Black (Ops) both win, and – of course – the house! :)
Intel’s Journey to Build Quality In: How QA and Test Automation Drive DevOps
Software quality matters. Building quality in and shifting-left testing are key principles of DevOps and Continuous Delivery. And yet, Testing and QA often end up as the “middle child” of your DevOps journey: the stage in the pipeline that’s too often put on the back-burner, until you realize you have a serious bottleneck, or when things break…
Test automation is one of the most difficult hurdles – and the Achilles’ heel – for large enterprises looking to accelerate their delivery. The problem is aggravated when you need to support legacy code, a complex matrix of targets and supported platforms, or the testing of embedded chips or devices that cannot be easily updated. Too often, we see testing handled manually – introducing risk, delays, re-work and unpredictable processes. Test engineers commonly scramble to navigate between the pace of Dev and the requirements of Ops around environments, compliance, security, and more.
In this talk, we’ll share Intel’s journey to systematically build quality in, treating testing and QA not just as an integral part of the pipeline, but as the key driver – and the poster child – of our DevOps transformation. Learn how Intel builds quality into the delivery pipeline, the patterns we’ve adopted to simplify the complex testing matrix and scale test automation, and the processes we have in place to ensure test coverage and security, detect errors quickly, and optimize for quality.
In addition, we’ll share how our detailed Quality dashboards and testing data became the key indicators – for both technical teams and the executives – to gauge our release-readiness, expected quality and DevOps maturity.
Mastering the Three S’s for a Successful Pipeline-as-a-Service Strategy: Standardization, Self-service, Scale
Urban Science is a global big data company specializing in performance optimization for the automotive, health and retail industries. We’ve been on a CD journey for the past 4 years – resulting in exponential improvements to our software delivery: a +8,500 increase in release cadence and number of deployments, and more than 50,000 manual work hours avoided – with engineers, developers and account teams put to better use.
We went through a strategic initiative to implement DevOps and CD as a cross-organizational, centralized platform – providing teams with a self-service Pipeline Catalog. This allowed them to easily choose the appropriate, vetted automation for their needs, automatically trigger the pipeline, and deploy their app at the click of a button.
I’ll share my learnings from overcoming the most common challenges to a “platform” approach to DevOps implementation. We’ll review patterns for successfully implementing and rolling out this strategy, addressing the three S’s required for success: Standardization, Self-service, and Scale. We’ll also discuss how to strike a balance between the need to enable and empower developers (and the flexibility to support “snowflake” configurations), and the need for standardization and on-demand, reusable pipelines – so that DevOps scales and the entire organization reaps the benefits.
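To make the self-service catalog idea concrete, here's a minimal Python sketch of how a vetted pipeline catalog might work: teams pick a template by name and trigger it with their app's parameters. The template names, stages, and functions are illustrative assumptions for this sketch, not Urban Science's actual platform or any specific product's API.

```python
# Hypothetical self-service pipeline catalog: a registry of vetted
# pipeline templates that any team can instantiate for its own app.
# All names here are illustrative, not a real system's.

VETTED_TEMPLATES = {
    "dotnet-web": ["build", "unit-test", "security-scan", "deploy"],
    "python-service": ["build", "unit-test", "lint", "deploy"],
}

def trigger_pipeline(template_name, app_name):
    """Instantiate a vetted pipeline template for a given application."""
    if template_name not in VETTED_TEMPLATES:
        raise ValueError(f"No vetted template named {template_name!r}")
    stages = VETTED_TEMPLATES[template_name]
    # In a real platform each stage would dispatch to automation;
    # here we just return the concrete run plan for the app.
    return [f"{app_name}:{stage}" for stage in stages]

run_plan = trigger_pipeline("dotnet-web", "billing-app")
```

The point of the pattern is that the templates are centrally vetted once, while each team self-serves the instantiation – the balance between standardization and empowerment the talk describes.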
Navigating the Software Delivery Minefield: DevOps and the Art of Release Engineering
Releasing software today is trivially easy, right? The DevOps landscape is covered with tools to address every aspect of software delivery, from committing code (in all those exciting new languages!) to deployment (on every environment from mainframes to serverless clouds) to monitoring performance and security once it’s in production! And our silo-free, completely cross-functional teams are now all aligned behind the ethos of collaboration and customer value as the ultimate measure of success, right?
Or… something like that. This is, of course, the quintessential DevOps story we’re told, but for most of us, our software release reality probably doesn’t quite match the tale. In this talk, we’ll cover some industry examples of various hurdles real-world organizations struggle with while delivering software in a DevOps world and specific solutions forged from the _one_ discipline still seldom discussed in many DevOps deployments: good, ol’ fashioned build and release engineering.
Microservices for the Enterprise: Myths vs. Reality
Marc Hornbeek, Principal Consultant – DevOps, Trace3
Microservices offer compelling benefits for accelerating agility, flexibility, and quality. The concept of building and releasing applications based on contained, bounded architectural components is as attractive and logical as the idea of a Lego brick that can be used to build complex systems without regard to the purpose of the ultimate system itself.
While we “dream small to go big”, the reality of microservices for enterprise use cases is not as simple. We’re told microservices are not supposed to depend on each other – but real-world services often behave a bit differently… We’re told they should be easier to manage – but are they really?
Decomposing monolithic legacy applications to microservices, re-architecting components of your application to be services-based, and maintaining business continuity of services comprised of multiple micro-components — all come with unique challenges. From unavoidable dependencies no one tells you about, to testing challenges, monitoring, and tooling – this talk covers some of the myths we often hear about microservices, and how they play out IRL for enterprise applications. We’ll share some “gotchas” to keep in mind and proven patterns to help organizations get the most out of moving to microservices – without risking availability, manageability or quality of service.
Starting and Scaling DevOps in the Enterprise
Gary Gruver, Founder & CEO, Gary Gruver Consulting
More and more large companies worldwide are excited about DevOps and the many potential benefits of embarking on a DevOps transformation. The challenge many of them are having, however, is figuring out where to begin and how to scale DevOps practices over time in large enterprises.
This presentation focuses on how to analyze your deployment pipeline to target your first improvements on the largest inefficiencies in software development and deployment. It also explores the different approaches necessary for deployment pipelines coordinating the work of small teams versus what is required for coordinating work across very large and complex organizations with many teams.
Electric Cloud Panels at DOES17
DevOps In The Enterprise: The Analyst Outside-In Perspective
Hosted by Sam Fell, VP of Marketing, Electric Cloud
Sam will be moderating this analyst panel featuring Robert E. Stroud, Principal Analyst, Forrester Research and Torsten Volk, Managing Research Director for Hybrid Cloud, the Software Defined Data Center, Machine Learning and Cognitive Computing at EMA Research.
DevSecOps: It’s Not Me or You, It’s WE!
Hosted By Alan Shimel, Editor-in-Chief, DevOps.com
Alan will be moderating this panel featuring Anders Wallgren, CTO, Electric Cloud; Robert E. Stroud, Principal Analyst, Forrester Research; John Willis, Vice President of DevOps and Digital Practices, SJ Technologies; Caroline Wong, VP, Cobalt; Shannon Lietz, DevSecOps Leader and Director, Intuit; and Paula Thrasher, Director, Digital Services, CSRA.
Electric Cloud DevOps Experts at DOES17
Architecting Your App and Your Pipeline for Continuous Delivery – 10 DO’s for Successful DevOps
Anders Wallgren, CTO
Software and pipeline architecture matters — or “Hope triumphs over experience every time.”
A recurrent theme in the software industry is the hype around each new technology that comes down the pike. The latest ‘shiny new thing’ will finally get us to the point where we can have the Jetsons’ flying cars and live-in robot maids.
Surely, my virtualized, containerized, cloud-nativized, artificially-intelligent, big-data, machine-learned application will practically write itself!
The truth is the architecture of both your application and your delivery pipeline itself matters A LOT. Often more than the technologies you choose to use.
To be sure, some new technologies have massive beneficial impact, but we often forget that it’s garbage-in, garbage-out.
This talk will explore how solid architecture of your application and the design of your pipeline allow new technologies to be used to maximum effect.
We’ll review 10 DO’s and proven patterns for architecting your app and your pipeline to enable effective DevOps and achieve Continuous Delivery.
- Some new (and old) technologies and how they do/don’t help
- Best practices for loosely coupled architectures & teams and their role
- Best practices for application architecture to allow for scale and easier updates – for both monolithic applications and microservices.
- How to ensure your pipeline and app architecture support HA, DR and business continuity
- Tips for scaling delivery pipelines across the organization and supporting self-service, vetted automation
- Tips for incorporating security and compliance checks as part of your pipelines
- How to future-proof your design
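As a small illustration of the security-and-compliance DO above, here's a hedged Python sketch of a pipeline gate that blocks promotion when a scan reports findings at or above an agreed severity. The findings format, severity ranks, and threshold policy are assumptions for the sketch, not any specific scanner's output or ElectricFlow's API.

```python
# Illustrative security gate for a delivery pipeline: block promotion
# when scan findings reach an agreed severity threshold. The findings
# structure and the policy are assumptions made for this sketch.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def security_gate(findings, block_at="high"):
    """Given scan findings like [{"id": "CVE-...", "severity": "high"}],
    return (passed, blocking_findings)."""
    threshold = SEVERITY_RANK[block_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    # The pipeline stage fails (and stops promotion) if anything blocks.
    return (len(blocking) == 0, blocking)

passed, blockers = security_gate(
    [{"id": "CVE-2017-0001", "severity": "medium"},
     {"id": "CVE-2017-0002", "severity": "critical"}])
# passed is False: the critical finding stops the release stage.
```

Running the gate on every pipeline run, rather than as a late manual review, is what "incorporating security and compliance checks as part of your pipelines" looks like in practice.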
Surviving the “Script-apocalypse”
Avan Mathur, Product Manager, ElectricFlow
Are your teams busy daisy-chaining spaghetti scripts?
Copy/pasting configuration code from one giant monitor to another?
Are some of these scripts so long (and the person who wrote them long gone) that you don’t even know what they do – but are too scared not to go through the motions?
Are you then staring at the screen, waiting for the cryptic script to finish running, hoping it doesn’t conflict with the recent upgrade you had to your environment?
Are you then opening your Runbook at page #72, to find the next script you need to copy – on your way to releasing this new update?
And so on…
Scripts are not automation. But scripts are pretty much unavoidable in DevOps. From CI to provisioning VMs, test automation, deployment, and monitoring – scripts are everywhere.
Large organizations often experience a “Script-apocalypse”, where teams are buried in – and spend a large portion of their time attending to – sprawling, spaghetti, snowflake, nested, ancient scripts that seem to have a life of their own…
This problem is aggravated not just by the sheer scale of teams and releases that enterprises need to support, but because many of those involve legacy applications and legacy IT processes. This makes DevOps script sprawl a key bottleneck to achieving predictable IT processes and accelerating your delivery pipeline. While we don’t want to throw the baby out with the bath water, maturing your use of DevOps scripts is critical to streamlining your processes and scaling DevOps throughout the organization.
This talk covers best practices and emerging patterns – covering DevOps automation, Pipeline design and Organizational design approaches – gathered from large enterprises that have managed to climb out of scripting hell. Come learn tips and hard-won lessons for surviving your own “script-apocalypse”!
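One common pattern for climbing out of scripting hell – which the talk's themes of DevOps automation and pipeline design point toward – is to stop copy/pasting scripts from runbooks and instead wrap each one as a named, registered pipeline step with captured output and a recorded result. Here's a minimal Python sketch; the registry, the step names, and the wrapped command are all hypothetical illustrations, not a real tool's API.

```python
# Hypothetical pattern: turn ad-hoc scripts into named, tracked
# pipeline steps. Instead of page #72 of a runbook, a team invokes
# a registered step and gets a recorded result back.
import subprocess

STEP_REGISTRY = {}

def step(name):
    """Decorator that registers a function as a reusable pipeline step."""
    def register(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return register

@step("cleanup-temp")
def cleanup_temp():
    # A legacy shell one-liner becomes a tracked step with captured
    # output ("echo" stands in for the real script being wrapped).
    return subprocess.run(["echo", "cleaning temp dirs"],
                          capture_output=True, text=True)

def run_step(name):
    """Run a registered step and record its outcome."""
    result = STEP_REGISTRY[name]()
    return {"step": name, "exit_code": result.returncode}
```

The script itself doesn't go away – but it now has a name, an owner, logged output, and a single place to change, which is the maturing the abstract calls for.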
Releasing Product is THE Killer Feature!
Wesley Pullen, Chief DevOps Strategist
Regardless of how hard your teams work on valuable software updates, no value is delivered until it is running safely in production. Thinking about it that way, your delivery pipeline itself is THE killer feature, and successful DevOps adoption is the best way to drive your business!
The maturity, speed and quality of your delivery processes have become a critical competitive advantage. IT leaders are tasked with enabling their teams to “live and breathe” the pipeline and its impact on the business – to have all stakeholders converge on this shared “path to production”, and collaborate to continue to enhance and accelerate their DevOps processes to further the business.
In this talk we’ll share key patterns used by successful organizations to enable, measure and scale DevOps success in the enterprise by creating a delivery pipeline that transcends technology – and translates to great culture, employee satisfaction, and continuous momentum for the business.
- Data-Driven DevOps: 4 Use Cases and Practical Applications
Ken McKnight, Solutions Architect
- How Financial Services are Leveraging Legacy IT Investments When Transitioning to DevOps
Manuel Schuller, EMEA Technical Director
- BizDevOps: Using KPIs to Unlock a Common Language
Mark Sutton, Director, DevOps Solutions
- Baking Security into your Release Pipeline: Start Here
Shozab Naqvi, Solutions Architect
- Best Practices for DevOps-Ready Infrastructure Management and Automation
Angelo Lynn, Solution Engineer
- Process-as-Code: Real-World Examples that Scale
Marco Morales, Senior Solutions Architect
- Best Practices for Model-Driven Approach to Application Release
Chris Doucet, Solutions Engineer
- Managing Microservices Delivery at Scale – 5 Key Challenges
Anand Ahire, General Manager, DevOps Release Automation
- Drive Faster Feedback and Higher Quality with Containerized Test Acceleration
Mohan Dattatreya, General Manager, Acceleration Solutions
Electric Cloud Training Sessions at DOES17
- ElectricFlow Product Training – Held at the conference venue, the Hilton San Francisco Union Square, on Saturday, November 11th and Sunday, November 12th. This introductory course is targeted at anyone who will be using or implementing ElectricFlow. The training provides a high-level overview of the ElectricFlow platform and focuses on the most commonly used Application Release Automation capabilities, including Applications, Pipelines, and Releases. Hands-on labs are incorporated throughout the training so students can gain experience working with the solution.
- Executive Workshop Led by Gary Gruver: Leading the Transformation – “Applying Agile and DevOps Principles at Scale” – This half-day training is held the day following the conference, on Thursday, November 16th, at the Hilton San Francisco. The workshop will focus on the changes that executives are uniquely positioned to address and that will have the biggest impact on the business. Instead of the typical transformation that focuses on improving the effectiveness of individual teams, this approach focuses executives on engaging their management teams to improve coordination across teams with DevOps approaches. Furthermore, the workshop will provide an in-depth review of DevOps, the different practices, and the different inefficiencies they were designed to address. It will highlight how and why DevOps principles can and should be different for coordinating the work of large versus small teams.