There are a lot of general theories and principles available on the concept of Continuous Delivery – and how an ideal product development organization should strive to continuously deliver release-ready product to end-users on every change of software, hardware, configuration or data. There is also an abundance of available material and practical recommendations on how to make Continuous Delivery work in the world of hosted cloud- and web-based product development, i.e. when you as a product development organization typically own and manage the end-to-end chain from development to operations.
Every product development environment is complex in its own right. In this post I aim to explain and discuss some challenges and hurdles in applying Continuous Delivery theories and principles to embedded product development. The embedded and intelligent systems markets are huge, representing opportunities in a trillion-dollar market for the organizations that succeed. For the sake of this discussion, this market includes but is not limited to manufacturers and suppliers in the following industries: automotive, aerospace and defense, medical devices, mobile and consumer electronics, networking and telecom infrastructure, semiconductor, and energy.
If you are exploring a Continuous Delivery implementation for your embedded product development organization, you will find it difficult at the time of this writing to find much relevant practical reference material – especially if you’re looking at this from an enterprise-scale perspective. Let me point out one great reference, well worth the investment to read, learn and take inspiration from: “A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware”. This book discusses the transformational journey of the embedded printer development team at HP, starting from what I would call a very common general baseline in terms of the problems and challenges they were facing. The authors present some really interesting thinking and practical solutions, backing up the results with very impressive before/after statistics and metrics.
It’s important to note the complexity of realizing Continuous Delivery; it will not be easy and will require a lot of hard work – regardless of your industry, working environment within your organization, and baseline starting point. You need to recognize and understand that a successful Continuous Delivery project and implementation is a long-term, complex and ambitious transformation of your organization. It is not a turnkey solution or tool, and involves all dimensions of your R&D organization. To succeed, you need a solid platform consisting of tooling and infrastructure, process and configuration management, and finally people and change management. Learning from others that have walked the same path and are willing to share their experiences will greatly help you avoid a number of common mistakes.
While the absolute majority of the general theories and concepts defining Continuous Delivery are very much valid for embedded product development organizations as well, the typical technical environments are vastly different from what’s commonly being referenced. I will cover each aspect individually later in this post – but legacy, infrastructure, lead times, and compliance are all very common challenges (in many cases intertwined with each other) that need to be addressed if you are to succeed with your Continuous Delivery implementation. Finally, it is also important to understand that the end goal of a Continuous Delivery implementation for an embedded product development organization is typically very different from what it would be for a product development organization based on web or cloud technologies.
1. Legacy

The typical norm for embedded product developers is that there are large if not enormous amounts of existing IP and technology that new and future products depend on. Across the hundreds of different embedded development teams I’ve interacted with in the past 10 years of working in this industry, I can only think of a couple that have had the luxury of starting from a clean sheet with their design, implementation and organization.
The practical consequences of this legacy are multi-fold and very complex – product architectures, massive codebases, team organizations, build systems, test environments… As an example, one embedded product development organization that I am actively working with is currently managing a growing legacy codebase of 130 MLOC that’s been around for two decades, with no signs of stabilizing or slowing down in terms of growth – the codebase has in fact grown by almost 50% in the last two years!
Retro-fitting this legacy into a Continuous Delivery model is not easy and is likely to be expensive, but is almost guaranteed to be a worthwhile effort, especially if you have a longer-term vision and intend for your product to stay competitive in the future.
Enabling the architecture of your legacy embedded system to fit a model of Continuous Delivery is challenging and often very cumbersome – as the product architecture is the natural guiding principle for most product development teams in how they organize themselves and their work. More often than not, multiple layers of platform, framework and application components are deeply intertwined with each other, resulting in complex monolithic codebases. Rarely do I find that these components and parts of the system can be handled individually in a way that allows for separate delivery and release streams – which is almost a necessity for a successful and efficient realization of Continuous Delivery.
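To make the "separate delivery streams" criterion concrete: a component can only get its own release stream if it sits in an acyclic dependency graph, while dependency cycles force components to be built, tested and released together. The sketch below (a hypothetical example, not from any real product – the component names and dependency edges are made up) uses Tarjan’s strongly-connected-components algorithm to surface such cycles:

```python
# Hypothetical sketch: group components of a codebase into release units.
# Any group larger than one is a dependency cycle that blocks separate
# delivery/release streams for its members.

def find_release_units(deps):
    """deps: {component: [components it depends on]} -> list of SCC groups."""
    index, low, on_stack, stack = {}, {}, set(), []
    groups, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in deps.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a strongly connected group
            group = []
            while True:
                w = stack.pop(); on_stack.discard(w); group.append(w)
                if w == v:
                    break
            groups.append(group)

    for v in deps:
        if v not in index:
            strongconnect(v)
    return groups

deps = {
    "app":       ["framework"],
    "framework": ["platform", "app"],   # back-edge: framework calls into app
    "platform":  ["hal"],
    "hal":       [],
}
for group in find_release_units(deps):
    if len(group) > 1:
        print("cycle blocks separate releases:", sorted(group))  # ['app', 'framework']
```

Running such a check in the pipeline gives an architecture team a concrete, trackable list of the couplings that have to be broken before components can move to independent delivery streams.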
2. Infrastructure

I have no hard data to back up the following claims, but if there ever were such rankings, I’d say that embedded product developers are likely to top both the “compute cores per capita” and the “test environment cost per capita” lists. So compared to other product developers, it’s fair to say that embedded developers stand out in a couple of ways with respect to their development infrastructure needs.
- Satisfying embedded developers’ insatiable hunger for compute infrastructure.
Building an embedded device is a complex project involving both hardware and software components, supported by massive amounts of compute infrastructure. Whether you’re integrating the various components of your system into an image that will run on the device, or are responsible for developing some specific functionality, you almost certainly have an insatiable need for more resources.

More concretely, most embedded software is implemented in the native C/C++ programming languages, notorious for their long and CPU-intensive build processes. The most obvious and common solution to optimize and accelerate build times today is to throw lots of hardware at the problem, parallelizing the build process across the available cores on the developer workstation or build server. As an interesting reference example, the Android platform build requires 48+ CPU cores on a single machine to maximize performance. One way of satisfying this need for loads of compute infrastructure is to buy and deploy large numbers of standard off-the-shelf servers – but with the continued growth of code that needs to be built and managed for any embedded product, you are setting yourself up for a costly and never-ending race against Dr. Gordon Moore!
(Other native programming language paradigms are emerging with promises to overcome some challenges with respect to build times – but it will take many years if not decades for any of these languages to reach mainstream popularity in the embedded software industry, if it ever happens.)
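The core idea behind `make -j N`-style parallel builds mentioned above can be sketched in a few lines: independent translation units are compiled concurrently, so wall-clock time approaches total work divided by available cores. This is an illustrative toy, not a real build system – the `compile_unit` function and the source file names are stand-ins (a real job would spawn the cross-compiler as a separate process per file):

```python
# Toy sketch of parallel compilation of independent translation units.
# Threads are used here for portability; a real build dispatches one
# compiler process per source file, which is what makes `-j N` scale.
from concurrent.futures import ThreadPoolExecutor
import os

def compile_unit(source):
    # Stand-in for an expensive, independent compile job (e.g. invoking
    # the cross-compiler on one .c file and producing one .o file).
    return f"{source}.o"

def parallel_build(sources, jobs=None):
    jobs = jobs or os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        # map preserves input order, like a Makefile's object list
        return list(pool.map(compile_unit, sources))

objects = parallel_build(["main.c", "hal.c", "driver.c"])
print(objects)  # ['main.c.o', 'hal.c.o', 'driver.c.o']
```

The limit of this approach is exactly the race against Moore’s law described above: once the dependency graph is saturated, the only way to go faster on a single machine is more cores, which is why distributed builds and caching (discussed under lead times below) become attractive.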
As your Continuous Delivery implementation scales and cycle times need to be shortened, your development teams will demand even more computational power to properly serve the increased load of software builds, tests and analysis jobs. These days, supplying the necessary compute power for some of these workloads while preserving economies of scale is a complex but fairly well understood problem – with centralized development clouds and dedicated backend high-performance compute infrastructure being common ways to satisfy your needs for large-scale efficient software builds, analysis and emulator processing.
- Managing automation of physical target-based testing.
Another major difference for embedded developers is the problem of how to efficiently integrate and manage automation of physical target-based testing. This need for proper, automated testing on the actual embedded hardware is pressing and something I don’t expect to ever go away – I have yet to hear of any embedded product development team being allowed to release a product without testing on the real physical embedded hardware. And if you rely on manual configuration and deployment of your physical targets, it’s unreasonable to expect an efficient and always-available Continuous Delivery environment.

These physical targets are also typically custom hardware, very expensive and quite often in some prototype mode, and thus prone to be fragile. Given their cost and maturity, I have never heard of a product development team with an abundance of these targets, so it is of utmost importance to maximize utilization of the ones in your possession. Solutions and alternatives exist to reduce the dependency on the actual physical targets, such as sophisticated full-system simulators that can run unchanged production binaries in managed simulated environments.

The final aspect of automating physical target-based testing in your Continuous Delivery implementation is the actual technical integration: how to properly interact with and orchestrate the System Under Test (SUT). The details of this topic are very specific to the target in question, deserve a technical blog post or paper in their own right, and are out of scope for this discussion.
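The utilization point above usually ends up as some form of target pool: CI jobs lease a scarce physical board, run against it, and are forced to return it even when the test fails. The sketch below shows only that leasing discipline – the board names are made up, and the flash/power-cycle/execute steps a real lab would perform are reduced to a callback:

```python
# Hypothetical sketch of a target pool for scarce physical test boards.
# Jobs block until a board is free, and the lease is always returned,
# even if the test raises - boards are too scarce to leak.
import queue

class TargetPool:
    def __init__(self, boards):
        self._free = queue.Queue()
        for board in boards:
            self._free.put(board)

    def run_on_target(self, image, test, timeout=60):
        board = self._free.get(timeout=timeout)  # block until a board frees up
        try:
            # A real lab would power-cycle the board, flash `image` over a
            # debug probe, execute the tests and collect logs via serial.
            return test(board, image)
        finally:
            self._free.put(board)                # always release the lease

pool = TargetPool(["board-a", "board-b"])

def smoke_test(board, image):
    return f"{image} passed on {board}"

print(pool.run_on_target("fw-1.2.3.bin", smoke_test))
```

The same interface is also a natural seam for substituting a full-system simulator for a physical board: jobs that don’t strictly need real hardware can be leased a simulated target instead, stretching the scarce physical inventory further.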
3. Lead times
Long lead times are detrimental for the productivity of any product development team, and making sure the end-to-end cycle time of the build-test-release workload is as short as possible should be a key priority for anyone implementing and scaling a Continuous Delivery environment.
As a concrete example of where the embedded market is today in terms of managing lead times, the Yocto Project is a ground-breaking, thriving and active community focused on providing a common framework for managing, creating and building custom embedded Linux devices. In my discussions with embedded developers currently using the Yocto Project, performance improvement stands out as the primary request or need.
As previously mentioned, most embedded developers currently rely on C/C++ for their software development. This has significant consequences with respect to lead times. If you compare typical baseline build and analysis lead times for the native C/C++ programming paradigm vs. managed environments such as Java and .NET, there is an order of magnitude of difference, which needs to be addressed in a successful Continuous Delivery implementation.
Fortunately there exist mature and sophisticated solutions for build, test and analysis acceleration that can reduce lead times by up to 90-95%, which could mean bringing hours of runtime down to a few minutes if not seconds. If your current build, test and analysis processes take anywhere longer than it takes your developers to refill their cup of coffee, my recommendation would be to prioritize this as a key improvement to address. Accelerated builds and tests will pay dividends not only for your Continuous Delivery implementation but also for your developers in their day-to-day edit-compile-test cycles.
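One family of such accelerators is build caching (ccache is a well-known example for C/C++): hash the compiler input, and on a hit reuse the stored object file so the expensive compile step only runs for genuinely new inputs. The sketch below shows the principle only – the `compile_fn` stand-in replaces a real compiler invocation, and real tools hash the fully preprocessed input plus compiler flags:

```python
# Sketch of the content-hash caching idea behind C/C++ build accelerators.
# Identical inputs produce identical objects, so a cache hit can skip the
# compile entirely - which is where the large lead-time reductions come from
# in incremental and rebuild-heavy CI workloads.
import hashlib

class CompileCache:
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def build(self, source_text, compile_fn):
        key = hashlib.sha256(source_text.encode()).hexdigest()
        if key in self._store:
            self.hits += 1                  # cache hit: no compile needed
            return self._store[key]
        self.misses += 1
        obj = compile_fn(source_text)       # stand-in for the real compiler
        self._store[key] = obj
        return obj

cache = CompileCache()
compile_fn = lambda src: f"obj({len(src)} bytes)"
cache.build("int main(void){return 0;}", compile_fn)  # miss: real compile
cache.build("int main(void){return 0;}", compile_fn)  # hit: returned instantly
print(cache.hits, cache.misses)  # 1 1
```

The same hashing idea also underpins distributed build farms: because the key is derived purely from inputs, any machine in the cluster can populate or consume the shared cache.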
4. Compliance

Many embedded developers, in e.g. the automotive, aerospace, defense and medical device industries, need to meet rigorous compliance, security, safety and auditing standards in order to ship products to market – some example standards being MISRA, DO-178B/C, ISO 26262 and IEC 62304. Verifying these regulatory requirements is a complex, costly and time-consuming task, which obviously has negative implications for anyone trying to implement an efficient Continuous Delivery solution.
Fortunately there exist integrated automation solutions to reliably and securely manage policies and compliance for auditing purposes, as well as acceleration mechanisms that will help you run your comprehensive security analysis and testing faster and more often.
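In pipeline terms, such policy management usually boils down to an automated gate: static-analysis findings are checked against the configured compliance policy on every build, and the pipeline is blocked when the policy is violated. The sketch below is hypothetical – the rule IDs, severity categories and thresholds are illustrative placeholders, not taken from MISRA or any other real standard:

```python
# Hypothetical compliance gate: block the pipeline when analysis findings
# exceed the allowed count per severity category. Running this on every
# build keeps the product continuously "shippable" from an audit viewpoint
# instead of deferring compliance verification to a late release phase.

def compliance_gate(findings, policy):
    """findings: list of (rule_id, severity) tuples.
    policy: {severity: max allowed findings}; unknown severities default to 0.
    Returns (ok, violations)."""
    counts = {}
    for _rule, severity in findings:
        counts[severity] = counts.get(severity, 0) + 1
    violations = {s: n for s, n in counts.items() if n > policy.get(s, 0)}
    return (len(violations) == 0, violations)

# Illustrative policy: advisory findings tolerated up to a budget,
# required/mandatory findings never tolerated.
policy = {"advisory": 10, "required": 0, "mandatory": 0}
findings = [("Rule-X", "advisory"), ("Rule-Y", "required")]

ok, violations = compliance_gate(findings, policy)
print("ship-ready" if ok else f"blocked: {violations}")  # blocked: {'required': 1}
```

Because the gate's output is machine-readable, the same data can be archived per build as part of the audit trail the standards require.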
5. The End Goal
When you hear of Continuous Delivery implementations at companies such as Facebook, Netflix, Etsy, Gap and FamilySearch it is important to understand that all of these companies serve their customers and users through hosted web and cloud solutions, where the companies themselves own and are responsible for the end-to-end development-to-production infrastructure. In this delivery model, it makes total sense to strive towards an incremental release and customer shipment of every product change.
The typical end goal of Continuous Delivery in the context of embedded product development is somewhat different, in that you most often won’t own or have any control over the final destination and end-user target environment. But don’t let the fact that you aren’t walking that extra mile to deliver incremental value to your end-users on every product change move the goal-posts for your embedded Continuous Delivery implementation. In this context I’d like to think of the goal of Continuous Delivery as the constant availability of a “shippable”, compliant, functional product, ready to be delivered to the market at any time at the push of a button.
Regarding the lack of control and ownership of the end-user embedded target environments, technology is changing the game here as well.
One example is the various Over-The-Air (OTA) mechanisms that exist today to automatically deliver upgrades to embedded devices, used for e.g. mobile phones and set-top boxes – even in cars like the new Tesla Model S! To avoid uncontrolled disruption of end-users’ usage and behavior, OTA-based product upgrades are rolled out at low frequency – and cannot be compared to how a modern website is constantly being upgraded on every change.
But with software becoming more and more important as the differentiating value proposition, and various forms of sophisticated wireless technology becoming more and more trusted as a secure and reliable bearer of data, I expect OTA-based upgrade mechanisms to continue to evolve and mature in the near-term future, broadening in use and applicability to most if not all embedded product industries. This will have an interesting effect on future embedded product development organizations, opening up the possibility of end-to-end Continuous Delivery, potentially with every change triggering an upgrade in the end-user target environment.
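The low-frequency, controlled rollouts described above are commonly implemented as staged (percentage-based) rollouts. One simple technique – sketched below with made-up device IDs and release names, not any vendor's actual scheme – is to hash each device ID into a fixed bucket, so raising the rollout percentage gradually exposes more of the fleet without reshuffling devices between stages:

```python
# Illustrative staged-OTA-rollout sketch: a deterministic hash maps each
# device into one of 100 buckets. A device in bucket 7 becomes eligible
# at the 8% stage and stays eligible at every later stage, which makes
# a rollout easy to pause, resume or roll back per stage.
import hashlib

def in_rollout(device_id, release, percent):
    digest = hashlib.sha256(f"{device_id}:{release}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

fleet = [f"device-{i:04d}" for i in range(1000)]   # hypothetical fleet
for percent in (1, 10, 50, 100):
    eligible = sum(in_rollout(d, "fw-2.0", percent) for d in fleet)
    print(f"{percent:3d}% stage -> {eligible} devices eligible")
```

Seeding the hash with the release name gives each release a different (but still deterministic) 1% canary population, so the same few devices aren't always first in line.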
Based on my experience, this post discussed some of the most notable and interesting differences and challenges in implementing Continuous Delivery in the context of embedded product development. Again, it’s important to recognize and understand that a successful Continuous Delivery project and implementation is a long-term, complex and ambitious transformation of your organization. It is not a turnkey solution or tool, and involves all dimensions of your R&D organization. To succeed, you need a solid platform consisting of tooling and infrastructure, process and configuration management, and finally people and change management.
Did I miss anything? Do you disagree? Or does it make total sense? Please let me know directly or post a comment below!
David Rosen