What would you do with 200 compute cores spread across 57 hosts?

As a Product Manager, I find customer stories and end-user anecdotes to be the best input for appreciating the work you are doing and the value you provide to the world. They are also great for understanding where the market is heading and what unsolved problems exist out there that need to be addressed and solved next.
To that point, I try to meet and talk to our ElectricAccelerator customers and end-users as often as possible. I recently met with a customer of ours in the telecom/networking equipment industry, who gave me some interesting insights into how they use their ElectricAccelerator-powered build cloud to achieve dramatic acceleration and developer productivity, and in doing so solve some of their main internal organizational problems around efficiency and the adoption of agile principles.

[Image: Crossing out problems and writing solutions on a blackboard.]

This single centralized customer deployment manages roughly 500 cores of compute infrastructure in their build cloud, serving on the order of 2,500-3,500 software builds per day. Interestingly, due to the nature of their build and the performance they get out of ElectricAccelerator, they have configured the environment to allow only 4 concurrent builds at any given time (with all other instantiated builds being queued, waiting for available compute resources). The reason is simple queuing theory: when optimizing for latency rather than throughput, capping concurrency ensures that the average runtime of any single build is minimized.
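The latency-versus-throughput trade-off behind that 4-build cap can be sketched with a toy model. The 500-core figure comes from the post; the per-build workload, and the assumption that a build scales near-linearly with the cores it receives, are illustrative assumptions, not measurements of this customer's environment:

```python
# Toy model (hypothetical numbers): each build needs W core-seconds of
# work and scales near-linearly, so wall-clock time = W / cores-it-gets.
CORES = 500     # total cores in the build cloud (from the post)
W = 50_000      # core-seconds of work per build (an assumed figure)

results = {}
for k in (1, 4, 20):                  # number of concurrent builds
    cores_per_build = CORES / k
    runtime = W / cores_per_build     # wall-clock seconds per build
    builds_per_hour = 3600 * k / runtime
    results[k] = (runtime, builds_per_hour)
    print(f"{k:2d} concurrent: {runtime:6.0f}s/build, "
          f"{builds_per_hour:4.0f} builds/hour")
```

Under this idealized perfect-scaling assumption, throughput is identical at every concurrency level, while per-build latency grows linearly with the number of concurrent builds, which is exactly why queuing builds behind a small concurrency cap minimizes average build runtime.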

This customer has an "embarrassingly parallel and scalable" build structure. It is so parallel that properly visualizing it becomes a challenge all by itself, and I admit that, from a visualization standpoint, the current version of ElectricInsight does not do the build justice. Above, the live Blinkenlight build visualization shows conceptually how ElectricAccelerator is able to parallelize and schedule the workload across a large number of cores and distributed hosts – in this customer's case, 200+ cores across 57 distributed build hosts.

By doing so, as the below graph shows, they are able to reduce the runtime of the build from a serial single-core ~1h55m47s down to ~1m40s – a 64x acceleration, or a runtime reduction of 98.5%!
It's worth noting that prior to adopting ElectricAccelerator, this software development organization was suffering from hour-long builds for developers and integrations throughout their development lifecycle.
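As a sanity check, the acceleration follows directly from the two quoted runtimes. Note that both times are themselves rounded, which is why this back-of-the-envelope figure lands slightly above the quoted 64x:

```python
# Runtimes as quoted in the post (both approximate/rounded).
serial = 1 * 3600 + 55 * 60 + 47   # 1h55m47s -> 6947 s
accelerated = 1 * 60 + 40          # 1m40s    ->  100 s

speedup = serial / accelerated                   # roughly 64-70x
reduction_pct = 100 * (1 - accelerated / serial) # roughly 98.5%
print(f"~{speedup:.0f}x speedup, {reduction_pct:.1f}% runtime reduction")
```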


How is this at all possible? What is it about the nature of this particular build that allows such massive scalability, distribution and parallelization? Looking at the below "Job time by type" report, along with some other analysis data from the build environment, a few things stand out:

  • ~93%, a significant portion of the total time spent in the build, is compilation workload – 9,264 compile steps with an average runtime of 2.2s, by its nature all very parallel.
  • Low (but not insignificant) parse time overhead in the build – 2.25%, or ~156s.
  • The produced output from the build is fairly small, only ~0.5 GB of output data that needs to be written back to disk.
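A "Job time by type" style breakdown can be approximated from raw per-job timing data by summing durations per job type. The records below are illustrative stand-ins, not the customer's actual annotation data, and the parsing of a real ElectricAccelerator annotation file is omitted:

```python
from collections import defaultdict

# Illustrative (job_type, duration_seconds) records -- hypothetical data
# shaped loosely like the report above, not the customer's real numbers.
jobs = ([("compile", 2.2)] * 9264
        + [("parse", 0.5)] * 312
        + [("link", 30.0)] * 8)

# Sum total time per job type.
totals = defaultdict(float)
for job_type, duration in jobs:
    totals[job_type] += duration

# Report each type's share of the total, largest first.
grand_total = sum(totals.values())
for job_type, t in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{job_type:8s} {t:9.1f}s  {100 * t / grand_total:5.1f}%")
```

With a breakdown like this, it is immediately clear when a build is compile-dominated and therefore a strong candidate for wide parallelization.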


So, from an ElectricAccelerator Product Management perspective, what were the learnings and takeaways from this customer interaction and the analysis of customer data?

  • A clear reinforcement that Speed and Performance matters, and the goalpost for what’s good enough keeps moving – if there are ways to get build times below 2 minutes, organizations will work to make that happen, no matter the starting point!
  • Even for a build this fast, there are additional features and capabilities that could be enhanced or implemented in ElectricAccelerator to achieve even more performance. Stay tuned for the upcoming release of ElectricAccelerator 7.0, where lots of additional performance-enhancing features are being added!


David Rosen

David Rosen is a Solutions Engineer turned Product Manager turned Ecosystem Solutions Manager at Electric Cloud, currently focused on technical and strategic ISV partnerships in the Developer Productivity Tools, Continuous Delivery and DevOps space. With 12+ years of experience in the Enterprise Developer Tools space, David brings a wealth of hands-on technical experience and knowledge of how software is developed and delivered at scale, across various industries, technical domains and geographical regions. David has held managing and engineering positions at UIQ Technology, Nokia and Telelogic. David holds an MS degree in Information Technology from Uppsala University, Sweden.
