The last word on SCons performance

My previous look at SCons performance compared SCons and gmake on a variety of build scenarios — full, incremental, and clean. A few people suggested that I try the tips given on the SCons ‘GoFastButton’ wiki page, which are said to significantly improve SCons performance (at the cost of some accuracy, of course). Naturally, I felt that I had to do one last follow-up exploring this avenue. And since that meant I would already be running a bunch of builds, I figured I’d try out SCons’ parallel build features too. My findings follow.

Can SCons “GoFast”?

You can read all about the setup in the previous post, so I’ll just jump straight to the results. According to the “GoFastButton” recommendations, --max-drift=1 --implicit-deps-unchanged will “run your build as fast as possible”, so that’s what I used. In all cases, I did an initial, untimed from-scratch build first, to generate the initial dependency graph, then I ran a second timed build. After that timed run, I ran a clean build, and finally I ran the full build again, this time with -j 2, to evaluate the impact of SCons’ parallel build features.
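For intuition, the idea behind --max-drift can be sketched in a few lines of Python. This is my own simplified illustration (the function name needs_rescan is hypothetical, not SCons code): if a file’s timestamp matches the cached one and the file is older than the drift window, the expensive content hash can be skipped.

```python
import os
import time

def needs_rescan(path, cached_mtime, max_drift=1):
    """Simplified sketch of the --max-drift heuristic (not SCons' actual code).

    With a small max_drift, a file whose timestamp matches the cached one
    and which is older than max_drift seconds is trusted as unchanged,
    so the expensive content (MD5) hash is skipped."""
    mtime = os.path.getmtime(path)
    if mtime == cached_mtime and (time.time() - mtime) > max_drift:
        return False  # timestamp is trustworthy: skip rehashing
    return True       # file is new, recent, or changed: rehash contents
```

The smaller the drift window, the more files are judged by timestamp alone, which is why the flag trades a little accuracy for speed.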

Contrary to my expectations, the GoFast settings had relatively little impact over much of the test range — only about 5-10% faster than without those flags. Only the very largest build showed any significant impact, with a 25% improvement. Unfortunately, that impressive result is more likely due to the fact that SCons uses less memory with the GoFast settings enabled. If you recall from the previous tests, with 50,000 source files SCons’ memory footprint was a hefty 2,023 MB — enough to force my test machine to start swapping. With the GoFast settings, SCons used “only” 1,838 MB — still an awful lot of memory, but just small enough to avoid thrashing the system, with the end result being a substantially improved build time.

Building in parallel had a more substantial impact — reducing build times by about 30% on the largest build (compared to a serial SCons build with the GoFast settings enabled). That’s not as good as I had hoped (on a large, relatively “flat” build such as this, I expected the build to parallelize very well), but it’s not terrible. Here are the complete results:
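Conceptually, -j 2 just runs independent steps of the dependency graph on a two-worker pool. A minimal Python sketch of that idea (the tasks here are stand-in callables, not real compile commands):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(tasks, jobs=2):
    """Run independent build steps on a fixed-size worker pool --
    roughly what 'scons -j 2' does for a flat dependency graph.
    (tasks are stand-in callables, not real compile commands.)"""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(lambda task: task(), tasks))
```

On a perfectly flat graph with no shared bottleneck you would expect close to a 2x speedup from two workers; overhead in the scheduler (or memory pressure) eats into that in practice.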

[Chart: SCons full build performance]

So, GoFast seems to be a bust on full builds. It’s definitely better than vanilla SCons, but still nowhere near as fast as gmake.

Things look a little better on “one-touch” incremental builds, though, where the GoFast settings cut build times by about 30% across the board:

[Chart: SCons incremental build performance]

Ironically, the most impressive results are on clean builds (scons -c). The GoFast settings cut build times by about 40% at the low end of the test range, and by more than 50% at the high end:

[Chart: SCons clean build performance]

To my amazement, SCons with the GoFast settings actually beats gmake on clean builds. My guess is that this is because SCons handles file deletion in-process, while gmake must invoke a separate process (rm) to do the same work.
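The difference is easy to see in miniature. Here is a hypothetical comparison of the two strategies — my own illustration, not SCons or gmake internals: deleting in-process costs one syscall per file, while shelling out pays fork-and-exec overhead on every invocation.

```python
import os
import subprocess

def clean_in_process(paths):
    """Delete files the way SCons can: one unlink syscall per file,
    with no extra processes created."""
    for p in paths:
        os.remove(p)

def clean_with_rm(paths):
    """Delete files the way a Makefile clean rule might: fork and exec
    an external 'rm' for each file, which is far more expensive."""
    for p in paths:
        subprocess.run(['rm', p], check=True)
```

Both functions end in the same state; on a build with tens of thousands of output files, the per-file process creation in the second version is where the time goes.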

That’s all folks!

That’s it for my analysis of SCons performance. Thanks to everybody who offered ideas for improving my benchmarks! I hope you found this series of posts interesting.

 




Eric Melski

Eric Melski was part of the team that founded Electric Cloud and is now Chief Architect. Before Electric Cloud, he was a Software Engineer at Scriptics, Inc. and Interwoven. He holds a BS in Computer Science from the University of Wisconsin. Eric also writes about software development at http://blog.melski.net/.