How scalable is SCons?

The marquee feature in ElectricAccelerator 5.0 is Electrify, a new front-end to the Accelerator cluster that allows us to distribute work from a wide variety of processes in addition to the make-based processes we have always managed. One example is SCons, an alternative build system implemented in Python with a small (compared to make) but apparently slowly growing market share. It’s sometimes touted as an ideal replacement for make, with a long list of reasons why it is considered superior. But not everybody likes it: some have reported significant performance problems, and even the SCons maintainers agree that SCons “can get slow on big projects”.

Of course that caught my eye, since making big projects build fast is what I do. And you can’t really practice Continuous Delivery of anything if you’re stuck waiting hours for your builds to run. What exactly does it mean that SCons “can get slow” on “big” projects? How slow is slow? How big is big? To satisfy my own curiosity, and so that I might better advise customers seeking to use SCons with Electrify, I set out to answer those questions. All I needed was some free hardware and some time. Lots and lots and lots of time.

The Setup

My test environment was as follows:

  • RedHat Desktop 3 (kernel version 2.4.21-58.ELsmp)
  • Dual 2.4 GHz Intel Xeon with hyperthreading enabled
  • 2 GB RAM
  • SCons v1.2.0.r3842
  • Python 2.6.2

The test build consists of (a lot of) compiles and links. Starting from the bottom, we have N C files, each with a unique associated header file. The C files and headers are spread across N/500 directories in order to eliminate filesystem scalability concerns. Both the C files and the header files are trivial: each header includes only stdio.h; each C file includes its associated header and a second, shared header, then defines a trivial function. Objects are collected into groups of 20 and stored in a standard archive, and every 20th object is linked into an executable along with the archive. The build is generated using a script written by one of our talented QA engineers for testing Accelerator.
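For reference, a tiny generator for a layout like this might look as follows. This is a hypothetical sketch, not the QA script mentioned above; the file and directory naming scheme is made up for illustration.

```python
import os

def generate_tree(root, n_c_files, files_per_dir=500):
    """Generate a synthetic build tree: n_c_files trivial C files, each
    with its own header, spread across n_c_files/files_per_dir
    directories, plus one shared header at the root."""
    os.makedirs(root, exist_ok=True)
    with open(os.path.join(root, "shared.h"), "w") as f:
        f.write("#include <stdio.h>\n")
    for i in range(n_c_files):
        d = os.path.join(root, "dir%04d" % (i // files_per_dir))
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "f%06d.h" % i), "w") as f:
            f.write("#include <stdio.h>\n")
        with open(os.path.join(d, "f%06d.c" % i), "w") as f:
            # Each C file includes its own header plus the shared header
            # (compile with -I pointing at the root directory).
            f.write('#include "f%06d.h"\n#include "shared.h"\n' % i)
            f.write("int func%06d(void) { return %d; }\n" % (i, i))

generate_tree("/tmp/scons_scale_demo", 1000)
```

With N = 1000 and 500 files per directory, this produces two source directories, 1000 C files, and 1001 headers (the per-file headers plus shared.h).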

Overall build time

The primary concern was naturally end-to-end build time for a from-scratch build. I used the standard Linux time utility to capture this data, and averaged the results from two runs (except for the build with 40,000 C files, because that just took too long):

[Figure: Overall build time]

That doesn’t look too good! In fact, that curve looks suspiciously like the function f(x) = x². Let’s plot that on our graph:

[Figure: Overall build time compared to x-squared growth]

That looks like a pretty close fit — so it seems that the build duration increases in proportion to the square of the number of input files. That’s bad news — as you can see, that very quickly adds up to outrageously long build times.
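A quick sanity check on that claim: if build time grows as the square of the input size, the slope between any two measurements on a log-log scale should come out to about 2. A small helper, shown here with made-up illustrative numbers rather than the measured data:

```python
import math

def loglog_slope(n1, t1, n2, t2):
    """Estimate the scaling exponent k in t ~ n**k from two
    (input size, duration) measurements."""
    return (math.log(t2) - math.log(t1)) / (math.log(n2) - math.log(n1))

# Hypothetical data points following t = c * n**2 exactly:
k = loglog_slope(5000, 100.0, 20000, 1600.0)
print(round(k, 2))  # 2.0 for perfectly quadratic growth
```

A slope near 1 would indicate linear scaling; anything near 2 is the quadratic behavior seen in the graph.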

Ruling out other causes

It’s possible that this performance degradation is due to some factor other than SCons. To rule out that possibility, I created a shell script that runs the exact set of commands, in the same order, that the SCons-based build ran, and timed the execution of that script. The work done by that script is the actual work of the build: the bare minimum that must be done to compile and link all of the source files. By definition, everything else is overhead. Let’s add that data to the graph:

[Figure: Overall build time compared to actual work]

As expected, the time needed for the actual work grows linearly in proportion to the number of C files in the build. That means that the performance degradation is not due to some other component of the system — if it were, we would have seen a similar problem with the simple shell script. Instead, the problem is clearly in SCons itself.

Comparing overhead to actual work

Now that we know the amount of time for the actual work, we can compute the amount of time spent on overhead introduced by SCons — that’s just the difference between the “overall build time” and the “actual work” lines in our graph. For example, with 40,000 C files, the SCons build time is about 4 1/2 hours; the actual work time is about 25 minutes. SCons is adding more than four hours of overhead to the build!

Let’s put that into terms that are a little easier to grok: rather than looking at the absolute numbers, we’ll look at the overhead as a percentage of the total build time.

[Figure: Overhead from SCons as a percentage of overall build time]

Even with only a few hundred files, SCons overhead represents 50% of the total build time; with 10,000 C files, SCons overhead represents 75% of the total build time; and with 40,000 C files, SCons overhead accounts for a whopping 90% of the total time!
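Those figures fall out of simple arithmetic. A sketch using the 40,000-file numbers quoted earlier (roughly 270 minutes total, about 25 minutes of actual work):

```python
def overhead_pct(total_minutes, work_minutes):
    """Overhead introduced by the build tool, as a share of total
    build time."""
    return 100.0 * (total_minutes - work_minutes) / total_minutes

# ~4.5 hours total vs. ~25 minutes of actual compile/link work:
print(round(overhead_pct(270, 25)))  # 91
```

So at 40,000 C files, more than nine-tenths of the wall-clock time is SCons itself rather than the compiler and linker.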

Memory usage

The final metric that I tracked was SCons memory utilization, using the built-in --debug=memory flag. This metric is of particular interest to me, since I’ve spent a lot of time streamlining Accelerator’s memory usage so that it can accommodate truly enormous builds (millions of compiles). After the disastrous build time results, I was relieved to see that memory usage seems to grow linearly with the number of source files in the build (NB: here I’m counting total source files, including both C files and headers, not only C files):

[Figure: SCons memory usage]

Unfortunately, although the growth is linear, the rate of growth is quite high: each additional source file adds more than 19,000 bytes (!) to the memory footprint. At that rate, SCons will exhaust the available memory address space for a 32-bit process at only about 110,000 total source files.
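That limit is a back-of-the-envelope extrapolation: dividing the usable address space by the observed per-file cost. Assuming the measured ~19,000 bytes per source file and roughly 2 GB of address space usable by a 32-bit Linux process:

```python
BYTES_PER_FILE = 19_000   # observed slope of the memory-usage line
ADDRESS_SPACE = 2 ** 31   # ~2 GB usable by a 32-bit Linux process

max_files = ADDRESS_SPACE // BYTES_PER_FILE
print(max_files)  # 113025, i.e. roughly the 110,000-file ceiling
```

In practice the ceiling would be somewhat lower, since the Python interpreter and SCons itself consume part of that address space before the first source file is scanned.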


Conclusions

These results paint a pretty grim picture for SCons: based on the overall build times, I can’t imagine anybody seriously using SCons for builds with more than a couple thousand files. And even if you were willing to put up with the long builds, the memory usage data indicates that SCons has a hard limit of around 110,000 total source files.

Are there any SCons experts out there able to explain why SCons seems to perform so badly here?


Build Acceleration and Continuous Delivery

Continuous Delivery isn’t continuous if builds and tests take too long to complete. Learn more about how ElectricAccelerator speeds up builds and tests by up to 20X, improving software time to market, infrastructure utilization, and developer productivity.

Eric Melski

Eric Melski was part of the team that founded Electric Cloud and is now Chief Architect. Before Electric Cloud, he was a Software Engineer at Scriptics, Inc. and Interwoven. He holds a BS in Computer Science from the University of Wisconsin.


27 responses to “How scalable is SCons?”


  2. Eric says:

    Have you asked your question to the SCons people? The system should be structured so you don’t rebuild the whole thing every time.

  3. Magnus 'matricks' Auvinen says:

    Years ago I started using scons and fell in love with how it does its build scripts. I was however quite quickly turned off by the overhead that the system has. Sure, it does everything very correctly, but I thought I could sacrifice some of that accuracy for speed.

    I started writing my own build system that I now call bam. It works in a similar way but is written in C, with Lua scripts as build scripts. I ran tests similar to the ones you did here, but with bam. I know that I made bam fast, but this kinda surprised even me. I did a test with 10k files, with two runs each of bam and sh. (No threading was enabled.) The sh file does the same work as bam does, except bam calculates dependencies etc.

    bam: 4m12.032s, 4m16.840s
    sh: 4m24.514s, 4m24.351s

    Bam is faster than sh, and it does all the dependency checking etc as well. Kinda scary :). It needs a bit more love, however, but there are a bunch of people who use it right now.

    • Eric Melski says:

      Can you explain that result? How is it possible that bam is faster than the straight-line shell script? Are you certain the comparison was fair (same outputs produced, etc)?

      As they say, if it sounds too good to be true, it probably is.

  4. Magnus 'matricks' Auvinen says:

    Thought it was weird as well, but I didn’t have time to investigate it further at the time. So when I got home and had some spare time, I looked into it more.

    They run the same commands, and they generate the same thing. So I thought maybe sh does something quite expensive for each script line or something. So I wrote a C application that just ran all the commands via the system() function; bam uses system() as well to call the compilers and tools. That application got the same time result as sh, so something was very wrong, but bam produces the right output.

    Then it hit me. After fiddling with a BIOS setting I got more realistic results. I didn’t have time to do more than one run, but here are the results:

    bam: 5m14.131s
    sh: 4m32.506s

    So what BIOS option did I change? I switched off the TurboBoost functionality of my i7 CPU. Some corner case where the boost kicks in on bam but not on sh.

    That feature is a nasty one if you wanna do benchmarks :)

  5. Eyob says:

    When you measured the build time for the SCons build, I am assuming it was a clean build. But what is the aggregate time when you include change builds?

    So take the 15,000-file point for instance. If you do 10 builds, each time changing a specific file f1, what is t1+t2+…+t10, where tn is the build time for the n’th build?

  6. Vic says:

    Eric,

    I’d love to see a similar analysis of a build with Make. Do you have such an analysis posted?


    • Eric Melski says:

      @Vic: I’m working on a similar article about make, but it’s been delayed by my other responsibilities as we put the finishing touches on the next release of ElectricAccelerator. Do check back, as I hope to put the info up soon!

      Thanks for the comment.

  7. czrpb says:

    I think this is an incorrect assumption: “The primary concern was naturally end-to-end build time for a from-scratch build.”

    Is it not the case that one of the ‘features’ of EC is “build avoidance”, yes? So, as Eyob suggests, is it not more common to rebuild only small portions of one’s source? How fast (or slow! wink!) is it then?

    This kinda feels like someone writing a program in C with full optimizations then comparing that to an equivalent Python program. C wins — hmm, sure: the value of Python is not speed.

    • Eric Melski says:

      @czrpb: the point is not that SCons is slow compared to some other program. The point is that in absolute terms it is unacceptably slow for any project of substantial size. Maybe you’re willing to burn an extra four hours on a build just because SCons is so slow, but that just doesn’t work for me.

      I’ll try to include some stats on “no-touch” and “one-touch” builds the next time I look at SCons performance.

      Thanks for the comment!

  8. TimM says:

    As far as I am concerned “big” is between 50,000 and 80,000 compile jobs. It takes significant time for make “engines” to even read one of these makefiles, let alone build them.

    Until every part of the process can be distributed (I don’t just mean threads on the same machine) it won’t even have a chance of scaling.

    Build avoidance in these cases is a joke – takes so long to find out if you need to rebuild that you might as well just do it. There has to be a better solution, but unfortunately if it’s both proprietary and locked in then it’s useless.

    I think it would only work if the build system was a continuously running database, hooked into change notification systems like that of the filesystem or the source control system.

    Scons is lovely – I wish it had been thought up with scalability in mind from the start. But to be fair, it works for some people.

  9. czrpb says:

    @Eric: I heard you and my comment was purposefully contentious! grin!

    But you totally glossed over the objection (by more than just myself) that building everything is an invalid use case, or at least not a common developer use case.

    Plus, imagine I have a system where developers can do builds of their partial changes in, say, 20 minutes. And every day there is an incremental central build — maybe at 2am — which builds all updates, and it takes 4 hours. No developer I know will care, as that is not what they *do* minute-by-minute.

    Thanks for replying!!

    • Eric Melski says:

      @czrpb: that’s a wonderful rosy world you live in, where you only ever have to do that full build overnight when nobody is waiting for it! Bummer for you when you are hours away from a release date and QA discovers a bug that requires release engineering to regenerate the build. I’ve never seen an organization that uses an incremental build for the final release bits, so your entire organization ends up hanging on the completion of that full build. How happy will your VP of Engineering be that day when he has the CEO breathing down his neck, and you tell him, “Well, the build won’t be ready for another four hours… then QA will have to verify the fix and rerun the smoke tests…”

      I don’t disagree that developers often do incremental rather than full builds. But that in no way invalidates the tests I did here, nor the results I obtained. You may not care about the full-build time, but there are plenty of people who care about it very much.

  10. czrpb says:

    @TimM: “Build avoidance in these cases is a joke – takes so long to find out if you need to rebuild that you might as well just do it. … I think it would only work if the build system was a continuously running database, hooked into change notification systems like that of the filesystem or the source control system.”

    Yes (mostly)!! Once you do the hook-up between your build and your SCM, build avoidance of various hierarchies (ie. any of whatever hierarchies one’s source is organized in) can be pretty simple, IMHO and IMHE.

  11. czrpb says:

    @Eric: Ah! Excellent! “I’ve never seen an organization that uses an incremental build for the final release bits…”: We do!

    And it is awesome! In our case, if there is 1 line of code that has changed, one particular hierarchy is rebuilt. Our build also creates the packaging for that. Then, essentially, we take this new package and all the other packages from the prior release and deliver them in an SDK. Pow! Done! grin!

    • Eric Melski says:

      @czrpb: That must create some interesting challenges with respect to managing the build artifacts. I also wonder how many developers you have on your project, and what build tool you’re using that you have such confidence in the incremental build features.

      In any case, I think you and your team are the exception, rather than the norm. Bravo on setting up such a robust build system. Thanks for your comments!

  12. walt says:

    @czrpb: You must have a really well designed build setup to have that level of trust in it.

    I guess more likely you know exactly what changed and what needs to rebuild based on that.

    In our case we won’t always recompile everything either, but that’s mostly in the case where an application .c file changed (rather than an include or a library file) or if it’s just a shell or Perl script that changed.

  13. czrpb says:

    @walt: That is true: In a sufficiently strict environment — such as for security, legal or export reasons — you can imagine that such traceability is *required* in the end-2-end solution.

    @Eric: Thanks!! We have worked hard to achieve this. I wish I could talk more, especially technically. My comments here should not be taken to mean I am unimpressed with some of EC’s tech! (I meant mostly to address the particular use case run against scons! wink!)

    Thanks everyone for allowing me to participate!

  14. Mikhail says:

    The results of this exercise look impressive, and obviously not in favor of SCons. At the same time, the comparison to the results of the shell script running the same commands in the same order as the SCons-based build has limited value to anyone except, probably, the SCons developers. If you are so smart as to write a shell script that does exactly what is needed to build your project, then you don’t need any build system whatsoever, SCons included. Probably in some situations this degenerate special case (building from scratch, or a full build) is important enough to warrant a special script/rule/whatever in your build system, but then you have to bear the burden of maintaining consistency between incremental and full builds.

    What would be much more interesting and useful is a comparison to some real build system, e.g. Make. Plus, any other use cases besides building from scratch would be extremely useful. Presumably you performed this experiment using scripts of some sort, so it should not be too hard to adapt them for Make as well. Without that it looks just like SCons bashing.

    • Eric Melski says:

      @Mikhail: obviously a shell script is not a substitute for a full-featured build system. I never said it was. However, I think it is important to determine what portion of the total build time comes from the actual work of the build (compiles, links, etc), versus what portion comes from the build system itself. I can’t think of a better way to measure that than by comparing the SCons time to the shell script time. Do you have an alternative suggestion, or are you saying that you think this metric is completely irrelevant?

      I do hope to put up a similar article based around make. In fact, I’ve already collected the data for that article. The problem is that blogging is not my primary responsibility at Electric Cloud, so unfortunately I don’t get as much time to work on it as I would like.

      Thanks for your comments!

  15. rotinom says:

    Just ran across this, and not sure if anyone can benefit, but I have deployed SCons in a large environment, with good luck. I don’t have a file count, but the SLOC count was >1 million lines of C, Fortran, C++, and a smattering of “other”, and over 100 individual SConscript files.

    Prior to SCons, we had developers using makefiles, and everything was single-threaded. Processes were not in place to ensure that the makefiles would work multi-threaded, and it was a very organic project, with 15-20 years of development effort.

    SCons was put in place over a small period of time, and the automatic dependency analysis, and -j functionality reduced our build times by an order of magnitude, and ensured that the project was correct every single time.

    So, call us a success. We had a great deal of “correctness” issues, as well as build speed issues, and SCons solved both. Granted, it could probably get faster, but the ease of implementation, and the general flexibility and extensibility, were incredible.

    The biggest complaint is that on a project of that size, you have about a minute from when you type ‘scons’ to when it starts building something. If you are just working on getting a single file built (say, working through syntax issues), it can be tedious, but you can just copy & paste the command line and side-step the build system, at least.

  16. […] not when you look at its performance, or rather, lack of performance. According to Eric Melski’s numbers, SCons’s runtime seems to grow as the square of the number of things being built, which is […]

  17. gavenkoa says:

    How scalable is GNU Make?

    Is it equal to sh?

  18. gavenkoa says:

    > How scalable is GNU Make?

    Wow! I found your second article, which answers the question:



