When is Preflight a Good Idea?

In my previous employment, one director issued a mandate to have “100 successful builds on trunk” and got this approved all the way up to the VP level.

At first glance this seemed like a “great” mandate.  Before it was put in place, builds were constantly broken, often for weeks on end.  However, as I was one of the people tasked with making this happen, I ran into several hurdles along the way.

Preflight seemed very helpful, and sounded like an amazing idea. The concept was to give developers a way to run the build (and tests!) in the production environment before checking in code, which would obviously ensure that the code works after check-in, because you have already tested it. Right?

In reality, it didn’t work so well.

The biggest hurdle we ran into was multiple developers making changes to the same code.  Take this scenario, for example:

  • Developer A makes changes to code and runs preflight.  The preflight is successful, all goes well, and the code is checked in. Developer A continues writing code.
  • Developer B makes changes to some of the same files, runs preflight, and it fails!  Oh no!  Developer B does not check in code, and works on implementing fixes.
  • Developer A finishes a new set of changes and runs preflight, while Developer B is still fixing problems with his/her code.  Developer A’s preflight passes and all goes well.
  • Developer B finally fixes his/her code and runs preflight successfully.  However, when Developer B tries to check in the code, he notices conflicts because Developer A has already changed the same code.  Developer B must now fix these conflicts.  If Developer B is smart, then he fixes them properly and the code doesn’t break, but does Developer B also run the same tests Developer A ran to make sure he didn’t break Developer A’s changes?
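The race above can be reproduced as a minimal, self-contained simulation using plain git and temporary repositories. Every name here (the shared.c file, the trunk branch, the developer identities) is made up for illustration; the long preflight window is simply the gap between B starting work and B pushing.

```shell
#!/bin/sh
# Simulate the preflight race: two developers clone the same trunk,
# both change the same file, and only the first push is accepted.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/trunk.git"
git clone -q "$tmp/trunk.git" "$tmp/dev_a" 2>/dev/null
git clone -q "$tmp/trunk.git" "$tmp/dev_b" 2>/dev/null   # B starts from the same trunk as A

cd "$tmp/dev_a"
git config user.email a@example.com; git config user.name "Dev A"
echo "A's version" > shared.c
git add shared.c && git commit -qm "A: change shared.c"
git push -q origin HEAD:trunk 2>/dev/null      # A's preflight passed; the change lands

cd "$tmp/dev_b"
git config user.email b@example.com; git config user.name "Dev B"
echo "B's version" > shared.c
git add shared.c && git commit -qm "B: change shared.c"
# B's preflight also passed, but against a trunk that has since moved on:
git push -q origin HEAD:trunk >/dev/null 2>&1 && result=accepted || result=rejected
echo "B's push was $result"
```

B is now forced to merge or rebase against A's change by hand, and nothing in the process makes B re-run the preflight against the merged result.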

I realize the above scenario can (and does) happen with or without preflight.  However, in my experience, it happened much more often when using preflight.  The reason was that preflight added an extra ten or more minutes to check-ins.

Quite often, Developer B would simply overwrite Developer A’s changes in response to his manager asking why the check-in was taking so long. Later, during release testing, Developer A could not understand why the changes he was SURE he had checked in after a clean preflight had vanished into thin air.

Given this situation, I devised a much better approach: rather than worrying about broken builds, focus on fast (and, where possible, automatic) recovery after a build breaks. It took some work to convince management of this, but the result was faster and cleaner development, rather than constant worry about “never breaking” a build.  In a utopian scenario builds never break; in reality they always do.  I achieved the best results by determining why the build broke, and putting methods in place to ensure that break didn’t happen again, or that if it did we could recover rapidly.

Regarding preflights, they were most useful in these situations:

  • Shared codebase, but each developer only develops on one platform.  Run a preflight on all platforms with basic tests to make sure your code works on all platforms.
  • Sensitive areas of code that have numerous dependencies, where builds often break as a result of changes.  Run a preflight after changing these areas of the code, and consider redesigning the code so it is less sensitive and doesn’t break as often.

Preflights are not as useful in this scenario:

  • For every check-in.  (I realize some will disagree and say this should be the ideal; however, I found that it slowed development down and actually caused more errors than it resolved.)  If you have a codebase whose build takes only minutes (or seconds), then yes, certainly run preflights.  But I rarely see builds that are this fast.

The most important way to be TRULY agile is to be agile in fixing broken builds. Recovery from a broken build should happen in minutes, not hours.  Companies should focus less on trying to ensure there are “no broken builds” and more on automatically and rapidly recovering from broken builds.

A well-designed CI system should be able to detect broken builds and automatically take the actions needed to fix them, e.g. rolling back the changes that broke the build, determining whether the break is due to infrastructure (disk full, server timeout, memory error, etc.), and resolving those issues as automatically as possible. It should also, of course, inform all needed parties that the build broke so it can be fixed right away.
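As a minimal sketch of that “recover fast” idea, the following self-contained script builds a throwaway repository, lands a commit that breaks the build, and reverts it automatically. The build() function is a made-up stand-in for the real build-and-test step, and all file names here are hypothetical:

```shell
#!/bin/sh
# Sketch: if the newest commit breaks the build, revert it automatically
# so trunk goes green again, then notify the author.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email ci@example.com; git config user.name "CI bot"

echo 'int main(void){return 0;}' > main.c
git add main.c && git commit -qm "good: add main.c"
echo 'this will not compile' > broken.c
git add broken.c && git commit -qm "bad: add broken.c"

build() { test ! -f broken.c; }   # stand-in: the build fails while broken.c exists

if ! build; then
    culprit=$(git rev-parse --short HEAD)
    git revert --no-edit HEAD >/dev/null   # automatic rollback of the break
    echo "reverted $culprit; notifying its author"
fi
build && echo "trunk is green again"
```

A real system would also classify infrastructure failures (full disk, timeouts) before deciding to revert, since rolling back a commit does nothing for a sick build server.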

In reality this is very much a cultural issue in that we are conditioned to think “red is bad, so we must have no red,” when in reality it should be “red is bad if it stays red for a long time.”

When management starts putting in place directives for “no broken builds,” people either stop making changes or develop in a much less agile manner in order to “not break the build.”  In many ways, the fear of breaking a build often hinders creativity.

Chris Fulton

Chris Fulton is a Global Technical Account Manager at Electric Cloud. He is a new father, technology geek, travel enthusiast, and is especially interested in new and emerging solutions to build, package, and deploy software in today’s global economy. He has an extensive background in improving complex build and release processes.
4 responses to “When is Preflight a Good Idea?”

  1. Eric Minick says:

    Yes. Yes. Yes.

    Preflight builds delay integration and are therefore guilty until proven otherwise. They get un-evil when a build on the developer’s machine is not a good predictor of things working more generally. That’s basically in the two cases you laid out.

    Builds can break. But a healthy culture fixes them.

    Frankly, builds are a test (does the code compile) and tests that never fail aren’t delivering value.

  2. Chris Fulton says:

    Amen to the “builds are a test.” I would even go one step further from saying “builds can break” to say “builds do break.” If a build never breaks, then you likely didn’t change anything significant or of value. What’s truly important is making sure that one can rapidly recover from those breaking points.

  3. I agree with “builds are a test.” The build and the test suite are a feedback mechanism, and they should feed back to the team so it can work on fixing issues as soon as they happen.

  4. Ian MacMillan says:

    I was reading this and thought, “what they need is to employ an agile methodology” then I got to the line “…to be TRULY agile…”!
    So, developers are working on the same piece of code and are unaware of each other? Distributed teams or bad stand-ups? The tests the developer writes are not being checked in with the code?! If they were, developer B, when making changes to developer A’s code would be running A’s tests along with any B has written when changing that code. Perhaps even some extreme programming with A and B taking turns driving and navigating.
    As for code that is highly coupled you absolutely want to be refactoring if you are in there making changes, and you absolutely want to introduce tests there that will go along with any changes. You guys could benefit from a visit from a guy like Kevin Klinemeier ( @agilekevin ) to get things back on an even keel.
    I can see the huge downside of a process that takes multiple minutes on top of a lengthy build (ours are about 45 minutes), where simple tests that take seconds can really turn around the productivity in the long run.
    If we break a build it’s usually donuts all around, and with multiple sites, that can be a real motivation to pay attention to detail and ensure things are healthy.
    We are striving for more or less “continuous” integration and perhaps a gated check-in, to a mirror of trunk that runs on a mirror of production so that tools like Jenkins can beat on it before it goes to trunk. Virtual machines have been a big help there now that they can almost match actual hardware.
    Keep up the good work and pick your battles; you’ll come out on top in the end.
