In my previous employment, one director issued a mandate to have “100 successful builds on trunk” and got this approved all the way up to the VP level.
At first glance this seemed like a “great” mandate. Before it was put in place, builds were constantly broken, often for weeks on end. However, as I was one of the people tasked with making this happen, I ran into several hurdles along the way.
Preflight sounded like an amazing idea. The concept was to give developers a way to run the build (and the tests!) in the production build environment before checking in code, which would – obviously – ensure that the code works after check-in, because you have already tested it. Right?
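The concept can be sketched as a simple gate: commit only when a clean build and test run succeed first. This is a minimal illustration, not any particular tool's implementation; the function names are invented for the example.

```python
# Minimal sketch of a preflight gate. `build`, `tests`, and `commit` are
# hypothetical callables standing in for the real build system, test
# suite, and version-control check-in.

def preflight(build, tests):
    """Return True only if the build succeeds and every test passes."""
    if not build():
        return False
    return all(test() for test in tests)

def try_check_in(build, tests, commit):
    """Check in the change only when preflight is clean."""
    if preflight(build, tests):
        commit()
        return True
    return False
```

Note that the gate only validates the developer's snapshot at the moment preflight runs; it says nothing about what lands on trunk between the preflight and the actual check-in, which is exactly the gap the scenario below exploits.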
In reality, it didn’t work so well.
The biggest hurdle we ran into was multiple developers making changes to the same code. Take this scenario, for example:
- Developer A makes changes to code and runs preflight. The preflight is successful, all goes well and code is checked in. Developer A continues writing code.
- Developer B makes changes to some of the same files, runs preflight and it fails! Oh no! Developer B does not check in code, and works on implementing fixes.
- Developer A finishes a new set of changes and runs preflight, while developer B is still fixing problems with his/her code. Developer A’s preflight passes and all goes well.
- Developer B finally fixes his/her code and runs preflight successfully. However, when Developer B tries to check in the code, he hits conflicts because Developer A has already changed the same code. Developer B must now fix these conflicts. If Developer B is careful, he resolves them properly and the code doesn’t break. But does Developer B also rerun the same tests Developer A ran, to make sure he didn’t break Developer A’s changes?
I realize the above scenario can (and does) happen with or without preflight. However, in my experience, it happened much more often when using preflight, because preflight added an extra ten or more minutes to every check-in.
Quite often, Developer B would overwrite Developer A’s changes while rushing to respond to a manager asking why the check-in was taking so long. Later, during release testing, Developer A could not understand why the changes he was SURE he had checked in after a clean preflight had vanished into thin air.
Given this situation, I devised a much better approach: rather than trying to prevent broken builds, focus on fast (and, where possible, automatic) recovery after a build breaks. It took some work to convince management, but the result was faster, cleaner development instead of constant worry about “never breaking” the build. In a utopian scenario builds never break; in reality they always do. I achieved the best results by determining why a build broke, then putting methods in place to ensure that break didn’t happen again, or that if it did, we could recover rapidly.
Regarding preflights, they were most useful in these situations:
- Shared codebase, but each developer only develops on one platform. Run a preflight on all platforms with basic tests to make sure your code works on all platforms.
- Sensitive areas of code that have numerous dependencies, where builds often break as a result of changes. Run a preflight after changing these areas, and consider redesigning the code so it is less sensitive and doesn’t break as often.
Preflights are not as useful in the following scenario:
- For every check-in. (I realize some will disagree and say this should be the ideal, but I found it slowed development down and actually caused more errors than it resolved.) If you have a codebase where the build takes only minutes (or seconds), then certainly run preflights. But I rarely see builds that are this fast.
The most important way to be TRULY agile is to be agile in fixing broken builds. Recovery from a broken build should happen in minutes, not hours. Companies should focus less on trying to ensure there are “no broken builds” and more on recovering rapidly and automatically from broken builds.
A well-designed CI system should be able to detect broken builds and automatically take the needed actions to fix them, e.g. rolling back changes that broke the build, figuring out if the build is broken due to infrastructure (full disk, server timeout, memory error, etc.), and resolving those issues as automatically as possible. It should also, of course, inform all needed parties that the build broke so it can be fixed right away.
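The triage step described above can be sketched as a small policy function: classify the failure from the build log, then choose a recovery action. This is a hypothetical illustration, assuming invented log signatures and action names; a real system would use its own failure taxonomy.

```python
# Hypothetical auto-recovery policy. The log signatures and action names
# below are invented for illustration.

INFRA_SIGNATURES = {
    "no space left on device": "free_disk_space_and_retry",
    "connection timed out": "retry_build",
    "out of memory": "retry_on_larger_host",
}

def recovery_action(build_log: str) -> str:
    """Map a failed build's log to a recovery action.

    Infrastructure failures get an infrastructure fix and a retry;
    anything else is assumed to be a bad change, so the policy is to
    revert the last change and notify the author.
    """
    log = build_log.lower()
    for signature, action in INFRA_SIGNATURES.items():
        if signature in log:
            return action
    return "revert_last_change_and_notify"
```

The key design choice is the default: when in doubt, trunk is healed first (revert and notify) and the investigation happens afterward, which is what keeps recovery in minutes rather than hours.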
In reality this is very much a cultural issue in that we are conditioned to think “red is bad, so we must have no red,” when in reality it should be “red is bad if it stays red for a long time.”
When management starts putting in place directives for “no broken builds” this causes people to either stop making changes, or to develop in a much less than agile manner in order to “not break the build.” In many ways, the fear of breaking a build often hinders creativity.
By Chris Fulton - October 1, 2014