
Quick Notes on Improving C++ Build Times

If you’ve worked on nontrivial C++ projects, you probably have experience waiting for extremely slow compiles. Slow compiles are a bigger problem than they might seem at first: they significantly reduce the number of edit->compile->debug cycles you can get through in a day, slowing overall development far more than the raw waiting time suggests. (Besides, if you’re anything like me, waiting more than a few minutes for a compile can make you lose your train of thought.)

At my last two jobs, I joined large, legacy C++ projects that had extremely long compile times. From that experience, I’ve learned to make improving compile times one of the first things I address on a new project, since it benefits everyone on the team (and it’s a nice way to show your worth as a new hire early on, too). In this post, I’ll share some of the techniques I’ve used to do this. It’s a very basic introduction, but if you’re unfamiliar with this kind of thing (and I’m often surprised by how many C++ programmers are), hopefully you’ll find it helpful.

(To be clear, I’m not an expert on this subject by any means, but I’ve picked up enough tricks to be useful!)


1 Parallel Compilation

The first thing you should do is make sure your project is taking full advantage of your multi-core CPU when building. In Visual C++, you can find this setting under C/C++ -> General -> Multi-processor Compilation.

Assuming it’s not already enabled, this is the quickest thing you can do to massively improve your build times, and, as of Visual Studio 15.8.2 at least, it doesn’t appear to be enabled by default for new projects.
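If you prefer editing project files by hand, this is roughly what that IDE setting toggles in the .vcxproj (it corresponds to passing /MP to the compiler):

    <!-- Inside an <ItemDefinitionGroup> in the .vcxproj: -->
    <ClCompile>
      <MultiProcessorCompilation>true</MultiProcessorCompilation>
    </ClCompile>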

2 Header Improvements

One thing that can significantly affect compile times is the size of your translation units. A translation unit in C++ is the contents of a source file after preprocessing – meaning the file itself, plus all the headers it includes, plus all the headers those headers include, etc. As you can imagine, indiscriminate use of #include directives can make your translation units quite large (especially once you start including system headers like Windows.h).

Good news, though! There are a variety of techniques you can use to address this problem. The first is to reduce the number of #include directives you put in headers–often, instead of including something in a header, you can get away with a forward declaration of the items you need and move the actual #include into the source file. This can significantly reduce the cascade effect of headers pulling in other headers.
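For example, here’s a minimal sketch (the class names are made up):

    // widget.h -- before, this pulled in texture.h for everyone who
    // includes widget.h. Since we only hold a pointer, a forward
    // declaration is enough:
    class Texture;

    class Widget
    {
    public:
        void SetTexture(Texture* texture);

    private:
        Texture* m_texture = nullptr;
    };

    // widget.cpp -- the full definition is only needed here:
    #include "widget.h"
    #include "texture.h"

    void Widget::SetTexture(Texture* texture) { m_texture = texture; }

This works whenever the header only needs pointers or references to a type; if it needs the complete type (a by-value member, say), you still have to include it.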

Ideally, you want “avoid unnecessary #includes in headers” to be one of the coding standards in your organization going forward, so this problem doesn’t rear its head again.

3 Precompiled Headers

Precompiled headers provide a mechanism to compile frequently-used headers only once per project, which can result in a pretty significant build time boost. The trick here is to only include headers in the PCH that are (1) frequently used and (2) rarely changed. (You want headers in your PCH to rarely change because whenever one of them changes, the PCH for that project has to be recompiled, which eats into the build time benefits of having it.)
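As a rough sketch, a precompiled header might look like this (the contents are illustrative; in Visual C++ you compile it once with /Yc and use it everywhere else with /Yu):

    // pch.h -- frequently used, rarely changed.
    #pragma once

    // System and third-party headers are ideal candidates:
    #include <Windows.h>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Stable, widely-used project headers can go in too
    // (hypothetical example):
    #include "core/math_utils.h"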

I like to write a quick Python script that parses all the source and header files in my projects, tallying up which headers are #included most often. Then I can look through that list and add the ones I expect to change rarely to the precompiled header file.
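I usually do this in Python, but just to sketch the idea in this post’s language, here’s a rough C++17 equivalent (the "src" root, the extensions, and the regex are placeholders to adapt):

    // include_tally.cpp -- rough sketch: count how often each header is
    // #included under a source tree. Build with a C++17 compiler.
    #include <algorithm>
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <regex>
    #include <string>
    #include <utility>
    #include <vector>

    namespace fs = std::filesystem;

    int main()
    {
        const std::regex includeRe(R"(^\s*#\s*include\s*[<"]([^>"]+)[>"])");
        std::map<std::string, int> counts;

        for (const auto& entry : fs::recursive_directory_iterator("src"))
        {
            if (!entry.is_regular_file())
                continue;
            const auto ext = entry.path().extension();
            if (ext != ".cpp" && ext != ".h" && ext != ".hpp")
                continue;

            std::ifstream file(entry.path());
            std::string line;
            std::smatch match;
            while (std::getline(file, line))
                if (std::regex_search(line, match, includeRe))
                    ++counts[match[1].str()];
        }

        // Print the most-included headers first.
        std::vector<std::pair<std::string, int>> sorted(counts.begin(), counts.end());
        std::sort(sorted.begin(), sorted.end(),
                  [](const auto& a, const auto& b) { return a.second > b.second; });
        for (const auto& [header, count] : sorted)
            std::cout << count << "\t" << header << "\n";
    }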

4 Unity Builds

Unity builds are the 400-lb gorilla of C++ build time optimization: a heavyweight technique that comes with some major consequences. Given the downsides, I don’t necessarily suggest using them, but if you’ve done everything else you can and your builds are still taking longer than you’d like, they’re an option.

Essentially the idea of unity builds is to radically reduce the number of translation units you have by combining multiple source files into one unit. That is, you create one (or a few) unity source files that #include the other source files in your project. The unity files are then the only ones that you actually compile (you exclude the rest from the build).
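A unity source file is nothing fancy; it might look like this (file names made up):

    // unity_renderer.cpp -- the only file in this group the build actually
    // compiles; the .cpp files below are excluded from the build.
    #include "camera.cpp"
    #include "mesh.cpp"
    #include "renderer.cpp"
    #include "texture.cpp"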

The result is that you end up with a very small number of translation units, thus incurring much less per-unit overhead during compilation. In particular, it almost entirely eliminates linker overhead, which can be a huge improvement.

All this blazing speed comes at a cost, though–all your source files are now essentially combined into one giant translation unit, which can cause symbol collisions if different source files contain symbols with the same names. using namespace directives can also cause issues, since they’ll apply to code you didn’t intend them to (but it’s good practice to only use those inside functions rather than at file scope, anyway).
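Here’s the kind of collision I mean (hypothetical files):

    // mesh.cpp
    static float Lerp(float a, float b, float t) { return a + (b - a) * t; }

    // camera.cpp -- fine on its own, but once both files land in the same
    // unity translation unit, this is a redefinition error despite the static:
    static float Lerp(float a, float b, float t) { return a * (1.0f - t) + b * t; }

Wrapping file-local helpers in distinct named namespaces (or just renaming them) resolves this.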

Further, changing a single source file will now require the entire unity file it’s included in to be recompiled. (In my experience this additional incremental cost isn’t a huge issue, but you can temporarily remove files you’re actively working on from the unity build to get around it.)

You can combine unity builds with parallel compilation by having a small set of unity source files instead of just one, though you may want to tune what’s included in each so that they take roughly the same amount of time to compile.

This post has a pretty good breakdown of the pros and cons: http://onqtam.com/programming/2018-07-07-unity-builds/

(By the way, unity builds have nothing to do with the game engine of the same name; the term predates the engine.)

5 Faster PCs

I know this one seems obvious, but hear me out.

A lot of developers these days prefer to work on laptops, which makes sense–it’s convenient to be able to work wherever you need to. That said, laptop CPUs tend to be significantly slower and usually have fewer cores than their desktop counterparts, so you’re sacrificing a lot of build performance for convenience.

As an example, one of the projects I’ve worked on at my current job had full rebuild times of around 8 and a half minutes on the laptop I was given when I joined (a fairly powerful Dell XPS). Using some of the techniques above, I managed to get the builds down to a hair over 2 minutes.

I later moved to a desktop development environment on a fairly high-end machine, which brought the compiles down to a brisk 39 seconds. It’s really nice to be able to do a full rebuild in under a minute–my mental flow gets disrupted a lot less often.

The moral of the story is, it’s worth investing in your development hardware, and convenience comes at a cost. 🙂

Addendum #1: #pragma once

Use it, seriously. Yes, it may not technically be standard C++, but it’s supported by VC++, clang, and gcc (which together cover the vast majority of modern platforms). Compared to include guards, it’s easier to use, easier to read, and almost impossible to screw up.
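For comparison (the guard macro name is just an example):

    // Classic include guard -- easy to typo, easy to collide across files:
    #ifndef MYPROJECT_WIDGET_H
    #define MYPROJECT_WIDGET_H
    // ... contents ...
    #endif // MYPROJECT_WIDGET_H

    // The same thing with #pragma once:
    #pragma once
    // ... contents ...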

Addendum #2: taking breaks

I’ve occasionally heard (facetious) complaints that extremely fast compiles mean you get less time to take breaks. You can still take breaks, of course; you just have to be intentional about it rather than waiting for a long compile to come around. 🙂

Addendum #3: IncrediBuild

I’ve used IncrediBuild at some past jobs, but generally I’ve found it not to be worth it. It can help if your build times are horribly slow, but if you apply the above techniques, you’re likely to reach the point where the overhead it introduces actually makes your build times worse. (That said, I don’t have experience with other distributed build systems, so I can’t say whether this applies more broadly or just to IncrediBuild in particular. To be entirely fair, I also haven’t used it in a few years, so maybe it’s improved since!)


I hope that’s helpful! As mentioned, I’m not an expert on this, but knowing a few basic build time optimization techniques can do you and your team a lot of good.
