Integrating C++ header units into Office using MSVC (1/n)

Cameron DaCamara

Zachary Henkel

C++20 has a lot to offer, and one feature in particular requires more thought than any other when integrating it into a project: C++ modules (or, in this particular case, C++ header units). In this blog we will show a real-world case of integrating a new C++20 feature into a large codebase that we might all be familiar with. A few notes before we begin:

  • This blog is written from two perspectives:
    • Zachary Henkel’s perspective is in black text.
    • Cameron DaCamara’s perspective is in accent text.
  • This blog is the first in a series detailing our experiences integrating header units into the Office codebase.
  • This first post covers very early results, so do not expect fine-grained numbers just yet.

Without further delay, let us jump right in!

Overview

How MSVC enables header units in a multi-platform codebase

C++20 header units are a way to receive many of the benefits of modules while still working with a codebase that was designed for classic header inclusion. The benefits of header units looked appealing to Office, but we weren’t willing to extensively add platform #ifdefs for the sake of consuming them. Fortunately, the C++ standard anticipated this scenario! The MSVC flag /translateInclude automatically translates a textual inclusion into a header unit import whenever the compiler encounters a #include that matches a header unit mapping specified on the command line. This allows Office to build and consume header units without any code changes at all!
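
To make the mechanism concrete, here is a minimal sketch of include translation. The file names, IFC path, and the Widget type are hypothetical; the point is that the source is unchanged, and only the command line tells the compiler to treat the #include as an import:

// widget_user.cpp -- built with something along the lines of:
//   cl /std:c++20 /translateInclude /headerUnit widget.h=widget.h.ifc widget_user.cpp widget.h.obj
//
// Because widget.h appears in a /headerUnit mapping, the compiler treats the
// textual inclusion below as if it were written: import "widget.h";
#include "widget.h"

int main()
{
    Widget w;        // a hypothetical class exported by the header unit
    w.Frobnicate();  // a hypothetical member function, for illustration only
}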

It is worth noting that /translateInclude is a standard-supported behavior, [cpp.include]/7: include translation allows the implementation to replace a #include with an import. A bridge such as include translation is essential for gradually migrating a codebase into the world of modules. More importantly, a header unit can do something that a PCH cannot: it can be moved from project to project and reused! So not only does integrating header units require no source-level changes, but the build throughput potential across multiple projects is far beyond what PCH can offer. One more point for header units over PCH: they are small, really small compared to PCH. In our measurements, a header unit can often be 10x smaller in content than an equivalent PCH.

Selecting header unit candidates

Once Office knew we wanted to use header units, the challenge became finding a good set of candidate headers. Our low-level shared code components, which we’ve termed liblets, have properties that make them attractive building blocks for header units. First, every liblet header exposed as a dependency must be self-contained, and this property is enforced by existing tooling! Secondly, liblets clearly express their dependencies. This allows us to build an acyclic dependency graph and construct sets of header units from it.

Baby Steps

The first step Office took was simply to determine whether we could build a header unit at all! This Microsoft docs page has the full steps, but summarized here are the key points:

  • Turn on module support. Ideally Office would be compiling all code as C++20, and header unit support would be available automatically. Unfortunately, due to the size of the Office codebase, we’re not ready to make that switch. Luckily, Office already compiles with the compiler flag /permissive-. Because it already runs the compiler in standards conformance mode, Office can fall back on the /experimental:module flag for header unit support until the C++20 migration is complete. It’s important to note that this flag does nothing other than enable the feature; it won’t force any code to compile as a header unit without additional flags.
  • Determine where in the build tree we will create the IFC artifacts and pass that location to the compiler with /ifcOutput.
  • We enabled some additional compiler flags for the purposes of testing; more on this later. (A sketch of a complete producer command line follows this list.)
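
Putting those points together, here is a minimal sketch of producing a header unit. The flag spellings come from the Microsoft docs; the liblet name, header contents, and output paths are hypothetical, and Office substitutes /experimental:module under /permissive- where the docs use /std:c++20:

// liblet/core.h -- a self-contained header to be built as a header unit.
// A hedged sketch of the producing command line:
//
//   cl /std:c++20 /Zc:preprocessor /exportHeader /headerName:quote liblet/core.h
//      /ifcOutput out\ifc\liblet\core.h.ifc
//
// This produces two artifacts: the IFC (the binary representation of the
// exported header) and a .obj that must eventually be linked into any binary
// that consumes the header unit.
#pragma once
#include <string>

namespace liblet
{
    // A hypothetical utility, for illustration only.
    inline std::string Greeting() { return "hello from a header unit"; }
}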

Right out of the gate, Office hit a snag. Our headers weren’t as self-contained as we had assumed: our tools that operate on headers in isolation rely on common textual includes being provided by the precompiled header! Until the work to break that precompiled header dependency is complete, we’re force-including (/FI) the precompiled header into each header being compiled as a header unit.
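
In terms of the producer sketch above, the workaround simply adds the force-include to the command line (precomp.h is a hypothetical name for the header our PCH is built from):

// A hedged sketch: the same producer command line as before, with the
// precompiled header's source header force-included via /FI.
//
//   cl /std:c++20 /Zc:preprocessor /FI precomp.h
//      /exportHeader /headerName:quote liblet/core.h
//      /ifcOutput out\ifc\liblet\core.h.ifc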

From this perspective, it is important that the compiler make traditional PCH technology work alongside the new modules technology (which is the same machinery the compiler uses to export and import header units).

The first compiler bug on deck stemmed from the front-end’s rather… interesting approach to PCH. The problem observed was that the compiler would sometimes fail with “error C7612: could not find header unit for 'X'”. The most curious thing was that the command line referenced header unit ‘X’ clear as day. It was not until I debugged the front-end with PCH enabled, in a scenario where the PCH interacted with certain identifiers, that I discovered the culprit. When the compiler starts, it allocates a table to hold hashed identifiers, and when it sees /headerUnit:quote X=X.ifc it creates hashed identifiers for the components of the name for fast comparison. The problem was that this processing happened too early: in the presence of a PCH, the compiler memory-maps the PCH file and blits that memory directly into existing compiler heap locations… exactly where we had just inserted the identifiers from processing the /headerUnit arguments. The result was bogus identifiers when trying to match a header unit name to its IFC. One hefty comment and some code rearranging later, our first bug was fixed!

The preprocessor presented another set of challenges. The set of preprocessor macros must be consistent between header unit creation and consumption; inconsistent conditional compilation is forbidden in header units. Eliminating cases such as the following will be an ongoing concern for Office as header unit usage increases:

// Whether ASSUME expands to a real assertion depends on whether Assert
// happens to be defined at this point; the answer can differ between the
// TU that created the header unit and the TU that imports it.
#if defined(Assert)
#define ASSUME( condition ) Assert( condition )
#else
#define ASSUME( condition ) __noop()
#endif

The experiment to create as many header units as possible helped uncover many outstanding bugs in code that had never been instantiated by the compiler.

One such example: this function fails to return a value on its non-debug path:

inline HRESULT sink(std::unique_ptr<Widget> widget)
{
#if DEBUG
  if (!widget) return E_POINTER;
#endif
  m_widget = std::move(widget);
  // Missing "return S_OK;" -- in a non-debug build this function never
  // returns a value at all, despite promising an HRESULT.
}

Let’s talk about inline functions. Consider:

void undef();      // declared, but defined nowhere in the program
inline void f() {
  undef();         // would be an unresolved external if f were ever emitted
}
int main() { }     // note: f is never referenced

You might be surprised to find that every major compiler will accept the above C++ code and link the program successfully! It turns out that the standard wording here, [dcl.inline]/5, is quite special: it permits a unique optimization in which the front-end tells the back-end that the definition of a given inline function is never needed, because the function is never referenced, so its definition can be discarded altogether. That in turn implies undef is never referenced either, leaving a well-formed program.

There was a long-standing issue in the compiler where it was forced to emit every inline function definition into the associated .obj if the current translation unit was a module interface unit or a header unit. While this still yields correct program semantics, it has two major drawbacks:

  1. The compiler cannot apply the aforementioned optimization to inline functions, and
  2. This behavior means that LTCG is required to inline functions that are declared inline and defined in the imported translation unit.

It turns out that Office relied on that discarding behavior (point 1), a lot. So, we just fixed the bug: during the Office header unit integration, the compiler gained a switch enabling it to emit every inline function definition into the IFC instead, to be loaded on demand if the function is used on the import side.
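
A hedged sketch of the kind of pattern involved (the names diagnostics.h, trace, and log_sink are hypothetical): a header built as a header unit declares an inline function whose body references a function defined in no linked binary. As long as no importer ever calls trace, the discard optimization means no definition of log_sink is needed and the program links:

// diagnostics.h -- built as a header unit (hypothetical example).
void log_sink(const char* message); // declared here, defined nowhere we link

inline void trace(const char* message)
{
    log_sink(message); // harmless, provided trace() is never referenced
}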

Office has been on a long journey to ensure all our code is standards compliant via the MSVC /Zc flags and /permissive-. The header unit pilot acted as a forcing function to accelerate our efforts to enable the standards conformant preprocessor (/Zc:preprocessor) globally.

Anybody familiar with the quirks of the traditional preprocessor in MSVC will understand that its behavior is something of an eldritch horror. Adding to that, the traditional preprocessor has virtually no data model beyond walking blobs of text to create a new stream of text for the compiler’s tokenizer. The standard states that preprocessing should begin with a series of pp-tokens and end with a series of lexical tokens, which are then parsed. The new preprocessor (/Zc:preprocessor) allows the compiler to store these pp-tokens into the IFC in a principled way. The pp-tokens are needed because they compose the definitions of object-like and function-like macros, and this is why /Zc:preprocessor is required for header unit compilation.
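
As one illustration of why a principled data model matters, consider the best-known divergence between the two preprocessors: the expansion of __VA_ARGS__ when it is forwarded to another function-like macro. This is a minimal sketch of documented behavior, not code from the Office codebase:

#define ADD(a, b) ((a) + (b))
#define FWD(...)  ADD(__VA_ARGS__)

// With /Zc:preprocessor this expands, as the standard requires, to
// ((1) + (2)). The traditional preprocessor instead forwards "1, 2" to
// ADD as a single argument, producing a mis-expansion.
int n = FWD(1, 2);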

Larger Steps?

Once Office could create header units, it was time to consume them! Again, the Microsoft docs page contains the full details, but the key steps for Office were:

  • Use the /translateInclude flag to avoid rewriting #include as import.
  • Pass the header unit mapping /headerUnit liblet/header.h=<ifc path>.
  • Decide what to do with the .obj file created as part of building a header unit. (A sketch of a consuming command line follows this list.)
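
Here is a hedged sketch of what consumption looks like for a single translation unit, reusing the hypothetical liblet from earlier (all paths are made up; in practice the /headerUnit mappings come from a response file, as described below):

// app.cpp -- built with something along the lines of:
//   cl /std:c++20 /Zc:preprocessor /translateInclude
//      /headerUnit liblet/core.h=out\ifc\liblet\core.h.ifc app.cpp liblet.lib
//
// The #include below matches a /headerUnit mapping, so it is translated into:
// import "liblet/core.h"; -- no source change required. liblet.lib carries the
// header unit's .obj, per the packaging described below.
#include "liblet/core.h"

int main()
{
    auto s = liblet::Greeting(); // provided by the header unit built earlier
    return static_cast<int>(s.size());
}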

In Office we bundle all header unit mappings for a liblet into a single response file. We’ve found this strikes the right balance for dependency tracking, despite potentially over-specifying the number of header units being consumed.

Header unit consumption started revealing heterogeneous build flags throughout the product. Big pain points include static vs. dynamic CRT (/MT vs. /MD), unsigned char (/J), and /DUNICODE. It is possible to create a distinct header unit flavor for each combination, but so far we’ve stuck to a single flavor built with the most common options.

The .obj file from each header unit is packaged into the pre-existing .lib files to make consuming header units easy.

Putting it together

The most recent milestone in the pilot was successfully compiling and linking three of Office’s shared code DLLs. While building those DLLs, Office successfully compiled ninety distinct header units from pre-existing shared code headers. To aid integration, header units are enabled for all of a project’s publicly shared headers at once. This means that even though each header unit created will be used somewhere in Office, there is no guarantee it is consumed while building these specific DLLs. In building the DLLs we tested, 2/5 of the generated header units were consumed, as validated by the report generated by the new /sourceDependencies flag. We anticipate these numbers will increase significantly with upcoming work to replace the compiler’s brittle legacy ODR validation logic.

The module machinery in the compiler has historically tried to use source locations to perform ODR (One Definition Rule) matching. The problem, of course, is that header units can be moved around, and locations change from machine to machine (even project to project), so matching ODR based on source location can be quite fragile, as mentioned above. After working with Office it was abundantly clear that the compiler needed to adjust its strategy and perform more of a structural comparison when matching for ODR, a change that is still ongoing and targeting a future release. Please note that ODR checking for named modules does not have any of the problems above, as named modules provide stronger ODR guarantees rather than relying on source location.

Next steps

We are just as anxious to get build throughput metrics internally as we expect all of you to be. Once we have a broad set of header units being generated and consumed, we plan to gather extensive build timing data. We want to see the effect that building and consuming header units has on the speed of both clean and incremental builds, and we want to measure the effects both with and without our existing precompiled headers. Additionally, we need to determine whether the throughput gain is high enough to justify creating additional header unit flavors for currently incompatible compiler flags. Do we have enough projects that require the static CRT, or that build without Unicode support, to spend the compilation time generating header units in those configurations?

Finally, we want to add support for consuming header units outside of Office’s shared code. We have a shared code architecture that makes it easy to create header units and track the downstream projects that depend on them. We need to extend the build system support created to date so that Office’s client apps can also see the benefits of header units. If you’d like to learn more about how Office architects its shared code, please catch the CppCon 2022 talk “How Microsoft Uses C++ to Deliver Office: Huge Size, Small Components” once it’s available online.

Office is nearly 100 million lines of native code, and the compiler is seeing a lot of new code through this effort. That is both a blessing and a curse: a compiler bug can halt development until a fix is created, but each fix ultimately makes a product that is more robust not only for Office but for every customer using MSVC!

BUILD!

Closing

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com or through Twitter @visualc. Also, feel free to follow Cameron DaCamara on Twitter @starfreakclone.

If you encounter other problems with MSVC in VS 2019/2022 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions or bug reports, let us know through DevComm.

5 comments


  • Dwayne Robinson (Microsoft employee)

    This is great for such a large codebase to shine light on any dark corners and bring the feature to maturity. Keep it up 👍.

  • Koby Kahane

    In future posts I’d be glad to see:
    – More about mixing PCH, header units, and named modules in the same code base, as part of a gradual adoption strategy.
    – IntelliSense. For now it seems to be somewhere between brittle and severely degraded, even just in the presence of /translateInclude. In particular, it appears IntelliSense for named modules and header units only sees function parameter types from the IFC and does not preserve the parameter names from the declaration, resulting in a severely degraded experience.
    – Consuming the Windows SDK headers as header units.

  • Paulo Pinto

    Interesting to read that Office is finally adopting C++20 modules.

    So far most module presentations tend to focus on command line applications with the C++ standard library, when what Windows developers care about is C++20 modules support for Win32, MFC, ATL, C++/WinRT,…

    Error messages related to macro redefinitions, slow builds due to conflicts between PCH and modules forcing us to turn off PCH, no guidance on how stuff like _ITERATOR_DEBUG_LEVEL will work in a modules-only world, and so on.

    Naturally there are ongoing feedback items, some of them still open since modules were announced.

    Looking forward to seeing how the Office team’s experience helps get these issues clarified, and to future improvements to C++ modules support in Visual C++.

  • Antonis Ryakiotakis

    I am genuinely curious how full-compilation time is impacted when introducing header units. In my own tests (where, admittedly, CMake was used instead of invoking MSVC directly), compilation of header units plus calculation of module dependencies introduced a significant amount of latency, and the build ended up taking almost double the time. For a huge project like Office I assume there would be a great penalty. Did you mitigate that somehow?

    • Zachary Henkel (Microsoft employee)

      Stay tuned for future entries in the series!
