Thanks to the author for doing some solid work in providing data points for modules. For those like me looking for the headline metric, here it is, from the conclusion:
While the evidence shown above is pretty clear that building a software package as a module provides the claimed benefits in terms of compile time (a reduction by around 10%, see Section 5.1.1) and perhaps better code structure (Section 5.1.4), the data shown in Section 5.1.2 also make clear that the effect on compile time of downstream projects is at best unclear.
So, alas, underwhelming in this iteration; perhaps it speaks to the 'module-fication' of existing source code (deal.II dates from the '90s, I believe) rather than doing it from scratch. More work might be needed in structuring the source code into modules, as I have seen good speedups (more than 10%) with just PCH, forward declarations, etc. Good data point and rich analysis, nevertheless.
It wouldn’t surprise me if they could do better if they gave up on doing most of the work programmatically.
One part of me agrees with the following (both quotes from the paper):
> For example, putting a specific piece of code into the right place in each file (or adding necessary header files, as mentioned in Section 5.2) might take 20-30 seconds per file – but doing this for all 1051 files of deal.II then will take approximately a full day of (extremely boring) work. Similarly, individually annotating every class or function we want to export from a module is not feasible for a project of this size, even if from a conceptual perspective it would perhaps be the right thing to do.
and
> Given the size and scope of the library, it is clear that a whole-sale rewrite – or even just substantial modifications to each of its 652 header and 399 implementation files – is not feasible
but another part of me knows that spending a few days on such ‘boring’ copy-paste work often has unexpected benefits: you get to know the code better and may discover better ways to organize it.
Maybe this project is too large for that, since checking that you didn’t mess things up by building the code and running the test suite simply takes too long. But even if that is the case, isn’t that a good reason to try to get compile times down, so that working on the project becomes more enjoyable?
This is a great task for LLMs, honestly.
I’ve tried doing things like this with LLMs (DeepSeek in my case). What killed the whole thing is that they can’t be trusted to cut and paste code: when a 200-line function had been moved and slightly adjusted, a clang warning informed me that a == had been turned into a = deep inside an if statement. I only noticed because that is a fairly standard warning compilers give.
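For what it's worth, the failure mode looks roughly like this (a hypothetical reconstruction; the function and names are made up, but the warning clang emits for it is real):

    // After the LLM "moved" the function, a == deep inside a condition
    // has silently become a =.
    int countdown(int remaining, int limit) {
        if (remaining = limit) {   // meant: remaining == limit
            return 0;              // clang/gcc -Wparentheses flags an assignment
                                   // used as a condition without extra parens
        }
        return remaining - 1;
    }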
I wouldn’t mind a system where an LLM made instructions for a second system, which was a reliable code rearranging tool.
You can't trust LLMs to copy-paste code, but you can explicitly pick what should be editable, and also review the edits in a more streamlined way.
I am actually working on a GUI for just that [0]. The first problem is solved by having explicit links above functions and classes for choosing whether to include them in the context window (with an option to strip function bodies, keeping just the declarations). The second is solved by a special review mode that auto-collapses unchanged functions/classes, plus an outline window showing how many blocks were changed in each function/class/etc.
The tool is still very early in development, with tons more functionality coming (like proper deep understanding of C/C++ code structure), but the code slicing and outline-based reviewing already work just fine. It also works with DeepSeek, or any other model that can, well, complete conversations.
[0] https://codevroom.com/
Why does it need to be AI-specific? This would be valuable for reviewing human code changes as well, right?
It's not really that specific. There's actually a hidden command there for comparing the current source file against an older version (otherwise, good luck testing the diff GUI without pre-recorded test cases). If anyone's interested, it can be very easily converted into a proper feature.
That said, when you review human work, the granularity is usually different. I've actually been heavily using AI for minor refactorings like "replace these 2 variables with a struct and update all call sites", and the reviewing flow is just different. AI makes fairly predictable mistakes, and once you get the hang of it, you can spot them before you even fully read the code. Like groups of 3 edits for all call sites, and one call site with 4. Or removed comments, or variables renamed that you didn't ask to rename. Properly collapsing irrelevant parts makes a much bigger difference than with human-made edits.
> review the edits
Or just do it yourself to begin with.
It's just faster and less distracting. What is a total game-changer for me is small refactorings. Let's say you have a method that takes a boolean argument. At some point you realize you need a third value. You could replace it with an enum, but updating a handful of call sites is boring and terribly distracting.
With LLMs I can literally type "unsavedOnly => enum Scope{Unsaved, Saved, RecentlySaved (ignore for now)}" and that's it. It will replace the "bool unsavedOnly" argument with "Scope scope", update the check inside the method, and update the callers. If I had to do it by hand each time, I would have lazied out and added another bool argument, or some other kind of sloppy fix, snowballing the technical debt. But if LLMs can do all the legwork, you don't need sloppy fixes anymore. Keeping the code nice and clean no longer means a huge distraction that kicks you out of the zone.
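Spelled out, the before/after is roughly the following (a sketch with made-up stand-ins; File and collectFiles are not from any real codebase):

    #include <vector>

    struct File {};  // stand-in for the real type

    // before:
    //   std::vector<File> collectFiles(bool unsavedOnly);
    //   auto files = collectFiles(/*unsavedOnly=*/true);

    // after: the bool argument becomes an enum, and every call site is updated
    enum class Scope { Unsaved, Saved, RecentlySaved /* ignore for now */ };

    std::vector<File> collectFiles(Scope scope) {
        std::vector<File> result;
        if (scope == Scope::Unsaved) { /* ...filter accordingly... */ }
        return result;
    }

    int main() {
        auto files = collectFiles(Scope::Unsaved);  // was: collectFiles(true)
        return static_cast<int>(files.size());
    }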
This is a standard use case which is better served by a deterministic refactoring tool
I've looked into it a lot. There are deterministic refactoring tools for things like converting a for loop into a foreach, or creating a constructor from a list of fields, but they still don't cover a lot of use cases.
I tried using a refactoring tool for reordering function arguments. The problem is, clicking through various GUIs to get your point across is again too distracting. And there are still too many details. You can't say something like "the new argument should be zero for callers that ignore the return value". It's not deterministic, and each case is slightly different from the others. But LLMs handle this surprisingly well, and the mistakes they make are easy to spot.
What I'm really hoping to do some day is a "formal mode" where the LLM would write a mini-program to mutate the abstract syntax tree based on a textual refactoring request, thus guaranteeing determinism. But that's a whole new dimension of work, and there are numerous easier use cases to tackle before that.
This is a standard use case which, as far as I know, is not served by a deterministic refactoring tool.
Maybe the LLM should be trained to interface with text using some ed/vim dialect to only work on blocks of text.
If only they were reliable, instead of a dice-throwing gamble.
Oh, it’s Wolfgang. In computational math, he has a focus on research software that few others are able to match; he (or the deal.II team more generally) got an award for it at the last SIAM CSE. Generally a great writer; looking forward to reading this.
A few points
1) Modules only really help with time spent parsing, not time spent doing codegen. Actually, they can negatively impact codegen time, because they can make more definitions available for inlining/global optimizations, even in non-LTO builds. For this reason it's likely best to compare using ThinLTO in both cases.
2) when your dependencies aren't yet modularized you tend to get pretty big global module fragments, inflating both the size of your BMIs and the parsing time. Header units are supposed to partially address this but right now they are not supported in any build systems properly (except perhaps msbuild?). Also clang is pretty bad at pruning the global module fragment of unused data, which makes this worse again.
> Header units are supposed to partially address this but right now they are not supported in any build systems properly (except perhaps msbuild?).
They are supported in build2 when used with GCC (via the module mapper mechanism it offers). In fact, I would be surprised if they were supported by msbuild, provided that by "properly" we mean without having to manually specify dependencies involving header units and without imposing non-standard limitations (like the inability to use macros exported by header units to conditionally import other header units).
VC++ has supported header units for quite some time. In fact, I had to revert to global module fragments, because CMake/clang still don't have a plan for how to support header units, and I wanted my demo code to work in more than just VC++.
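For context, the two approaches look roughly like this; render_api.h and the module name are made up, and the two variants are alternative versions of the same interface file, not one file:

    // Variant A: header unit (supported by VC++; build-system support elsewhere is patchy)
    export module renderer;
    import "render_api.h";     // the header is compiled once as a header unit

    // Variant B: global module fragment (the fallback described above)
    module;
    #include "render_api.h"    // textually included, as with a classic header
    export module renderer;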
I would like to see a comparison between modules and precompiled headers. I have a suspicion that using precompiled headers could provide the same build time gains with much less work.
As per the Office team, modules are much faster, especially if you also make use of the C++ standard library as a module, available since C++23.
See VC++ devblogs and CppCon/C++Now talks from the team.
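A minimal sketch of what that looks like, assuming a C++23 toolchain and a build setup that actually provide the std module (MSVC does; elsewhere support is still settling):

    import std;

    int main() {
        std::println("built against the std module");  // std::println is C++23
        return 0;
    }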
Pre-compiled headers have only worked well on Windows, and OS/2 back in the day.
For whatever reason, UNIX compilers never had a great implementation of them.
The exception is clang's header maps, which are anyway one of the first approaches to C++ modules.
This has been puzzling me for over 3 decades. My first experience with C++ was Borland C++ for DOS. It had precompiled headers and it worked extremely well.
Then around 1995 I got access to HP-UX and native compiler there and GCC. Nobody heard about precompiled headers and people thought the only way to speed up compilation was to get access to computer with more CPUs and rely on make -j.
And then there was no interest to implement precompiled headers from free and proprietary vendors.
The only innovation was unity builds, where one includes multiple C++ sources into a single super-source. But then Google killed support for them in Chromium, claiming that with their build farm unity builds made things slower, and that supporting them in the Chromium build system was an unbearable burden.
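For reference, a unity (a.k.a. jumbo) "super-source" is just a translation unit that textually includes the others; the file names below are made up:

    // unity_render.cpp -- compiled instead of the individual files it includes
    #include "mesh.cpp"
    #include "texture.cpp"
    #include "shader.cpp"
    // Fewer translation units means headers are parsed fewer times, at the cost
    // of possible symbol collisions between files and coarser incremental rebuilds.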
Fwiw doing a unity build with thin-lto can yield lovely results. That way you still get parallel _and_ incremental codegen.
Do you have some examples?
I cannot find any reports of the speedups people get with the combination of Jumbo/Unity builds and ThinLTO.
Precompiled headers are generally better for system/third-party headers. Modules are better than PCHs for headers you own, although in some cases you may be better off not using them at all. (I say this because the benefit depends on the frequency with which you need to recompile them, the relative coupling, etc.) Depending on how heavy each one is in your codebase, and how often you modify global build settings, you may have a different experience. And neither is a substitute for keeping headers lightweight and decoupled.
From my experience, compile times ain't an issue if you pay a little attention. Precompiled headers, thoughtful forward declarations, and not abusing templates get you a long way.
We commonly work with games that come with a custom engine and tooling. Compiling everything from scratch (around 1M lines of modern C++ code) takes about 30-40 seconds on my desktop. Rebuilding one source file + linking typically comes in under 2 seconds (w/o LTO). We might get this even lower by introducing unity builds, but there's no need for that right now.
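As a small illustration of the forward-declaration part of that recipe (names are hypothetical):

    // renderer.h -- no need to #include "texture.h" just for pointers/references
    class Texture;                       // forward declaration instead of include

    class Renderer {
    public:
        void bind(const Texture& tex);   // a reference: the declaration suffices
    private:
        Texture* active_ = nullptr;      // a pointer: the full type isn't needed
    };
    // renderer.cpp then #includes "texture.h" where the full definition is used.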
40 seconds for 1M lines seems super fast; do you have a fast computer and/or did you spend a lot of time optimizing the compilation pipeline?
The modern CryEngine compiles very fast. Their trick is that they have architected everything to go through interfaces that live in very thin headers, so their headers end up very light and they don't compile the class properties over and over. But it's a shame we need to do tricks like this for compile speed, as they harm runtime performance.
Why does it ruin runtime performance? The code should be almost the same.
Because you now need to go through virtual calls on functions that don't really need to be virtual, which means a possible cache miss from loading the virtual function from the vtable, and then the impossibility of them being inlined. For example, they have an ITexture interface with a function like virtual GetSize(). If it weren't all through virtuals, that size would just be a vec2 in the class, and then it's a simple load that gets inlined.
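Roughly, the two styles being contrasted look like this (names follow the comment; the actual CryEngine code of course differs):

    struct Vec2 { float x = 0, y = 0; };

    // "Thin header" style: callers only ever see an interface.
    struct ITexture {
        virtual ~ITexture() = default;
        virtual Vec2 GetSize() const = 0;       // virtual call: possible vtable
                                                // cache miss, rarely inlined
    };

    // Concrete style: the size is a plain member and the getter inlines away.
    class Texture {
    public:
        Vec2 GetSize() const { return size_; }  // a simple load, easily inlined
    private:
        Vec2 size_{};
    };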
At least on clang with LTO, with bitcode variant, that should be possible to devirtualize, assuming most of those interfaces only have a single implementation.
Ah yes, this kind of interface. OK, indeed this doesn't seem like a useful layer when running the program. Maybe the compilers could optimize this, though.
In my experience, as long as there's only a single implementation, devirtualization works well, and can even inline the functions. But you need to pass something along the lines of "-fwhole-program-vtables -fstrict-vtable-pointers" + LTO. Of course the vtable pointer is still present in the object. So I personally only use the aforementioned "thin headers" at a system level (IRenderer), rather than for each individual object (ITexture).
They can sometimes
https://quuxplusone.github.io/blog/2021/02/15/devirtualizati...
In addition to what everyone else has said, it also makes it difficult to allocate the type on the stack. Even if you do allow it, you'll at least need a probe.
We didn't create this code base ourselves, we are just working with it. I'd assume the original developers paid attention to compile times during development and introduced forward declarations whenever things got out of hand.
My computer is fast (AMD Ryzen 9 7950X) and the code is stored on an NVMe SSD. But there certainly are projects with fewer lines of code that take substantially longer to compile.
So, clang's modules are quite similar to clang's precompiled headers, especially the "chained" PCHs. With PCH you have to wait on the serial PCH compilation step before you can get any parallelism; with modules you can compile each part of the "PCH" in parallel, and anything using some subset of your dependencies can get started without waiting on things it doesn't use.
Header units are basically chained PCHs. Sadly they are hard to build correctly at the moment.
To be fair, C++’s modules make no sense, just like their namespaces that span multiple translation units.
It’s just more heavy clunky abstractions for the sake of abstractions.
Modules are an attempt to make part of the language what currently requires a convention:
- A component is a collection of related code.
- The component has an interface and an implementation.
- The interface is a header file (e.g. *.h) that is included (but at most once!) using a preprocessor directive in each dependent component.
- The header file contains only declarations, templates, and explicitly inline definitions.
- The implementation is one or more source files (e.g. *.cpp) that provide the definitions for what is declared in the header, and other unexposed implementation details.
- Component implementations are compiled separately (usually).
- The linker finds compiled definitions for everything a component depends upon, transitively, to produce the resulting program/dll.
So much can go wrong! If only there were a notion of components in the language itself. This way we could just write what we mean ("this is a component, here is what it exports, here are the definitions, here is what it imports"). Then compiler toolchains could implement it however they like, and hopefully optimize it.
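For comparison, a minimal sketch of that idea expressed directly with C++20 modules (module and function names are made up):

    // geometry.cppm -- module interface unit: "here is what it exports"
    export module geometry;
    export double circle_area(double radius);

    // geometry.cpp -- module implementation unit: "here are the definitions"
    module geometry;
    double circle_area(double radius) {
        return 3.141592653589793 * radius * radius;
    }

    // main.cpp -- a dependent component: "here is what it imports"
    import geometry;
    int main() {
        return circle_area(1.0) > 3.0 ? 0 : 1;
    }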
It makes lots of sense to anyone used to large scale software development.
It is no accident that Ada, Java, .NET, and oldies like Delphi, Eiffel, Modula-2 and Modula-3 have similar approaches.
Even the way D and Python modules and packages work, or the whole crates and modules approach in Rust.
Naturally, folks not used to web scale don't get these kinds of features.
The code block styling is less than ideal.
I really wonder whether LLMs are helpful in this case. This kind of task should be the forte of LLMs: well-defined syntax and requirements, abundant training material available, and outputs that are verifiable and validatable.
Perhaps we should use LLMs to convert all the legacy programs written in Fortran or COBOL into modern languages.
You are far from the first person to have this very, very bad idea.
No, LLMs are not good at refactoring.