orlp 2 days ago

Every developer I've talked to has had the same experience with compilation caches as I have: they're great, until one day you waste a couple of hours chasing a bug caused by a stale cache. From that point on your trust is shattered, and there's always a little voice in the back of your head when debugging something that says "could this be caused by a stale cache?". So you turn the cache off again for peace of mind.

meisel 5 hours ago

Very interesting stuff. However, for my day-to-day work, I'm in a large C++ code base where most of the code has to be in headers due to templating. The bottlenecks are, very roughly:

- Header parsing (40% of time)

- Template instantiation (40% of time)

- Backend (20% of time)

For my use case, it seems this cache would only kick in once 80% of the work has already been done. Ccache, on the other hand, doesn't require any of that work to be done. As a side note, template instantiation caching is a very interesting strategy, but today's compilers don't use it (there was a commercially sold compiler a while back that did have it, though).

  • aengelke 3 hours ago

    Template instantiation caching is likely to help -- in an unoptimized LLVM build, I found that 40-50% of the compiled code at the object-file level is discarded at link time as redundant.
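
    Very roughly, the redundancy looks like this (a toy example, not the LLVM build itself): every TU that uses the same instantiation emits its own weak/comdat copy, and the linker throws all but one away.

      // twice.h -- a header-only function template
      #pragma once
      template <class T>
      T twice(T x) { return x + x; }

      // a.cpp
      #include "twice.h"
      int fa() { return twice(21); }   // instantiates and emits twice<int>

      // b.cpp
      #include "twice.h"
      int fb() { return twice(40); }   // emits its own identical copy of twice<int>

      // In an unoptimized build, both a.o and b.o typically carry a weak
      // definition of twice<int> (nm -C a.o b.o); the linker keeps one and
      // discards the other, so the backend work for the duplicate was wasted.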

    Another thing I'd consider interesting is parse caching, from tokens to AST. Most headers don't change, so even when a TU needs to be recompiled, most parts of the AST could be reused. (Some kind of more clever and transparent precompiled headers.) This would likely need some changes to the AST data structures for fast serialization and loading/inserting. And that makes me think that maybe the textbook approach of generating an AST is a bad idea if we care about fast compilation.
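
    Sketched out, it could look something like this (a toy sketch; hashTokens and parseHeader are made-up stand-ins for what the frontend would provide, and the genuinely hard part -- fast serialization and splicing the subtree into another TU's AST -- is exactly what it leaves out):

      // Toy sketch: reuse a header's parsed AST across recompiles, keyed on a
      // hash of its token stream. All names here are made up.
      #include <cstdint>
      #include <functional>
      #include <iostream>
      #include <memory>
      #include <string>
      #include <unordered_map>

      struct Ast { std::string summary; };   // stand-in for a real AST subtree

      std::uint64_t hashTokens(const std::string& text) {         // placeholder
          return std::hash<std::string>{}(text);
      }
      std::shared_ptr<Ast> parseHeader(const std::string& text) { // placeholder
          return std::make_shared<Ast>(Ast{"parsed " + std::to_string(text.size()) + " bytes"});
      }

      class HeaderAstCache {
          std::unordered_map<std::uint64_t, std::shared_ptr<Ast>> cache_;
      public:
          std::shared_ptr<Ast> get(const std::string& headerText) {
              auto key = hashTokens(headerText);
              if (auto it = cache_.find(key); it != cache_.end())
                  return it->second;                 // header unchanged: reuse subtree
              auto ast = parseHeader(headerText);    // changed or first build: parse once
              cache_.emplace(key, ast);
              return ast;
          }
      };

      int main() {
          HeaderAstCache cache;
          auto a = cache.get("struct Foo { int x; };");
          auto b = cache.get("struct Foo { int x; };");   // second TU: cache hit
          std::cout << (a == b ? "reused\n" : "reparsed\n");
      }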

    Tangentially, I'm astonished that they claim correctness while a large amount of the IR is inadequately (if at all) captured in the hash (comdat, symbol visibility, aliases, constant exprs, block address, calling convention/attributes for indirect calls, phi nodes, fast-math flags, GEP type, ...). I'm also a bit annoyed, because this is the type of research that is very sloppily implemented, only evaluates projects where compile time is not a big problem and then achieves only small absolute savings, and papers over inherent difficulties (here: capturing the IR, parse time) that make this unlikely to be used in practice.
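
    To make the correctness worry concrete, here's a hypothetical case using just one of those fields, symbol visibility: the two definitions below have identical bodies, so a hash that ignores visibility gives them the same key, yet the objects they should produce are different.

      // exported.cpp (one library)
      int answer() { return 42; }

      // hidden.cpp (another library, deliberately not exporting the symbol)
      __attribute__((visibility("hidden")))
      int answer() { return 42; }

      // If the cache reuses the object compiled for exported.cpp when building
      // hidden.cpp -- same body, visibility not part of the hash -- the second
      // library silently exports a symbol it was never supposed to export.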

    • meisel 2 hours ago

      I knew that name looked familiar -- I thought about mentioning tpde here :)

      Interesting to hear that the hash misses so much of the IR. I'm also surprised that it could provide much gain over hashing the preprocessed output - maybe my workflow is different from others', but typically a change to the preprocessed output implies a change to the IR (e.g., it's a functional change and not just a variable rename or something). Otherwise, why would I recompile it?

      Parse caching does sound interesting. Also, a lot of stuff that makes its way into the preprocessed output doesn't end up getting used (perhaps related to the 40-50% figure you gave). Lazy parsing could be helpful - just search for structural chars to determine entity start/stop ranges, add the names to a set, and then do the actual parsing lazily.
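
      A toy sketch of that idea (mine, very simplified: it ignores strings, comments, templates and the preprocessor, which a real implementation couldn't):

        // Scan for top-level { ... } ranges and the identifier in front of them,
        // record name -> byte range, and only parse a body when it's needed.
        #include <cctype>
        #include <cstddef>
        #include <iostream>
        #include <map>
        #include <string>

        struct Range { std::size_t begin, end; };   // byte range of an entity's body

        std::map<std::string, Range> scanTopLevel(const std::string& src) {
            std::map<std::string, Range> entities;
            int depth = 0;
            std::size_t bodyStart = 0;
            std::string lastIdent, pending;
            for (std::size_t i = 0; i < src.size(); ++i) {
                char c = src[i];
                if (std::isalnum((unsigned char)c) || c == '_') { pending += c; continue; }
                if (!pending.empty()) {
                    if (depth == 0) lastIdent = pending;   // only names seen at top level
                    pending.clear();
                }
                if (c == '{' && depth++ == 0) bodyStart = i;   // entity body begins
                else if (c == '}' && --depth == 0)             // body ends: remember it
                    entities[lastIdent] = {bodyStart, i + 1};
            }
            return entities;
        }

        int main() {
            std::string src = "struct Foo { int x; };  void bar() { return; }";
            for (const auto& [name, r] : scanTopLevel(src))
                std::cout << name << " -> [" << r.begin << ", " << r.end << ")\n";
            // Actually parsing a recorded range would happen lazily, on lookup.
        }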

      • aengelke 33 minutes ago

        > but typically a change to the preprocessed output implies a change to the IR (e.g., it's a functional change and not just a variable name change or something). Otherwise, why would I recompile it?

        For C++, this can happen more often than you'd expect, e.g. when you change the implementation of an inline function or a non-instantiated template in a header, and that function or template isn't actually used by the compilation unit.
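
        A hypothetical example (made-up files):

          // helpers.h -- an inline function this TU includes but never uses
          #pragma once
          inline int unusedHelper(int x) { return x + 1; }   // edit this body...

          // user.cpp
          #include "helpers.h"
          int entry() { return 7; }   // ...and this TU still compiles to the same code

          // Editing unusedHelper's body changes the preprocessed output of user.cpp,
          // so a preprocessor-level cache like ccache has to recompile it. But since
          // the TU never uses the function, no code is emitted for it, the IR is
          // unchanged, and an IR-level cache could still hit.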