on C++
Kinda? Maybe? It’s complicated.
Every 6 to 12 months, I try to use C++ modules, run into a hurdle, maybe rant about it on social media, then move on to something else. Despite watching multiple talks on the topic, there’s always something that gets in the way. My biggest success so far has been managing to use the VulkanHpp module in my renderer library, after which things started breaking down. But after making some progress again last week (and running into new hurdles), I feel like I have enough to make a proper summary.
As a disclaimer: I have shared some of my conclusions on the state of modules with fellow C++ programmers, and not all of them agreed. However, I believe that modules suffer from a strong “expert bias” problem that makes a lot of counterpoints read like “works on my machine” to people like me who haven’t had much exposure to them and didn’t follow the standardization process closely. I do not presume to be a subject matter expert on the topic, but I know build systems, and I believe I have spent much more time fiddling with modules in my projects than the average C++ programmer, so I think this piece can speak for the average enthusiast user (or would-be user, more like).
Oh and I mostly focus on MSVC. I might throw a quick mention of Clang or GCC but my experience is mostly on Windows.
The easy parts
Contrary to what you may have heard, the simple use cases are fairly easy to make work, provided you stay within a strict set of limitations. For example, as I mentioned before, I used the module provided by VulkanHpp in my rendering library and it works just fine. Or more precisely, it used to work, until something changed upstream and ran into the set of limitations I alluded to. We’ll get back to the details later. In the meantime, here’s what it looks like in my CMake:
add_library( VulkanHppModule )
target_sources( VulkanHppModule PRIVATE
    FILE_SET CXX_MODULES
    BASE_DIRS ${Vulkan_INCLUDE_DIR}
    FILES ${Vulkan_INCLUDE_DIR}/vulkan/vulkan.cppm
)
target_compile_definitions( VulkanHppModule PUBLIC
    VULKAN_HPP_NO_SETTERS
    VULKAN_HPP_NO_CONSTRUCTORS
)
target_link_libraries( VulkanHppModule PUBLIC Vulkan::Vulkan )
I didn’t even have to come up with those lines myself; they come straight from the project’s documentation. The only thing I really needed to customize was the compile definitions (in this case I disabled setters and constructors to rely on C++20 designated initializers instead).
And there it worked: I could just do import vulkan_hpp in my renderer library and use Vulkan’s C++ bindings. Had I not managed to make it work, I would probably have gone back to Vulkan’s C API with my own custom RAII wrappers, because the compile times with a standard #include were atrocious. This also worked recursively (again, with limitations to be explained later), meaning my renderer library could have the import of VulkanHpp in its public headers, and it would pass along just fine when included in my projects that do #include <renderer/renderer.h>.
You may have read that CMake takes a bit of hacking to make modules work, and that you have to use esoteric flags such as CMAKE_CXX_SCAN_FOR_MODULES, CMAKE_EXPERIMENTAL_CXX_MODULE_DYNDEP or CMAKE_EXPERIMENTAL_CXX_MODULE_CMAKE_API, but none of those are needed anymore, provided you use a recent version of CMake (ideally 4.x, but the right defaults kick in starting with 3.28).
So there it was: with little work, I had turned the agonizing 9 seconds it took to include VulkanHpp into a negligible number of milliseconds. I consider this a solid win. Now comes the trouble…
IntelliNonSense
So here’s a fun fact for you: you can find meeting minutes from SG15 dating from 2019 in which Microsoft claims to have modules working just fine internally for the Edge team. And yet, if you open a project that uses modules in Visual Studio 2026, you are greeted with this amazing message:
C++ IntelliSense support for C++20 Modules is currently experimental.
Yup. It’s been 7 years since, and they still can’t get IntelliSense to properly parse import directives. I know that the language server is based on EDG and not VC++, but frankly I don’t care. This is a company worth almost 3 trillion dollars at the time of writing telling us that it can’t make a feature work years after pushing for modules to be standardized on the strength of its in-house success story. I don’t know if they exaggerated their claims at the time, or if they underfunded the Visual Studio team since, but you can’t tell me 7 years wasn’t enough to make syntax highlighting work with modules. And if it wasn’t, then maybe there was something deeply wrong with the proposal, and the committee should have asked to see the receipts before voting yes.
Anyways, here’s how you solve it:
#if defined( __INTELLISENSE__ )
#include <vulkan/vulkan.hpp>
#include <vulkan/vulkan_raii.hpp>
#else
import vulkan_hpp;
#endif
That keeps your compiler (and iteration time) on the module fast path, and then IntelliSense can chug along parsing header files in the background so you get highlighting and autocompletion. Is it a hack? Absolutely. But it’s a hack I’ve been using for 6 months that allows me to focus on something else.
And with that out of the way, we can talk about the real problem.
Modules are viral all-or-nothing
I have hinted in previous sections that modules work if you stick to some strict limitations. Trouble is, those aren’t small limitations. Mainly, modules are kind of an all-or-nothing affair: once you start using a library through import directives, you can’t have the same translation unit also pull it in through #includes. And that quickly becomes a problem.
Here’s the simplest example that explains it:
// Works, obviously
#include <array>
// Works even if <array> is included before and part of the std module
import std;
// Error, will yield a million "xxx already declared" failures
#include <utility>
Simply put, a library can be both imported and included as long as the #include comes first and the import comes second. I’m still not sure whether this is mandated by the standard or an implementation limitation, but it’s something I’ve observed directly on MSVC and heard mentioned by others too.
In my previous use case this was fine, because VulkanHpp is only imported by my renderer library, doesn’t import anything itself, and isn’t used anywhere else in my build tree. Sadly, things took a turn for the worse when a recent release started pulling in the standard library with import std. Suddenly there’s a transitive dependency that imports a very common library, so now I have to make sure my import vulkan_hpp directive comes after every other #include of the standard library. And since vulkan_hpp is used publicly in my renderer library, my renderer library now also needs to be imported last in every translation unit. Otherwise I get a billion redeclaration/redefinition compile errors.
“Just move to modules”
The preferred solution, I’m told, is to move everything to modules. Or at least, if one library starts doing import std, to patch every other library I use so it only does import std too. In the case of my toy project, that would mean at least TBB and fastgltf. Ironically, it doesn’t seem to impact C++ libraries that only rely on the C standard library (I believe it would if I did import std.compat?). It’s a sad affair that this vindicates library authors who refuse to use the STL.
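For context, here is my understanding of the split between the two standard library modules (standardized in C++23 via P2465, though MSVC also makes them available in C++20 mode), which would explain the observation above:

```cpp
import std;        // exports only names in namespace std: std::vector, std::fopen, std::size_t, ...
                   // the global-namespace C names (::fopen, ::size_t) are untouched,
                   // which is why libraries built on <stdio.h>-style headers are unaffected

import std.compat; // everything `std` exports, plus the global-namespace C names:
                   // ::fopen, ::size_t, ::printf, ...
```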
Note that I said patch, not just flip a switch. Because despite C++20 being 6 years old, barely any C++ library comes with a module definition. Boost only offers modules for a select few libraries. The claims I read of Catch2 providing a module seem to have been an AI hallucination. The only big one I could find is fmt, which is a nice library, but honestly if you have C++20 support you already have <format> available anyway.
And of course, each library that decides to support modules needs to provide some form of dual build, because not all of its clients use modules yet. And for each of its own dependencies, it needs to decide whether to pull them through #include, through import, or let the user configure it (my current opinion is that the module version should always use import and not provide a switch, to avoid combinatorial hell).
Supporting dual-build
Next, I tried supporting a dual build for my renderer lib, and it’s not a trivial affair.
First, as suggested before you need to toggle includes to imports when building/parsing in module mode. That usually means adding a define and doing a little dance around each #include directive:
#ifndef RENDERER_MODULE
#include <array>
#include <utility>
#include <vector>
#else
import std;
#endif
For libraries that are a single header-only implementation this isn’t the worst, but for more complex libraries made of multiple .cpp and .h files it becomes more of an easter-egg hunt. In my current POC branch I ended up ripping out all the #include directives and putting them in one file that I can toggle between the module and non-module paths. This makes the build slower without modules, because now all my translation units pull in a bunch of headers they don’t individually need (looking at you, <filesystem> 😠).
Then, we have to handle the fact that module declarations cannot be #ifdef’d out. By design. I’m not certain why that is, but it is a hard error as per the standard. Which means that if you have a .cpp implementation file, you cannot use #ifdef and friends to conditionally declare it as part of a module. That leaves three options: a hack, another hack, or always building your library as a module.
Let’s start with the first hack. I don’t like it, but it kind of shows the futility of trying to restrict #ifdef in the spec. Because that restriction doesn’t apply to #include. So we can just bypass it by duplicating every implementation file:
// device_module.cpp
module renderer;
#define RENDERER_MODULE
#include <device.cpp>
This implies using a different set of .cpp files depending on whether you build as a module or not, plus an extra glue file for every implementation file, but it works. Alternatively, a suggestion by Daniela Engert was to discard the separate compilation of the .cpp files entirely and instead pull them all into the module :private; section of the module definition with #include directives:
export module renderer;
export {
#include <renderer/renderer.h>
}
module :private;
#include <renderer/bindless.cpp>
#include <renderer/buffer.cpp>
#include <renderer/command_buffer.cpp>
#include <renderer/device.cpp>
// ...
Some of my readers may object: “but that would put all the implementation in the same translation unit, like unity builds”. That would be correct. Which is why I would rather not use that solution either. I have had to deal with unity builds in the past and still consider them a hack that breaks the traditional expectations around static and namespace {}.
Almost Always Modules?
Instead, I’ve opted to always build my library as a module. That way, I can put module declarations in my .cpp files without issues. The trick is extern "C++" language linkage (which long predates C++20, but becomes newly useful here). In the same way that names declared extern "C" use backward-compatible C linkage and name mangling, wrapping export {} declarations in extern "C++" generates symbols with an ABI compatible with #include declarations (by default, module-owned symbols are decorated with their module name, which makes them impossible for the linker to find in non-module contexts).
export module renderer;
// Don't mangle as a module for backward compatibility with non modules includes
extern "C++"
{
    export {
        #include "renderer/renderer.h"
    }
}
That way, the library doesn’t need to build differently for consumers using import vs #include. This is obviously only an issue for libraries that produce exported symbols. Header-only libraries do not need to bother with it.
Having only one build means the library no longer exercises its own #include variant. You are advised to keep a few tests around that consume the library through both the import and the #include paths for as long as you support both (which I suspect is gonna be a while given modules’ adoption rate).
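A minimal way to keep both paths honest in CMake (target and file names below are placeholders, not from my actual project):

```cmake
# Two tiny smoke tests: one consumes the library through `import renderer;`,
# the other through `#include <renderer/renderer.h>`.
add_executable( renderer_test_import test_import.cpp )
add_executable( renderer_test_include test_include.cpp )
target_link_libraries( renderer_test_import PRIVATE renderer )
target_link_libraries( renderer_test_include PRIVATE renderer )
add_test( NAME import_path COMMAND renderer_test_import )
add_test( NAME include_path COMMAND renderer_test_include )
```

Running both under CTest on every commit catches the case where a change breaks one consumption path but not the other.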
So, should I use modules?
There’s a big upfront cost to switching to modules. Moving all your dependencies to modules is a fair amount of work, and there’s sadly little support from library maintainers at the moment. Even the people who report using modules seem to be relying on forks of their third-party libraries. I do not know if they didn’t feel like contributing/maintaining patches, or if they submitted patches that got rejected, but this isn’t very encouraging. Polls from Meeting C++ do not show a high adoption rate for a 6-year-old feature. It might be a chicken-and-egg problem (no one switches to modules due to lack of library support, library maintainers don’t bother due to lack of modules users).
I am considering contributing patches for the libraries I use, but I admit that even after writing this article I still feel a bit of impostor syndrome and wonder whether my contributions would be any good. There’s so little expertise, experience, and literature around modules out there that it’s not obvious what is and isn’t good practice. I figured out the point of the new keywords mostly by trial and error, which makes me suspect most projects won’t have a qualified reviewer to judge whether a proposed patch is any good.
In the meantime, the easy way out is to do what I did initially with VulkanHpp: keep module usage to libraries that are heavy to parse but easy to keep last in the #include/import order, for a quick win. Sadly, that breaks down quickly at scale due to the viral factor.
Addendum: Jens Weller mentioned to me the existence of Are We Modules Yet?, a website that lists which projects provide modules. Funny enough, fastgltf does provide a module; it’s just not built or installed by vcpkg, which is why I didn’t see it. I think libraries should always add module definitions to their install list rather than put them behind a build setting, so it doesn’t become a package-manager problem.