Some time ago, I gave as an example of the C++ concept of ill-formed no diagnostic required (IFNDR) the case of an inline function being compiled with different definitions by different .cpp files. The standard requires that all definitions of an inline function be identical. (There are other things that result in IFNDR, but let’s focus on the inline functions for now.) If you violate this rule, then the compiler is not required to tell you that you messed up, and the resulting program is not required to behave in any particular manner.
In practice, these problems are difficult for the compiler to diagnose because it requires matching up entities across translation units (.cpp files), and traditionally, the compiler operates only on a single translation unit, so it cannot “see” definitions from other translation units.
A common proposal for making this problem diagnosable, at least for the case of inline functions, is to defer the detection to the linker. The compiler could add a tag to all inline functions that says “must be identical”. When the linker finds two copies of the same thing, then before discarding one of them, it verifies that the two copies are identical.
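The linker-side check could be as simple as a byte comparison before discarding a duplicate. Here is a minimal sketch of that comparison (not how any particular linker is implemented; `can_fold` is a made-up name), treating each copy of the function as a blob of generated bytes:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Hypothetical sketch: before folding two copies of a "must be identical"
// function, verify that they really are identical, byte for byte.
bool can_fold(const std::vector<unsigned char>& a,
              const std::vector<unsigned char>& b)
{
    return a.size() == b.size() &&
           std::memcmp(a.data(), b.data(), a.size()) == 0;
}
```

Two identical blobs fold; a single differing byte (say, a different register encoding) is reported as a mismatch, which is exactly the fragility discussed below.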
This solution does come at some cost. The compiler is now required to emit code for all inline functions. Even if the compiler decides to inline them at their call sites, it still has to generate an “out of line” version for detection purposes.
Now, generating an “out of line” version for inline functions is actually fairly normal. Functions with external linkage get code generated for them in case their address is taken by another translation unit. So compilers are doing this for many categories of functions anyway.
The problem is that byte-for-byte comparisons are fragile. The slightest perturbation could change the results. If you compile the function with different optimization levels or different compiler switches, you may get different results. You wouldn’t be able to link libraries compiled by different compiler vendors, or even by different versions of the same compiler, because the code generation may not match perfectly.
Requiring every instance of a function to result in identical code generation is an even stronger requirement than reproducibility. Reproducible builds (also known as deterministic builds) produce identical output from identical input: If you change no lines of code, then the result is the same. This means that decisions like choosing which register to use for a particular purpose cannot be left to external factors like the amount of memory available or a random number generator.
Requiring that every compiled version of an inline function be identical is an even stronger requirement. For example, suppose a compiler assigns each named entity an increasing sequential number in the order they are first encountered in a translation unit.
```cpp
int a; // 1
int b; // 2

inline int plus_a_and_b( // 3
    int v) { // 4
    return v + a + b;
}
```
The compiler might use this “identifier number” as a way to ensure reproducible outputs: When the compiler has to choose between two things, it always picks the one with a lower identifier number. If you compile the same project twice, all the identifiers will appear in the same order both times, so the decisions will always match.
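A minimal sketch of such a tie-breaking scheme (the `IdTable` and `pick` names are invented for illustration): each name gets a sequential id in order of first appearance, and ties are broken by picking the lower id.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical sketch: assign each named entity an increasing sequential
// number in the order it is first encountered in a translation unit.
struct IdTable {
    std::unordered_map<std::string, int> ids;
    int next = 1;
    int id(const std::string& name) {
        auto it = ids.find(name);
        if (it != ids.end()) return it->second;
        return ids[name] = next++;
    }
};

// When the compiler must choose between two entities (say, which one to
// load into the first available register), pick the one encountered first.
std::string pick(IdTable& t, const std::string& x, const std::string& y) {
    return t.id(x) < t.id(y) ? x : y;
}
```

A translation unit that declares `a` before `b` always picks `a`; one that declares `b` first always picks `b`. Compiling the same project twice gives the same answer both times, so the build is reproducible, but two translation units that declare the variables in different orders diverge, which is the problem illustrated next.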
But if the same inline function can appear in multiple translation units, there may be dependencies on things outside the function body.
```cpp
int b; // 1
int a; // 2

inline int plus_a_and_b( // 3
    int v) { // 4
    return v + a + b;
}
```
This time, the identifier numbers for `a` and `b` are swapped, and this might alter the code generation of `plus_a_and_b` because the compiler decided to use opposite registers for loading `a` and `b`.
```
; version 1
    ldr r1, [a] ; load both values early
    ldr r2, [b] ; to shorten dependency chain
    add r0, r0, r1
    add r0, r0, r2
    ret

; version 2
    ldr r2, [a] ; load both values early
    ldr r1, [b] ; to shorten dependency chain
    add r0, r0, r2
    add r0, r0, r1
    ret
```
These two implementations of the `plus_a_and_b` function are functionally equivalent but are not identical at the byte level. And determining whether two functions are functionally equivalent is undecidable, thanks to Rice’s theorem.
You might say, “Okay, so instead of requiring the two versions to be byte-for-byte identical, just make the compiler record the function bodies, and compare the function bodies.”
Now, you can’t literally compare the function bodies, because the same function body can mean different things in different contexts.
```cpp
// Version 1
int a;

namespace Sample
{
    inline long get_a() { return a; }
}
```

```cpp
// Version 2
int a;

namespace Sample
{
    long a;
    inline long get_a() { return a; }
}
```
In the first version, `get_a` returns the value of `::a`, whereas the second version returns the value of `::Sample::a`. The definitions of `get_a` are identical at the source code level, but the symbol `a` means something different in the two versions.
So really, the compiler would have to record the function bodies, as well as information about what each of the identifiers in the function bodies resolves to.
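As a sketch, the record the compiler keeps might pair the body’s token sequence with a map of what each identifier resolved to; two definitions match only if both parts agree. The `FunctionRecord` type and field names here are hypothetical:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: an inline function definition is recorded as the
// tokens of its body plus the fully qualified name each identifier
// resolved to.
struct FunctionRecord {
    std::vector<std::string> tokens;             // the body as written
    std::map<std::string, std::string> resolved; // identifier -> what it names

    // No constructors: this stays an aggregate so it can be brace-initialized.
};

bool same_definition(const FunctionRecord& x, const FunctionRecord& y) {
    return x.tokens == y.tokens && x.resolved == y.resolved;
}
```

For the `get_a` example above, both versions record the same tokens, but one resolves `a` to `::a` and the other to `::Sample::a`, so the comparison correctly reports a mismatch even though the source text is identical.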
But wait, even if the linker verifies all this information, you can introduce a mismatch after linking has completed!
If the code is compiled into a reusable module (like a DLL on Windows or a shared library on Unix-based systems), there may be a client that was compiled with an incompatible definition of an inline function. The mismatch wasn’t detected when that client was compiled, because they used a version of the reusable module with some other definition. It’s only when the two modules (yours and theirs) meet on a running system that a mismatch occurs.
- You compile the DLL with an inline function that has some definition X.
- Client compiles their module and links to an earlier version of your DLL that has an inline function with a different definition Y.
- Client module runs and loads your DLL that uses definition X.
So now you are asking the operating system to participate in the detection: Whenever a module loads, verify that all of its types and inline entities match those of all other modules loaded into the same process. It also means that the linker cannot strip out all the information about inline functions: It needs to keep that information in the final product, so that the operating system can do runtime validation.
But you really don’t want this. It would mean that all DLLs loaded by a process must agree on the definitions of all entities, even if the entities never interact.
- `ONE.DLL` has an internal class called `internal::MyObject`. This class is never shared with any other DLLs.
- `TWO.DLL` also has an internal class called `internal::MyObject`. This class is never shared with any other DLLs.

If `ONE.DLL` and `TWO.DLL` are both loaded into the same process, the validator would complain, “Mismatch: `ONE!internal::MyObject` and `TWO!internal::MyObject` have different definitions.” Yes, they have different definitions, but they are both internal to their respective modules, so the mismatch is immaterial. You would need some sort of naming policy so that DLLs don’t accidentally have name collisions on their internal types. For example, maybe all internal names have to be placed in a unique namespace. (Good luck with languages that don’t support namespaces.)
Furthermore, you often want a mismatch. If `ONE.DLL` and `TWO.DLL` are both COM providers, then they will both export a function called `DllGetClassObject`, and those functions will necessarily do different things in the different DLLs. Or maybe `ONE.DLL` was built with boost version 1.84.0, but `TWO.DLL` was built with boost version 1.86.0. Neither of them passes boost classes across the ABI boundary, so the fact that they each used different versions of boost is just an implementation detail and shouldn’t be a reason for declaring a mismatch.
It sounds like we’d need an annotation on each entity to indicate whether it should participate in cross-module consistency checking. Objects that cross the module boundary would be annotated as “ensure consistency”, and objects that are internal to the module would be annotated as “don’t bother”.
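A sketch of how a loader-side registry might honor such annotations (all of the names here are invented for illustration): entities marked “don’t bother” can collide freely, and a conflict is reported only when both copies opted into checking and their definitions differ.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical annotation: does this entity participate in cross-module
// consistency checking?
enum class Check { Ensure, DontBother };

struct Registry {
    // entity name -> (hash/fingerprint of its definition, checking policy)
    std::map<std::string, std::pair<std::string, Check>> entities;

    // Called as each module loads. Returns false if this entity conflicts
    // with one already registered by another module.
    bool register_entity(const std::string& name,
                         const std::string& definition,
                         Check policy) {
        auto it = entities.find(name);
        if (it != entities.end() &&
            policy == Check::Ensure &&
            it->second.second == Check::Ensure &&
            it->second.first != definition) {
            return false; // genuine cross-module mismatch
        }
        entities.emplace(name, std::make_pair(definition, policy));
        return true;
    }
};
```

Under this scheme, the two unrelated `internal::MyObject` classes coexist because neither opted in, while a shared type that both modules marked “ensure consistency” still gets caught if the definitions drift apart.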
I’m not saying that the problem can’t be solved. But it needs to be scoped more clearly, and any solution is not going to be trivial.
Once compiled into a DLL/EXE, I don’t think C++ can have a say about what happens when a process loads two modules, because that’s beyond the level of abstraction the C++ standard works at.
Consider a simple case where a C++ program and a manually-written DLL are dynamically linked; clearly the C++ standard would have no say about the interaction. It’s the ABI that governs. Now suppose two DLLs are compiled using the same version of the same compiler,...
On a side note, I recently realized the following common pattern in COM might be undefined behavior:
<code>
because `ppv` is `void **`, which cannot be used to access the `IUnknown *`. However, I am not sure whether this is indeed undefined, or whether there might be a clause that permits it. A correct version is
<code>
Of course, I have no doubt that it works in real life (in the same module) and it definitely works when `ppv`...
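[For reference, the stripped code in the comment above presumably resembled the classic `QueryInterface` pattern. The sketch below uses a minimal stand-in for `IUnknown` rather than the real COM interface, so that both forms can be shown side by side; the method names are invented.]

```cpp
#include <cassert>

// Minimal stand-in for the COM IUnknown interface (not the real one).
struct IUnknown {
    virtual ~IUnknown() = default;
};

struct MyObject : IUnknown {
    // The questionable pattern: write an IUnknown* through ppv by casting
    // it to IUnknown**. This accesses the void* object through an lvalue
    // of type IUnknown*, which is the aliasing concern the comment raises.
    void query_questionable(void** ppv) {
        *reinterpret_cast<IUnknown**>(ppv) = this;
    }

    // The correct version: convert to IUnknown* first, then store the
    // result through ppv with its proper type (IUnknown* converts
    // implicitly to void*).
    void query_correct(void** ppv) {
        IUnknown* punk = this;
        *ppv = punk;
    }
};
```

In practice both forms store the same pointer value; the difference is purely a question of what the abstract machine permits.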
Even if it is disallowed by C++, Windows imposes additional constraints on an implementation, such as “all pointers are 64 bits wide on 64-bit systems”.