The thing is, it's not necessarily true in this case.
While C was intended to run as close to bare metal as possible, and 99% of current implementations do, that doesn't have to be the case. The C specification describes an abstract machine, and uses abstract concepts, with explicit "as-if" rules saying that implementations may, in many cases, do whatever they want, providing that conforming code runs as if it would on a naive implementation.
So there's no reason at all that, for example, pointers have to be implemented as actual bare addresses in a virtual address space which cannot be bounds checked. It should be perfectly possible to create a new implementation, with a new ABI, which defines pointers as a checked type such that all accesses through a pointer are accurately bounds checked, with a guaranteed "virtual" segfault happening whenever an invalid read or write would have occurred.
Sure, it's not ABI-compatible (by default) with current bare-metal C ABIs, and would require shims to work with bare-metal C libraries - but that's no different from Rust and other new languages. Sure, it's not quite as fast as bare-metal C implementations due to enforced bounds checking, but it's not going to be slower than Rust and similar languages which do the same job.
And the advantage would be, we wouldn't need to rewrite all our code. We could just recompile all the existing C code we already have to this new safe ABI, rather than having to rewrite everything from scratch in some new language!
Sure, C isn't the nicest language to use. But we already have plenty of existing code that uses it. Why don't we just write a new compiler back-end and take advantage of all that code in a safe manner?
You can't bounds check a pointer begotten from &arr[i]. It simply doesn't carry enough information. Moreover, there might be existing C programs that rely on out of bounds access (where they know that the out of bounds access falls into safe memory). So there is no way to implement a C virtual machine with bounds-checking semantics that is fully compatible with all existing C code.
Sure you can. If, for (bloated, demonstration-only) example, your opaque managed pointer type is actually this under the hood:
struct pointer {
    void *real_base;
    size_t size;
    size_t offset;
};
then
type *p = &arr[i];
translates to
struct pointer p = { arr, sizeof(arr), i * sizeof(arr[0]) };
And any use of "(p + j)" or "p[j]" can check that (p->offset + j * sizeof(type)) is greater than or equal to 0 and that (p->offset + (j + 1) * sizeof(type)) does not exceed p->size.
"there might be existing C programs that rely on out of bounds access (where they know that the out of bounds access falls into safe memory)"
Those programs are completely non-portable. They could break with your next compiler upgrade, let alone moving to a different compiler (clang?) or a different OS (BSD?).
It could happen. Witness the people who complained about their broken programs when memcpy() was sped up by taking advantage of the standard, or those who were surprised when NULL checks started being discarded after a pointer had already been dereferenced.
Even so, if you did have such programs, and were unlucky enough to rely on them, and were unable to fix them to comply with the C language spec, there's no reason you couldn't still compile them for the existing bare-metal ABI. I'm not proposing to ban the x86-64 ABI. I'm just saying let's create an additional (x86-64-safe?) ABI that we could use to provide a safe execution environment for a subset of our existing code. That subset could range from none of it to all of it, depending on how much you personally valued speed and anti-bloat over safety, how many non-conforming programs you relied upon, and whatever other factors you wanted to take into account.