Subject : Re: Memory protection between compilation units?
From : antispam (at) *nospam* fricas.org (Waldek Hebisch)
Newsgroups : comp.lang.c
Date : 15. Jun 2025, 14:57:59
Organization : To protect and to server
Message-ID : <102mjh5$31ckr$1@paganini.bofh.team>
References : 1 2 3 4 5 6 7
User-Agent : tin/2.6.2-20221225 ("Pittyvaich") (Linux/6.1.0-9-amd64 (x86_64))
Mateusz Viste <mateusz@not.gonna.tell> wrote:
On 13.06.2025 15:56, Michael S wrote:
A significant part of x86 installed base (all Intel Core CPUs starting
from gen 6 up to gen 9 and their Xeon contemporaries) has extension
named Intel MPX that was invented exactly for that purpose. But it didn't
work particularly well. Compiler people never liked it, but despite
that it was supported by several generations of gcc and probably by
clang as well.
This does not really sound like something "readily available", unless you
are suggesting that I migrate to a Linux kernel from 10 years ago, switch
to gcc 5.0 and use outdated hardware.
The proper solution to your problem is to stop using a memory-unsafe
language for complex application programming. It's not that successful
use of unsafe languages for complex application programming is
impossible. Practice has proved many times that it can be done, but
only by a very good team. Your team is not good enough.
Just to clarify: I didn’t post here seeking help with a simple out-of-bounds
issue, nor was I here to vent. I’ve been wrangling C code in complex,
high-performance systems for over a decade - I’m managing just fine. Code
improvement is a continual, non-negotiable process in our line of work, but
fires happen occasionally nonetheless. While fixing the issue, I started
wondering how faults like this could be located faster, assuming they
do slip into production - because despite the testing process, some
faults will inevitably reach customers.
A crash that happens closer to the source of the problem (same compilation
unit) would significantly ease the debugging effort. I figured it was a
topic worth sharing, in the spirit of sparking some constructive
discussions.
You should understand that C array indexing and pointer
operations are defined in a specific way. This has several
advantages, but also a significant cost: checking the validity
of array indexing in C is much harder than in other languages.
Namely, in most languages the implementation knows the size/bounds
of an array and can automatically generate checks on each access.
This has some cost, but modern experience is that this cost
is quite acceptable (on average about a 5-10% increase in runtime
and a similar increase in size). In C the compiler sometimes knows
the size of the array, but in general it does not. So in C you
either use half measures, like hoping that the paging hardware
will catch an out-of-bounds access (possibly arranging the data
layout to increase the chance of a fault), or very expensive
approaches, which essentially bundle bounds with the pointer (Intel
tried to add hardware support for this, but even with
hardware support it is still much more expensive than checking
in some other languages).
IIUC in your example the array was global, so the compiler knew its
bounds and in principle could generate bounds checks. But
I am not aware of a C compiler which actually generates such
checks. AFAIK the gcc sanitize options do a somewhat different
thing. Tiny C has an option to generate bounds checks, but
it is not clear to me in which cases it is effective (and you
probably would not use Tiny C for performance-critical code).
Note that in C++ when you use C arrays, you have the same
situation as in C. But you can instead use array classes which
check accesses.
--
Waldek Hebisch