Subject: Re: Computer architects leaving Intel...
From: anton@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.arch
Date: 30 Aug 2024, 15:12:04
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
Message-ID: <2024Aug30.161204@mips.complang.tuwien.ac.at>
References: 1 2
User-Agent: xrn 10.11

jgd@cix.co.uk (John Dallman) writes:
>In article <vaqgtl$3526$1@dont-email.me>, cr88192@gmail.com (BGB) wrote:
>>The alternative is that one expects that all the software be
>>rebuilt for the specific configuration being used,
>
>ISVs /really/ don't like that. It multiplies their testing and QA and
>those are expensive. It rarely shows up problems, but convincing
>themselves to do without it is hard for them.

You don't actually need different extensions to get such problems; it
is enough to have library providers like the glibc people, who use
different implementations with different behaviours (in ways that
have resulted in breakage) depending on the processor, not on
architectural extensions.

In particular, apparently around 2010 or shortly earlier, glibc
started to implement memcpy() with a backwards stride on some (not
all) AMD64 hardware, and this broke some software that called
memcpy() with overlapping source and destination (undefined
behaviour, but harmless with a forward stride). The cool feature is
that you could test the software on your hardware and it would behave
as expected, while on other, 100% architecture-compatible hardware it
would misbehave. And if a user on such a system reported the problem,
you would be unable to reproduce it. I am not sure whether static
linking protects against this. Containerization does not.
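
For concreteness, here is the kind of code that broke; this is my own
reconstruction, not taken from the actual bug reports. It is an
in-place shift that passes overlapping buffers to memcpy(). With a
forward-striding memcpy() it happens to produce the intended result;
with a backwards-striding one it does not. Either way, the C standard
leaves the behaviour undefined.

#include <stdio.h>
#include <string.h>

int main(void)
{
  char buf[] = "abcdefgh";
  /* Shift "bcdefgh" plus the terminating '\0' one byte to the
     left.  Source and destination overlap, so this call has
     undefined behaviour. */
  memcpy(buf, buf + 1, 8);
  /* A forward stride prints "bcdefgh".  A backward stride copies
     the '\0' at buf[8] into buf[7] first, then propagates the
     already-overwritten bytes downwards, printing an empty line. */
  printf("%s\n", buf);
  return 0;
}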

Anyway, Ulrich Drepper (the glibc maintainer at the time) made the
usual C undefined-behaviour argument and blamed the applications,
which resulted in a huge flame war. The resolution was that glibc was
modified to behave as expected for binaries linked against older
glibc versions, while binaries linked against newer versions still
get the misbehaviour. The idea was apparently that this avoids
breaking existing binaries, and that new binaries would be built from
source code that avoids the problem (probably by using memmove()
instead of memcpy()).

There is still no easy way to determine whether your software that
calls memcpy() actually works as expected on all hardware, but there
is a way to avoid this particular problem if you are aware of it:

#define memcpy(dest,src,n) memmove(dest,src,n)
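
One caveat from me, not from the original post: the define has to
come after <string.h> (or whatever else declares memcpy()), or the
function-like macro will mangle the prototype; and some libc
configurations already define memcpy() as a macro themselves, so an
#undef first is the robust variant. A minimal sketch:

/* Redirect every memcpy() call in this translation unit to
   memmove(), which is specified for overlapping operands.  This
   does not help code that calls memcpy() through a function
   pointer. */
#include <string.h>
#undef memcpy
#define memcpy(dest, src, n) memmove((dest), (src), (n))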

>>or recompiled from source or some other distribution format on
>>the local machine which it is to be run (with binaries distributed
>>as some form of "portable IR").
>
>ISVs get sceptical about that, because it's generating code they have not
>tested.

Yes, that thinking seems to be a result of C/C++ compiler
shenanigans. People advocating "optimization" based on the
assumption that undefined behaviour does not happen have suggested
that I keep compiler versions around that compile my source code the
way I expect. Of course that does not help, because I distribute
(GNU) software as source code. And, as the glibc issue discussed
above shows, even testing code with a specific compiler and library
version does not necessarily help.
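
To make the shenanigans concrete (my example, not one from this
thread): gcc and clang assume that signed integer overflow does not
happen, so depending on compiler version and optimization level the
following overflow check may be folded to a constant 0, and the guard
it was meant to implement silently disappears.

#include <stdio.h>

/* Intended as an overflow check.  Since signed overflow is
   undefined behaviour, an optimizing compiler may assume that
   x + 1 < x can never be true and compile this to "return 0". */
int wraps(int x)
{
  return x + 1 < x;
}

int main(void)
{
  printf("%d\n", wraps(2147483647));
  return 0;
}

A binary built with one compiler version may print 1 and one built
with another may print 0; keeping an old compiler around only fixes
the binaries I build myself, not the ones users build from the
source code.
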
- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined
behavior.'
  Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>