Sujet : Re: Computer architects leaving Intel...
De : cr88192 (at) *nospam* gmail.com (BGB)
Groupes : comp.arch
Date : 08. Sep 2024, 22:34:24
Organisation : A noiseless patient Spider
Message-ID : <vbl596$22d2t$1@dont-email.me>
References : 1 2 3 4 5
User-Agent : Mozilla Thunderbird
On 9/8/2024 10:36 AM, Anton Ertl wrote:
> Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
>> anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>>>
>>> There was still no easy way to determine whether your software
>>> that calls memcpy() actually works as expected on all hardware,
>>
>> There may not be a way to tell if memcpy()-calling code will work
>> on platforms one doesn't have, but there is a relatively simple
>> and portable way to tell if some memcpy() call crosses over into
>> the realm of undefined behavior.
>
> 1) At first I thought that yes, one could just check whether there is
> an overlap of the memory areas. But then I remembered that you cannot
> write such a check in standard C without (in the general case)
> exercising undefined behaviour; and then the compiler could eliminate
> the check or do something else that's unexpected. Do you have such a
> check in mind that does not exercise undefined behaviour in the
> general case?
>
> 2) Even if there is such a check, you have to be aware that there is a
> potential problem with memcpy(). In that case the way to go is to
> just use memmove(). But that does not help you with the next "clever"
> idea that some compiler or library maintainer has.
In general, one can't implement the C library strictly within the confines of standard C (without invoking at least some UB). Nor, for that matter, can one write much useful or non-trivial software.
One in effect needs an extended set of rules to be able to get much of anything useful done.
So, we usually accept de-facto things like the ability to compare unrelated pointers, ability to cast and dereference pointers with wanton disregard for declared types, etc. Because, at some level, these things are necessary.
Meanwhile:
My "get RISC-V + glibc builds working in TestKern" is being hindered mostly by things outside of my project itself...
Like, while trying to get things working, I made an unfortunate move in the Ubuntu-22 WSL:
I ran "sudo apt-get upgrade"
Which broke the ability of the mainline GCC and G++ to "actually compile stuff" (and the ability of "apt-get" to do anything without breaking), making things severely limited...
I then installed "Ubuntu 24" in WSL and managed to "speed run" the same breakage, which made the lesson more obvious: "don't run 'apt-get upgrade'...". So, I had to reset that image (remove and reinstall). Normally, this command is not supposed to break the installation.
Have yet to revert the Ubuntu-22 image, as at least it still has a fully working RISC-V toolchain, even if now the main GCC and apt-get are broken (may need to reset it if I can't get everything working in Ubuntu-24).
Meanwhile, I am also ending up needing to build separate copies of things for Ubuntu-22 and Ubuntu-24 because they are seemingly not fully binary compatible with each other.
And, I can't use anything much older than these, as the newer versions of GCC have apparently decided that older distributions are too old to build GCC on.
Also, a lot of this sort of stuff is sort of a reason why I don't run Linux as a main OS.
...
Also, while working on the "trying to get RV64 GLIBC binaries working in TestKern" thing, I realized that some parts of the design of GLIBC are extremely brittle.
There are some large structures shared between "ld-linux.so" and "libc.so" and similar, whose contents and layouts are tied directly to GLIBC's configuration parameters and may change from one version to the next. Rather than using any of the Microsoft-style strategies to add resilience in the face of version mismatch, they do the extreme opposite.
It is also almost getting more tempting to jump this experiment over to musl-libc or something... (though, the point of this was to be able to use GCC to build binaries that would run on TestKern without needing to pass a bunch of special command-line options or needing to use "--specs=..."). Well, or try to convince "configure" to use BGBCC as a cross compiler (and the likely uphill battle of trying to get userland stuff ported, which should be in theory easier if one is using GCC and GLIBC).
As for previously built binaries:
TestKern is seemingly able to load the ELF images and shared objects, but stuff still crashes fairly early on. Not yet figured out what is going on, but it does appear as if the start-up code expects some things to be passed on in the stack.
It also loads some pointers in a PC-relative way and appears to be getting bad data from these pointers (potential issue with the ELF loading or maybe something different).
Though, I did previously have to fix a bug in my WAD4 VFS (I was using a WAD4 image to hold the shared objects): it was not reading data correctly. I had added checksum checks to the underlying LZ decoding, but these caught nothing, since the error was in the VFS glue that implements the read and write operations.
...