Subject: Re: Computer architects leaving Intel...
From: david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups: comp.arch
Date: 30 Aug 2024, 17:28:08
Organization: A noiseless patient Spider
Message-ID: <vasruo$id3b$1@dont-email.me>
References: 1 2
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 30/08/2024 17:42, John Dallman wrote:
> In article <2024Aug30.161204@mips.complang.tuwien.ac.at>,
> anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:
>
>>> ISVs get sceptical about that, because it's generating code they
>>> have not tested.
>>
>> Yes, that thinking seems to be a result of C/C++ compiler
>> shenanigans. People advocating "optimization" based on the
>> assumption that undefined behaviour does not happen have
>> suggested that I should keep compiler versions around that
>> compile my source code as I expect it.
>
> Plain old compiler bugs, introduced while fixing other ones, are quite
> enough to make me assume that I'll find problems on each change of
> compiler. I have had a manager in a very large software company assure me
> that it was impossible for them to add bugs while making fixes. His
> technical people corrected him immediately, because I'd just laughed.
I always keep old versions of compilers around, and don't change compilers (or libraries) in the middle of a project. Since I work with embedded systems, the toolchains I use have significantly fewer users than, say, x86 target compilers, so there is a higher risk of bugs being missed in beta testing and going unreported for longer. (IME bugs are far more likely in vendor SDKs than in gcc or newlib, but I keep everything archived just in case.) I also like to have reproducible builds - something that many Linux distributions are aiming for these days - which requires archiving the toolchain.
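To make the "same compiler" requirement explicit in the source itself, something along these lines can help (a minimal sketch assuming gcc and C11 - the version numbers are placeholders for whatever release is actually archived, not a recommendation):

    #if defined(__GNUC__)
    /* Refuse to build with anything other than the archived compiler, so a
     * quiet toolchain change cannot slip through unnoticed.  The "10.3" is
     * purely illustrative - substitute the release that is actually archived. */
    _Static_assert(__GNUC__ == 10 && __GNUC_MINOR__ == 3,
                   "build with the archived gcc 10.3 toolchain");

    /* Recording the full version string in the image also tells you later
     * exactly which compiler produced a shipped binary. */
    const char build_compiler[] = __VERSION__;
    #endif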
If you want to write reliable code that can be distributed as source and compiled by any conforming C/C++ compiler, you need to be very sure that you avoid relying on behaviour that is not specified and documented. You need to write correct code. That means if you want to copy some memory with overlapping source and destination arrays, you use "memmove" - the function for that purpose. You don't use "memcpy", since it is specified explicitly as requiring non-overlapping arrays.
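For example (a small self-contained sketch, not anyone's production code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[] = "abcdef";

        /* Shift the first five characters one place to the right.  Source
         * and destination overlap, so memmove is the right function here. */
        memmove(buf + 1, buf, 5);      /* well-defined: buf becomes "aabcde" */

        /* memcpy(buf + 1, buf, 5) would be undefined behaviour - it might
         * appear to work with one compiler or library and fail with another. */

        printf("%s\n", buf);
        return 0;
    }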
If you want to write software that is "correct because it passed its tests", you can only expect it to be reliable when it is run exactly as tested. That means it must be compiled as it was during tests (same compiler, same options, same library), and arguably even run only on the same hardware (if you only test on one particular CPU, OS, etc., you can only be sure it works on that CPU, OS, etc.).
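A classic illustration of this - the result depends entirely on the compiler and options used, which is exactly the point:

    #include <limits.h>
    #include <stdio.h>

    /* Signed integer overflow is undefined behaviour, so a compiler is
     * entitled to assume that x + 1 > x always holds and fold this function
     * to "return 1".  Built without optimisation it will often return 0 for
     * INT_MAX on two's complement hardware - so a test pass at one
     * optimisation level says nothing about another. */
    static int increment_stays_bigger(int x)
    {
        return x + 1 > x;
    }

    int main(void)
    {
        printf("%d\n", increment_stays_bigger(INT_MAX));
        return 0;
    }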
It is, of course, a lot easier to write software that looks roughly correct in the source and passes its tests than software that is rigorously correct.
That's why a lot of pre-compiled commercial software lists particular versions of particular OSes or Linux distributions in its requirements - even though the software would probably work fine on a much wider range.
I see nothing wrong in blaming programmers for using "memcpy" when they should have used "memmove" - it was those programmers that made the error. And there is nothing wrong with toolchain developers wanting to give the most efficient results possible to those who code correctly, rather than punishing accurate programmers for the mistakes of less accurate programmers. But it is also important for toolchain developers to remember that programmers are all fallible humans, and sometimes they could do a better job of minimising the consequences of other people's errors, or at least warning about them - especially for errors that are likely to be common.
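For what it's worth, toolchains do already help a little with this particular mistake: recent gcc versions can flag the obvious cases via the -Wrestrict warning (enabled by -Wall), though only when the overlap is visible at compile time. A sketch of the kind of call that gets caught:

    #include <string.h>

    void shift(char *buf)
    {
        /* With "gcc -O2 -Wall", recent gcc versions warn here that the source
         * and destination of memcpy overlap.  An overlap hidden behind
         * pointers the compiler cannot see through will not be caught, so
         * this is a safety net, not a proof of correctness. */
        memcpy(buf + 1, buf, 5);
    }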