On 04/09/2024 14:53, jseigh wrote:
On 9/4/24 06:57, David Brown wrote:
On 04/09/2024 09:15, Terje Mathisen wrote:
David Brown wrote:
Maybe?
Rust will _always_ check for such overflows in debug builds, then when
you've determined that they don't occur, the release build falls back to
standard CPU behavior, i.e. wrapping around with no panics.
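A minimal sketch of that split, assuming the default cargo profiles
(overflow checks on in debug builds, off in release builds):

    fn main() {
        // Take the value from the command line so the compiler cannot
        // fold the overflow away at compile time.
        let x: i32 = std::env::args().nth(1)
            .and_then(|s| s.parse().ok())
            .unwrap_or(i32::MAX);
        // Debug build: panics with "attempt to add with overflow".
        // Default release build: wraps around to i32::MIN, no panic.
        println!("{}", x + 1);
    }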
But if you've determined that they do not occur (during debugging),
then your code never makes use of the results of an overflow - thus
why is it defined behaviour? It makes no sense. The only time when
you would actually see wrapping in final code is if you hadn't tested
it properly, and then you can be pretty confident that the whole thing
will end in tears when signs change unexpectedly. It would be much
more sensible to leave signed overflow undefined, and let the compiler
optimise on that basis.
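To make the optimisation argument concrete: a C compiler is entitled to
fold a signed comparison such as x + 1 > x to a constant true, precisely
because overflow is undefined. With wrapping defined, as in a Rust
release build, no such folding is possible; a minimal sketch (using
wrapping_add only so the function also stays panic-free in a debug
build):

    // With wrapping semantics this must return false when x == i32::MAX,
    // so the compiler cannot reduce it to a constant; under C's
    // signed-overflow UB the equivalent test can be assumed always true.
    pub fn always_bigger(x: i32) -> bool {
        x.wrapping_add(1) > x
    }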
You absolutely do want defined behavior on overflow.
No, you absolutely do /not/ want that - for the vast majority of use-cases.
There are times when you want wrapping behaviour, yes. More generally,
you want modulo arithmetic rather than a model of mathematical integer
arithmetic. But those cases are rare, and in C they are easily handled
using unsigned integers.
You can't use signed integers for them in C (except of course if you use
explicit modulo and none of your intermediate results overflow int),
because signed integer overflow is UB. You can't use signed integers
for the purpose in Rust either, even though it is defined behaviour in
release mode, because it is a run-time error in debug mode. (That's why
Rust's attitude here is completely daft to me.)
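When wraparound genuinely is what you want in Rust, you have to ask for
it explicitly; a minimal sketch using the standard wrapping operations,
which behave the same in debug and release builds:

    use std::num::Wrapping;

    fn main() {
        // The explicit wrapping operations are defined to wrap in both
        // debug and release builds, for signed and unsigned types alike.
        let x: i32 = i32::MAX;
        assert_eq!(x.wrapping_add(1), i32::MIN);

        // std::num::Wrapping gives the same semantics with ordinary
        // operator syntax.
        let y = Wrapping(i32::MAX) + Wrapping(1);
        assert_eq!(y.0, i32::MIN);
        println!("ok");
    }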
There are
algorithms that depend on that. Bakery algorithms for instance.
Unless you think a real life bakery with service tickets
numbering from 1 to 50 either never gets more than 50 customers
in a day or closes after their 50th customer. :)
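Making the ticket numbers survive that wraparound is exactly where
defined modular arithmetic earns its keep; a minimal sketch in Rust,
with a hypothetical is_before helper (assuming the number of
outstanding tickets stays below half the counter's range):

    // Hypothetical helper: decides whether ticket `a` was issued before
    // ticket `b`, assuming fewer than half the range are outstanding.
    // wrapping_sub is defined to wrap in every build mode, so the
    // comparison keeps working across the 0xFFFF -> 0x0000 rollover.
    fn is_before(a: u16, b: u16) -> bool {
        (a.wrapping_sub(b) as i16) < 0
    }

    fn main() {
        assert!(is_before(0xFFFE, 0x0001));   // still "before" across the wrap
        assert!(!is_before(0x0001, 0xFFFE));
        println!("ok");
    }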
Joe Seigh