On 10/11/2024 07:57, Waldek Hebisch wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 05/11/2024 20:39, Waldek Hebisch wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 05/11/2024 13:42, Waldek Hebisch wrote:
Bart <bc@freeuk.com> wrote:
Type checks can be extremely helpful, and strong typing greatly reduces
the errors in released code by catching them early (at compile time).
And temporary run-time checks are also helpful during development or
debugging.
But extra run-time checks are costly (and I don't mean just in run-time
performance, which is only an issue in a minority of situations). They
mean more code - which means more scope for errors, and more code that
must be checked and maintained. Usually this code can't be tested well
in final products - precisely because it is there to handle a situation
that never occurs.
A function should accept all input values - once you have made clear
what the acceptable input values can be. A "default" case is just a
short-cut for conveniently handling a wide range of valid input values -
it is never a tool for handling /invalid/ input values.
Well, a default case can signal an error, which is frequently the right
way to handle invalid input values.
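A minimal C sketch of that use of default (the enum and function names
are hypothetical, just for illustration): the default case does not
pretend an invalid value is valid - it signals the error and stops.

```c
#include <stdio.h>
#include <stdlib.h>

enum color { RED, GREEN, BLUE };

/* Hypothetical example: map an enum to its name.  The default
 * case does not invent behaviour for an out-of-range value -
 * it signals the error and aborts, instead of quietly
 * returning something plausible-looking. */
const char *color_name(enum color c)
{
    switch (c) {
    case RED:   return "red";
    case GREEN: return "green";
    case BLUE:  return "blue";
    default:
        fprintf(stderr, "color_name: invalid color %d\n", (int)c);
        abort();
    }
}
```

Whether abort() is the right reaction is a separate design question -
the point is only that the default branch reports, rather than masks,
the caller's bug.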
Will that somehow fix the bug in the code that calls the function?
It can be a useful debugging and testing aid, certainly, but it does not
make the code "correct" or "safe" in any sense.
There is a concept of "partial correctness": if the code finishes, it
returns a correct value. A variation of this is: if the code finishes
without signaling an error, it returns correct values. Such a condition
may be much easier to verify than "full correctness" and in many cases
is almost as useful. In particular, mathematicians are _very_ unhappy
when a program returns incorrect results. But they are used to
programs which cannot deliver results, either because of lack of
resources or because a needed case was not implemented.
When dealing with math formulas there are frequently various
restrictions on parameters - for example, we can only divide by a
nonzero quantity. By signaling an error when the restrictions are not
satisfied, we ensure that successful completion means the
restrictions were satisfied. Of course that alone does not
mean the result is correct, but correctness of the "general"
case is usually _much_ easier to ensure. In other words,
failing restrictions are a major source of errors, and signaling
such errors effectively eliminates it.
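The division restriction above can be sketched in C (function name
hypothetical): the only path that produces a quotient is the one on
which the restriction was checked, so a successful return certifies
that the restriction held.

```c
#include <stdbool.h>

/* Hypothetical example: divide a by b, but only when the
 * mathematical restriction b != 0 holds.  On success the
 * quotient is stored through *result and true is returned;
 * when the restriction fails, false is returned and *result
 * is left untouched.  A "true" return thus certifies that
 * the restriction was satisfied - the partial-correctness
 * guarantee described above. */
bool checked_div(double a, double b, double *result)
{
    if (b == 0.0)
        return false;   /* restriction violated: signal the error */
    *result = a / b;
    return true;
}
```

A caller that ignores the status gets no garbage value; it simply never
obtains a quotient for an invalid divisor.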
Yes, out-of-band signalling of some kind is a useful way to indicate a
problem, and it allows parameter checking without losing the useful
results of a function. This is the principle behind exceptions in many
languages - functions either return normally with correct results,
or you have a clearly abnormal situation.
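C has no exceptions, but the same separation of normal results from
abnormal situations appears as out-of-band status channels. The
standard library's strtol already works this way: range errors are
reported through errno, syntax errors through the end pointer. A
sketch of a wrapper (the wrapper name is hypothetical) that folds both
into one out-of-band status:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical wrapper around strtol: parse a whole decimal
 * integer, reporting failure out-of-band through the return
 * value instead of through an in-band sentinel.  strtol sets
 * errno to ERANGE on overflow and leaves end == text when no
 * digits were consumed. */
bool parse_long(const char *text, long *out)
{
    char *end;
    errno = 0;
    long v = strtol(text, &end, 10);
    if (errno == ERANGE || end == text || *end != '\0')
        return false;   /* abnormal situation, clearly separated */
    *out = v;
    return true;
}
```

The caller cannot confuse a failure with a legitimately parsed value
such as 0 or LONG_MAX, because success and failure travel on different
channels.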
In a world of perfect programmers, they would check restrictions
before calling any function that depends on them, or prove that the
restrictions on a function's arguments imply the correctness of the
calls made by the function. But the world is imperfect, and in the
real world extra runtime checks are quite useful.
Runtime checks in a function can be useful if you know the calling code
might not be perfect and the function is going to take responsibility
for identifying that situation. Programmers are often writing both
the caller and the callee code, and will put temporary debugging and
test checks wherever they are most convenient.
But I think being too enthusiastic about putting checks in the wrong
place - the callee function - can hide the real problems, or make the
callee code writer less careful about getting their part of the code
correct.