Subject: Re: Code guidelines
From: david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups: comp.lang.c
Date: 04 Sep 2024, 11:47:30
Organization: A noiseless patient Spider
Message-ID: <vb9ds3$3q992$1@dont-email.me>
References: 1 2 3 4 5 6 7 8
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 04/09/2024 09:22, Keith Thompson wrote:
> David Brown <david.brown@hesbynett.no> writes:
> [...]
>> Before you put any check in code, think about the circumstances in
>> which it could fail. If there are no circumstances, it is redundant
>> and counter-productive.
> [...]
> One thing to consider is that if a check can never actually fail, the
> recovery code *cannot be tested* (and you can't get 100% code coverage).
>
>     p = NULL;            // assume p is not volatile
>     if (p != NULL) {
>         do_something();  // can never execute this
>     }
>
> Of course, not all such cases are so easily detectable.
I wrote "in almost all cases, it is never tested" - but as you say, in some cases it /cannot/ ever be tested because the test conditions can never be triggered.
I think, however, that "could be tested, but is not tested" is worse. I've seen cases where extra checks were put in "just to make sure", were never tested, and caused more trouble than they prevented.
One case I remember was some extra timeout checks in some communications code. The new checks were unnecessary - a higher-level timeout mechanism was already in place, tested, and working. The new checks were never tested, and never triggered during normal operation. But when a 32-bit millisecond counter rolled over, the check triggered wrongly - and the handling code was buggy and hung. Thus the unnecessary and untested extra check resulted in systems hanging every 49 days.