Subject : Re: "RESET"
From : david.brown (at) *nospam* hesbynett.no (David Brown)
Groups : sci.electronics.design
Date : 30. May 2025, 16:53:59
Organisation : A noiseless patient Spider
Message-ID : <101ckan$i2b3$3@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 28/05/2025 18:07, Joe Gwinn wrote:
On Wed, 28 May 2025 14:41:56 +0200, David Brown
<david.brown@hesbynett.no> wrote:
On 28/05/2025 01:13, Joe Gwinn wrote:
On Tue, 27 May 2025 14:13:02 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
>
On 5/25/2025 12:33 PM, Joe Gwinn wrote:
Exactly. I recall a customer wanting us to verify all possible paths
through a bit of air traffic control radar software, about 100,000
lines of plain C. Roughly one in five executable lines was an IF
statement, which is 20,000 IF statements. So there are 2^20000 ≈
10^6020 such paths.
>
And probably 99.9% of them are superfluous.
>
[snip]
>
The problem is that you have no way to know which cases are
irrelevant. And practical hardware will have many things able to
retain state.
>
>
A concept you are looking for here is "cyclomatic complexity" :
>
<https://en.wikipedia.org/wiki/Cyclomatic_complexity>
>
Conditionals are not independent (in most cases) - thus the number of
paths through code is not simply two to the power of the number of if
statements.
True, but it's the bounding case.
(McCabe cyclomatic complexity measures are not the only way to look at
this, and like anything else, the measure has its advantages and
disadvantages, and is not suitable for everything. But it's a
reasonable place to start.)
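To make that concrete, here is a small made-up C function (names
invented for illustration). It has three IF statements, so the naive
bound is 2^3 = 8 paths, but the conditions are dependent:

    /* Hypothetical sensor clamp - three decisions, far fewer
       feasible paths than the bounding case of 2^3 = 8. */
    int clamp_reading(int raw)
    {
        int v = raw;
        if (v < 0)        /* decision 1 */
            v = 0;
        if (v > 4095)     /* decision 2 */
            v = 4095;
        if (v < 0)        /* decision 3: can never fire after
                             decision 1 - an infeasible path */
            return -1;
        return v;
    }

McCabe's measure counts decisions + 1 = 4 here, and only three of the
eight "bounding case" paths (below range, above range, in range) are
actually feasible.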
>
The way you handle complexity in software is exactly the same as any
other complexity - you break things down into manageable parts. For
software, that can be libraries, modules, files, classes, functions, and
blocks. You specify things from the top down, and test them from the
bottom up. How much testing you do, and how you do it, is going to
depend on the application - an air traffic control system will need more
thorough testing than a mobile phone game.
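As a minimal sketch of the bottom-up part, reusing the hypothetical
clamp function from above:

    #include <assert.h>

    extern int clamp_reading(int raw);  /* unit under test */

    int main(void)
    {
        assert(clamp_reading(-5)   == 0);     /* below range */
        assert(clamp_reading(9999) == 4095);  /* above range */
        assert(clamp_reading(100)  == 100);   /* in range */
        return 0;
    }

Three cases happen to cover all three feasible paths - which is
exactly the sort of target the complexity figure gives you.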
>
When you are looking at a function, the cyclomatic complexity can be
calculated by tools. It will then give you a good idea of how much
testing you need for the function. Too high a complexity, and you will
never be able to test the function to get a solid idea of its
correctness, as you would need too many test cases for practicality.
(You may still be able to use other analysis methods.) The complexity
can also give a good indication of how easy it is for humans to
understand the function and judge its correctness. (Again, no one
measure gives a complete picture.)
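(For C, a tool such as pmccabe does the counting for you - assuming it
is installed, something like

    pmccabe radar.c

prints a complexity figure for each function in the file, and anything
scoring high is a candidate for splitting or for extra test effort.
The file name is just for illustration.)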
I recall those days. Some managers decreed that no module could have
a complexity (computed in various ways) exceeding some arbitrary
limit. The problem was that real-world problems are vastly more
complex, so the decree atomized the inherent complexity into a
bazillion tiny modules, hiding the structure and imposing large added
processing overheads from traversing all those inter-module
interfaces.
The problem with generalisations and rules is that they are sometimes inappropriate. /Most/ functions, modules, pages of schematic diagram, or whatever, should have a low complexity, however you compute it. But there are always exceptions, where the code is clearer despite being "complex" according to the metrics you use.
Other tools that can be useful in testing are code coverage tools - you
can check that your test setups check all paths through the code.
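With gcc, for instance, the usual recipe is roughly this (file names
invented):

    gcc --coverage -O0 mymodule.c test_mymodule.c -o test_mymodule
    ./test_mymodule
    gcov mymodule.c

which produces mymodule.c.gcov with an execution count against each
line; lines that were never executed are flagged with "#####", showing
you which paths your tests missed.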
We still do this, but the limitation is that all such tools yield far
more false alarms than valid hits, so all hits must be manually
verified.
A false alarm for a code coverage report would mean code that is not reported as hit, but actually /is/ hit when the code is run. How does that come about?
But it is certainly true that any kind of automatic testing or verification is only going to get you so far - false hits or missed cases are inevitable.
But remember that testing cannot prove the absence of bugs - only their
presence. And it only works on the assumption that the hardware is
correct - even when the software is perfect, you might still need that
reset button or a watchdog!
Absolutely. This was true in the days of uniprocessors with one
megahertz clocks and kilobyte memories. Now it's hundreds of
processors with multi-gigahertz clocks and terabyte physical memories.
And it's still true - modern systems give more scope for hardware issues than simpler systems (as well as more scope for subtle software bugs). A cosmic ray in the wrong place can render all your software verification void.
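Hence the watchdog. The usual pattern is no more than this sketch -
the register name and magic value are stand-ins, since every MCU does
it differently:

    /* Hypothetical watchdog service - substitute whatever your
       part's datasheet actually specifies. */
    #define WDT_KICK   (*(volatile unsigned int *)0x40001000u)
    #define WDT_MAGIC  0xA5u

    extern void do_one_unit_of_work(void);

    void main_loop(void)
    {
        for (;;) {
            do_one_unit_of_work();   /* must finish within the timeout */
            WDT_KICK = WDT_MAGIC;    /* pet the dog, or the hardware
                                        resets us */
        }
    }

If the software wedges - or a flipped bit sends it somewhere it was
never verified to go - the kick stops arriving and the hardware pulls
the reset for you.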