Subject: Re: "RESET"
From: david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups: sci.electronics.design
Date: 28 May 2025, 13:41:56
Organization: A noiseless patient Spider
Message-ID: <10170ak$38945$1@dont-email.me>
References: 1 2 3 4 5
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 28/05/2025 01:13, Joe Gwinn wrote:
> On Tue, 27 May 2025 14:13:02 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>> On 5/25/2025 12:33 PM, Joe Gwinn wrote:
>>> Exactly. I recall a customer wanting us to verify all possible paths
>>> through a bit of air traffic control radar software, about 100,000
>>> lines of plain C. Roughly one in five executable lines was an IF
>>> statement, which is 20,000 IF statements. So there are 2^20000 =
>>> 10^6020 such paths.
>>
>> And probably 99.9% of them are superfluous.
>>
>> [snip]
> The problem is that you have no way to know which cases are
> irrelevant. And practical hardware will have many things able to
> retain state.
A concept you are looking for here is "cyclomatic complexity":
<https://en.wikipedia.org/wiki/Cyclomatic_complexity>
Conditionals are not independent (in most cases) - thus the number of paths through code is not simply two to the power of the number of if statements.
(McCabe cyclomatic complexity measures are not the only way to look at this, and like anything else, the measure has its advantages and disadvantages, and is not suitable for everything. But it's a reasonable place to start.)
The way you handle complexity in software is exactly the same as any other complexity - you break things down into manageable parts. For software, that can be libraries, modules, files, classes, functions, and blocks. You specify things from the top down, and test them from the bottom up. How much testing you do, and how you do it, is going to depend on the application - an air traffic control system will need more thorough testing than a mobile phone game.
When you are looking at a function, the cyclomatic complexity can be calculated by tools. It will then give you a good idea of how much testing you need for the function. Too high a complexity, and you will never be able to test the function to get a solid idea of its correctness, as you would need too many test cases for practicality. (You may still be able to use other analysis methods.) The complexity can also give a good indication of how easy it is for humans to understand the function and judge its correctness. (Again, no one measure gives a complete picture.)
Other tools that can be useful in testing are code coverage tools - you can check that your test setups check all paths through the code.
But remember that testing cannot prove the absence of bugs - only their presence. And it only works on the assumption that the hardware is correct - even when the software is perfect, you might still need that reset button or a watchdog!