Re: "RESET"

Subject : Re: "RESET"
From : david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups : sci.electronics.design
Date : 04. Jun 2025, 11:58:21
Organisation : A noiseless patient Spider
Message-ID : <101p8sd$phe5$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 30/05/2025 19:39, Joe Gwinn wrote:
> On Fri, 30 May 2025 17:53:59 +0200, David Brown
> <david.brown@hesbynett.no> wrote:
>
>> On 28/05/2025 18:07, Joe Gwinn wrote:
>>
>>> I recall those days.  Some managers decreed that no module could have
>>> a complexity (computed in various ways) exceeding some arbitrary
>>> limit.  The problem was that real-world problems are vastly more
>>> complex, causing atomization of the inherent complexity into a
>>> bazillion tiny modules, hiding the structure and imposing large added
>>> processing overheads from traversing all those inter-module
>>> interfaces.
>>
>> The problem with any generalisation and rules is that they are sometimes
>> inappropriate.  /Most/ functions, modules, pages of schematic diagram,
>> or whatever, should have a low complexity however you compute it.  But
>> there are always some that are exceptions, where the code is clearer
>> despite being "complex" according to the metrics you use.
>
> No, all of the complexity metrics were blown away by practical
> software running on practical hardware.  Very few modules were that
> simple, because too many too small modules carry large inter-module
> interface overheads.
 
That changes nothing of the principles.
You aim for low and controlled complexity, at all levels, so that you can realistically test, verify, and check the code and systems at the different levels.  (Checking can be automatic, manual, human code reviews, code coverage tools, etc. - usually in combination.)  Any part with particularly high complexity is going to take more specialised testing and checking - that costs more time and money, and is higher risk.  Sometimes it is still the right choice, because alternatives are worse (such as the "too many small modules" issues you mention) or because there are clear and reliable ways to test due to particular patterns (as you might get in a very large "dispatch" function).
You don't just throw your hands in the air and say it's better with spaghetti in a module than spaghetti between modules, and therefore you can ignore complexity!  I don't believe that is what you are actually doing, but it sounds a little like that.
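
To make the "dispatch function" remark concrete, here is a sketch in C (the command names and handlers are invented purely for illustration) of the sort of code that scores badly on a cyclomatic-complexity metric yet is about as easy to verify as code gets, because the structure is completely regular and can be tested exhaustively:

/* Hypothetical command dispatcher - only the shape of the code matters. */
enum cmd { CMD_RESET, CMD_START, CMD_STOP, CMD_STATUS, CMD_MAX };

static int do_reset(void)  { return 0; }
static int do_start(void)  { return 1; }
static int do_stop(void)   { return 2; }
static int do_status(void) { return 3; }

int dispatch(enum cmd c)
{
    /* One branch per command, so the complexity count is high, but
       every path is one line long and independent of the others. */
    switch (c) {
    case CMD_RESET:  return do_reset();
    case CMD_START:  return do_start();
    case CMD_STOP:   return do_stop();
    case CMD_STATUS: return do_status();
    default:         return -1;          /* unknown command */
    }
}

/* Exhaustive test: every case, plus the default. */
int test_dispatch(void)
{
    static const int expected[] = { 0, 1, 2, 3, -1 };
    for (int c = 0; c <= CMD_MAX; c++)
        if (dispatch((enum cmd)c) != expected[c])
            return 0;                    /* fail */
    return 1;                            /* pass */
}

The metric sees one path per case; the test is just "call it once per command and compare against a table".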

 
>>>> Other tools that can be useful in testing are code coverage tools - you
>>>> can check that your test setups check all paths through the code.
>>>
>>> We still do this, but the limitation is that all such tools yield far
>>> more false alarms than valid hits, so all hits must be manually
>>> verified.
>>
>> A false alarm for a code coverage report would mean code that is not
>> reported as hit, but actually /is/ hit when the code is run.  How does
>> that come about?
>
> The code coverage vendors hold the details close, so we usually don't
> know how hits are declared, and probably never will.
 
Do the gcc and gcov developers hold their details secret?  I'm sure there are many good reasons for picking different code coverage tools, and I'm not suggesting that gcov is in any way the "best" (for many reasons, code coverage tools would be of very limited use for most of my work).  And there are all sorts of different coverage metrics.  But it would surprise me if major vendors keep information about the prime purpose of the tool a secret.  Who would buy a coverage tool that doesn't tell you what it measures?
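
For what it is worth, the basic gcc/gcov flow is short enough to show here - a minimal sketch assuming a GNU toolchain, with "prog.c" as a placeholder name:

  gcc --coverage -O0 -o prog prog.c   # instrument (same as -fprofile-arcs -ftest-coverage)
  ./prog                              # run the tests; writes prog.gcda next to prog.gcno
  gcov -b prog.c                      # -b adds branch statistics
  less prog.c.gcov                    # per-line execution counts; ##### marks lines never run

So at least for that tool chain, exactly what gets counted - and how - is open for inspection.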

> The one that I did manage to obtain the details for turned out to be
> looking for certain combinations of certain words and arrangements.  It
> had zero understanding of what the code did, never mind why.
 
I am not sure what kind of tool you are referring to here.  Code coverage tools track metrics about the functions, blocks and code lines that are run.  Different tools (or options) track different metrics - counts, times, or just "at least once".  They might track things at different levels.  Some are intrusive and accurate, others are non-intrusive but statistically based.  If you want to use code coverage tools in combination with branch testing, you just want to know that during your test suite runs, every branch is tested at least once in each direction.
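
A contrived C sketch of the "each direction" point: a single call with x above the limit executes every line of the function below, so line coverage is satisfied, but branch coverage also requires a call where the condition is false.

/* Contrived example for branch coverage. */
int clamp(int x, int limit)
{
    if (x > limit)      /* must be seen both taken and not taken */
        x = limit;
    return x;
}

/* Test exercising the branch in both directions. */
int test_clamp(void)
{
    return clamp(10, 5) == 5    /* condition true  */
        && clamp(3, 5) == 3;    /* condition false */
}

A line-coverage report would be happy with the first call alone; a branch report (e.g. gcov -b, as above) would not.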

> Maybe modern AI will do better, but may be too expensive to make
> business sense.
 
We can pretty much guarantee that commercial vendors will add claims of AI to their tools and charge more for them.  Whether or not they will be better for it is another matter.
I would expect AI to be more useful in the context of static error checkers, simulators, and fuzz testers rather than code coverage at run-time.

 
>> But it is certainly true that any kind of automatic testing or
>> verification is only going to get you so far - false hits or missed
>> cases are inevitable.
>
> Yes, in fact the false hits dominate by a large factor, and the main
> expense in using such tools is the human effort needed to extract
> those few true hits.
 
Just to be clear - are you using non-intrusive statistical code coverage tools (i.e., a background thread, timer, etc., that samples the program counter of running code)?  Or are you using a tool that does instrumentation when compiling?  I'm trying to get an understanding of the kinds of "false hits" you are seeing.
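
To clarify what I mean by the two kinds: instrumenting tools have the compiler insert counters into the generated code (the gcov flow above), while statistical tools periodically sample the program counter of the running, unmodified program and build a histogram from the samples.  A minimal sketch of the sampling idea, assuming Linux/glibc on x86-64 - real profilers are far more careful about symbols, threads and sampling bias:

/* Statistical PC sampling: a profiling timer delivers SIGPROF roughly
   every millisecond of CPU time, and the handler records where the
   program counter was at that moment. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <ucontext.h>

#define NBUCKETS 4096
static volatile unsigned long hits[NBUCKETS];   /* crude PC histogram */

static void on_prof(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)info;
    ucontext_t *uc = ctx;
    uintptr_t pc = (uintptr_t)uc->uc_mcontext.gregs[REG_RIP];  /* glibc, x86-64 */
    hits[(pc >> 4) % NBUCKETS]++;               /* 16-byte address buckets */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_prof;
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGPROF, &sa, NULL);

    struct itimerval it;
    it.it_interval.tv_sec = 0;
    it.it_interval.tv_usec = 1000;              /* sample ~1000x per CPU second */
    it.it_value = it.it_interval;
    setitimer(ITIMER_PROF, &it, NULL);

    volatile double x = 0.0;                    /* something to sample */
    for (long i = 0; i < 200000000L; i++)
        x += i * 1e-9;

    unsigned long total = 0;
    for (int i = 0; i < NBUCKETS; i++)
        total += hits[i];
    printf("collected %lu PC samples\n", total);
    return 0;
}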

 
>>>> But remember that testing cannot prove the absence of bugs - only their
>>>> presence.  And it only works on the assumption that the hardware is
>>>> correct - even when the software is perfect, you might still need that
>>>> reset button or a watchdog!
>>>
>>> Absolutely.  This was true in the days of uniprocessors with one
>>> megahertz clocks and kilobyte memories.  Now it's hundreds of
>>> processors with multi-gigahertz clocks and terabyte physical memories.
>>
>> And it's still true - modern systems give more scope for hardware issues
>> than simpler systems (as well as more scope for subtle software bugs).
>> A cosmic ray in the wrong place can render all your software
>> verification void.
>
> I must say that there was much worry about cosmic ray hits back in the
> day, but they never turned out to matter in practice, except in space
> systems.
I guess there are many factors for that.  If something weird happens, and you have no explanation and it never happens again, then you can easily say it was probably a cosmic ray - without any direct evidence. It is also the case that many systems or subsystems are tolerant of an occasional single-event upset - be it from cosmic rays or anything else.   If you have ECC memory, or other kinds of redundancy or error checking, rare errors there are not an issue.  So many types of memory, buses, and communication protocols are effectively immune to such things.  However, critical parts of the system will still be vulnerable to hardware glitches.  It is not without justification that safety-critical electronics often has two cores running in lockstep or other types of redundancy.
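
As one small illustration of "other types of redundancy" at the software level - a common trick for protecting a critical variable against a single upset is to store it in triplicate and majority-vote on every read.  A minimal sketch (it obviously does nothing for upsets inside the CPU itself, which is where lockstep cores come in):

#include <stdint.h>

/* A critical value stored three times; a single corrupted copy is
   out-voted on read and repaired. */
typedef struct { uint32_t a, b, c; } tmr_u32;

static void tmr_write(tmr_u32 *t, uint32_t v)
{
    t->a = v; t->b = v; t->c = v;
}

static uint32_t tmr_read(tmr_u32 *t)
{
    if (t->a == t->b || t->a == t->c) {
        t->b = t->a; t->c = t->a;   /* scrub the odd one out */
        return t->a;
    }
    if (t->b == t->c) {
        t->a = t->b;                /* 'a' was the corrupted copy */
        return t->b;
    }
    return t->a;                    /* all three differ - a real system would trap here */
}

ECC memory, CRC-protected links and lockstep cores are the same idea applied at different layers.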

> The dominant source of errors turned out to be electrical cross-talk
> and interference in the backplanes, and meta-stability in interfaces
> between logic clock domains in the larger hardware system.
 
Sure.  Cosmic rays were only an example (pulled out of thin air :-) ).  Glitches on power lines, unlucky coincidences on bit patterns, production flaws or ESD damage eroding electrical tolerances - there are lots of possibilities.  I'm not trying to suggest relative likelihoods here, as that will be highly variable.

> I vaguely recall doing an analysis on this issue, some decades ago.
 
I recall something of the opposite - a long time ago, we had to add a variety of "safety" features to a product to fulfil a customer's safety / reliability checklist, without regard to how realistic the failure scenarios were and without spending time and money on analysis.  The result was, IMHO, lower reliability because it was more likely for the extra monitoring and checking hardware and software to fail than for the original functional stuff to fail.  Many of these extra checks were in themselves impossible to test.
