Re: "RESET"

Subject : Re: "RESET"
From : david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups : sci.electronics.design
Date : 04. Jun 2025, 16:53:19
Organisation : A noiseless patient Spider
Message-ID : <101pq5f$t350$2@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 04/06/2025 16:55, Joe Gwinn wrote:
On Wed, 4 Jun 2025 12:58:21 +0200, David Brown
<david.brown@hesbynett.no> wrote:
 
On 30/05/2025 19:39, Joe Gwinn wrote:
On Fri, 30 May 2025 17:53:59 +0200, David Brown
<david.brown@hesbynett.no> wrote:
>
On 28/05/2025 18:07, Joe Gwinn wrote:
>
I recall those days.  Some managers thought that quality would follow
if they decreed that no module could have a complexity (computed in
various ways) exceeding some arbitrary limit.  The problem was that
real-world problems are
vastly more complex, causing atomization of the inherent complexity
into a bazillion tiny modules, hiding the structure and imposing large
added processing overheads from traversing all those inter-module
interfaces.
>
>
The problem with any generalisation or rule is that it is sometimes
inappropriate.  /Most/ functions, modules, pages of schematic diagram,
or whatever, should have a low complexity, however you compute it.  But
there are always some that are exceptions, where the code is clearer
despite being "complex" according to the metrics you use.
>
No, all of the complexity metrics were blown away by practical
software running on practical hardware.  Very few modules were that
simple, because too many too small modules carry large inter-module
interface overheads.
>
>
That changes nothing of the principles.
>
You aim for low and controlled complexity, at all levels, so that you
can realistically test, verify, and check the code and systems at the
different levels.  (Checking can be automatic, manual, human code
reviews, code coverage tools, etc. - usually in combination.)  Any part
with particularly high complexity is going to take more specialised
testing and checking - that costs more time and money, and is higher
risk.  Sometimes it is still the right choice, because alternatives are
worse (such as the "too many small modules" issues you mention) or
because there are clear and reliable ways to test due to particular
patterns (as you might get in a very large "dispatch" function).
 In theory, sure.  In practice, it didn't help enough to make it
worthwhile.
 
OK.

 
You don't just throw your hands in the air and say it's better with
spaghetti in a module than spaghetti between modules, and therefore you
can ignore complexity!  I don't believe that is what you are actually
doing, but it sounds a little like that.
 Peer review of the code works better, because no pattern scanning tool
can tell spaghetti from inherent complexity.
 
That's certainly true in some cases.  It surprises me a little that your experience was so much like that, but of course experiences differ.  My experience (and I freely admit I haven't used complexity analysis tools much) is that most functions can be relatively low complexity - the inherently high complexity stuff is only a small proportion of the code.   In one situation where this was not the case, I asked the programmer to re-structure the whole thing - the code was badly designed from the start and had become an incomprehensible mess.  Peer review did not help, because the peer (me) couldn't figure out what was going on in the code.
However, it is entirely true that some code will be marked as very high complexity by tools and yet easily and simply understood by human reviewers.  If that is happening a lot in a code base, automatic tools (at least the ones you are trying) are not going to be much use.
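To illustrate what I mean, here is a toy example of my own (not from any of the code bases we have been discussing) - a flat dispatch function of the kind I mentioned above.  Every case adds one to the cyclomatic complexity count, so a metric tool will flag it, yet a reviewer can check it almost at a glance:

/* Toy command dispatcher.  Cyclomatic complexity grows by one for every
   case, so complexity tools flag it as "complex", but each branch is a
   one-liner and the structure is obvious to a human reviewer. */
enum cmd { CMD_RESET, CMD_START, CMD_STOP, CMD_STATUS };

static int do_reset(void)  { return 0; }
static int do_start(void)  { return 0; }
static int do_stop(void)   { return 0; }
static int do_status(void) { return 0; }

int dispatch(enum cmd c)
{
    switch (c) {
    case CMD_RESET:  return do_reset();
    case CMD_START:  return do_start();
    case CMD_STOP:   return do_stop();
    case CMD_STATUS: return do_status();
    default:         return -1;   /* unknown command */
    }
}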

And this goes double for operating system kernel code, which violates
essentially all of the coding standards developed for user-level
application code.
 
Different code has different needs and standards, yes.

 
Other tools that can be useful in testing are code coverage tools - you
can check that your test setups exercise all paths through the code.
>
We still do this, but the limitation is that all such tools yield far
more false alarms than valid hits, so all hits must be manually
verified.
>
>
A false alarm for a code coverage report would mean code that is not
reported as hit, but actually /is/ hit when the code is run.  How does
that come about?
>
The code coverage vendors hold the details close, so we usually don't
know how hits are declared, and probably never will.
>
>
Do the gcc and gcov developers hold their details secret?  I'm sure
there are many good reasons for picking different code coverage tools,
and I'm not suggesting that gcov is in any way the "best" (for many
reasons, code coverage tools would be of very limited use for most of my
work).  And there are all sorts of different coverage metrics.  But it
would surprise me if major vendors keep information about the prime
purpose of the tool a secret.  Who would buy a coverage tool that
doesn't tell you what it measures?
 I was dealing with a proprietary code coverage tool that management
was quite enamored with and so was pressuring us to use.  But we had
only a sales brochure to go from, and I point-blank refused to use it
without knowing what it did and how.  This caused a copy of the
requirements document of the scanner to appear.
 
No software tool can fix management problems :-(

I don't think gcov existed then.  We used gcc, so the software folk
would have used it were it both available and mature enough.
 
Fair enough.  I haven't done anything significant with gcov, so I can't say how good it might be.  (It is very difficult to use tools that write data to files when you are working on small microcontrollers with no filesystem and at most a small RTOS.)
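For what it is worth, my understanding of the usual hosted gcov workflow is roughly the sketch below (the file name and the trivial function are made up for illustration; the flags are standard gcc options):

/* example.c - trivial unit to show gcc/gcov instrumentation.

   Hosted workflow (a sketch, not something from this thread):
     gcc --coverage -O0 -o example example.c   (compile and link with instrumentation)
     ./example                                 (run; writes example.gcda beside example.gcno)
     gcov example.c                            (writes example.c.gcov with per-line hit counts)

   On a bare-metal target the .gcda data has to be extracted over a
   debug link instead, which is the filesystem problem I mentioned.  */

int classify(int x)
{
    if (x < 0)
        return -1;      /* only shows as covered if a test passes a negative value */
    return 1;
}

int main(void)
{
    return classify(42) == 1 ? 0 : 1;
}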

Maybe modern AI will do better, but it may be too expensive to make
business sense.
>
>
We can pretty much guarantee that commercial vendors will add claims of
AI to their tools and charge more for them.  Whether or not they will be
better for it is another matter.
 Yes.  Don't forget Quantum.
We are already into post-quantum algorithms, at least in some fields!

 
I would expect AI to be more useful in the context of static error
checkers, simulators, and fuzz testers rather than code coverage at
run-time.
Why?  I would think that an LLM could follow the thread far better than
any static checker.
 
I mean that I think there is more potential for adding useful AI algorithms to static checkers and simulators than there is for using AI algorithms in run-time code coverage tools.  But that's just a guess, not backed up by any evidence.

The US financial firm Morgan Stanley is using AI to analyze and
summarize nine million lines of code (in languages such as COBOL) for
re-implementation in modern languages.  This from The Wall Street
Journal, 3 June 2025 issue:
 "Morgan Stanley is now aiming artificial intelligence at one of
enterprise software's biggest pain points, and one it said Big Tech
hasn't quite nailed yet: helping rewrite old, outdated code into
modern coding languages.
I can see AI being a help here - just as many existing tools can be helpful for figuring out what old code does.  I am not holding my breath waiting for AI to manage such conversions on its own.

 In January, the company rolled out a tool known as DevGen.AI, built
in-house on OpenAI's GPT models. It can translate legacy code from
languages like COBOL into plain English specs that developers can then
use to rewrite it.
 So far this year it's reviewed nine million lines of code, saving
developers 280,000 hours, said Mike Pizzi, Morgan Stanley's global
head of technology and operations."
 

>
And it's still true - modern systems give more scope for hardware issues
than simpler systems (as well as more scope for subtle software bugs).
A cosmic ray in the wrong place can render all your software
verification void.
>
I must say that there was much worry about cosmic ray hits back in the
day, but they never turned out to matter in practice, except in space
systems.
>
I guess there are many factors for that.  If something weird happens,
and you have no explanation and it never happens again, then you can
easily say it was probably a cosmic ray - without any direct evidence.
It is also the case that many systems or subsystems are tolerant of an
occasional single-event upset - be it from cosmic rays or anything else.
  If you have ECC memory, or other kinds of redundancy or error
checking, rare errors there are not an issue.  So many types of memory,
buses, and communication protocols are effectively immune to such
things.  However, critical parts of the system will still be vulnerable
to hardware glitches.  It is not without justification that
safety-critical electronics often has two cores running in lockstep or
other types of redundancy.
 What happened is that semiconductor technology progressed to the point
that the amount of charge (or whatever) that distinguished symbols
became very small and thus vulnerable to random errors, for which an
error-correcting code had to be built in.  At this point, cosmic rays
were lost in the random noise, so to speak. So ECC is now inherent,
not an extra-cost bolt-on.
 
For very dense and small feature size electronics, that is mostly true - though even then there are parts that are vulnerable.  It's just that those parts are a tiny proportion of the die size, compared to memory arrays and the like.
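For anyone who has not looked at what the built-in ECC actually does, here is a toy Hamming SECDED encoder/decoder for a 4-bit value (my own sketch - real DRAM controllers use much wider codes, typically 64 data bits plus 8 check bits, and do it in hardware; __builtin_parity is a gcc extension):

#include <stdint.h>
#include <stdio.h>

/* Toy Hamming(7,4) plus an overall parity bit: single-error correct,
   double-error detect (SECDED).  Codeword bit n holds Hamming position
   n+1; bit 7 is the overall parity bit. */
static uint8_t encode(uint8_t nibble)
{
    unsigned d0 = (nibble >> 0) & 1, d1 = (nibble >> 1) & 1;
    unsigned d2 = (nibble >> 2) & 1, d3 = (nibble >> 3) & 1;
    unsigned p1 = d0 ^ d1 ^ d3;          /* covers positions 1,3,5,7 */
    unsigned p2 = d0 ^ d2 ^ d3;          /* covers positions 2,3,6,7 */
    unsigned p3 = d1 ^ d2 ^ d3;          /* covers positions 4,5,6,7 */
    uint8_t cw = (uint8_t)(p1 | p2 << 1 | d0 << 2 | p3 << 3 |
                           d1 << 4 | d2 << 5 | d3 << 6);
    return (uint8_t)(cw | __builtin_parity(cw) << 7);
}

/* Returns 0 = clean, 1 = single error corrected, 2 = double error detected. */
static int decode(uint8_t cw, uint8_t *nibble)
{
    unsigned s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    unsigned s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    unsigned s3 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    unsigned syndrome = s1 | s2 << 1 | s3 << 2;   /* position of a flipped bit, 0 = none */
    unsigned overall  = __builtin_parity(cw);     /* 0 for a clean codeword */
    int status = 0;

    if (syndrome && overall) {                    /* single-bit error: flip it back */
        cw ^= (uint8_t)(1u << (syndrome - 1));
        status = 1;
    } else if (syndrome) {                        /* two errors: detected, not correctable */
        return 2;
    } else if (overall) {                         /* the overall parity bit itself flipped */
        status = 1;
    }
    *nibble = (uint8_t)(((cw >> 2) & 1) | ((cw >> 4) & 1) << 1 |
                        ((cw >> 5) & 1) << 2 | ((cw >> 6) & 1) << 3);
    return status;
}

int main(void)
{
    uint8_t out;
    int st = decode(encode(0xA) ^ 0x04, &out);    /* inject one bit error */
    printf("status=%d value=0x%X\n", st, (unsigned)out);  /* expect status=1 value=0xA */
    return 0;
}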

 
The dominant source of errors turned out to be electrical cross-talk
and interference in the backplanes, and meta-stability in interfaces
between logic clock domains in the larger hardware system.
>
>
Sure.  Cosmic rays were only an example (pulled out of thin air :-) ).
Glitches on power lines, unlucky coincidences on bit patterns,
production flaws or ESD damage eroding electrical tolerances - there are
lots of possibilities.  I'm not trying to suggest relative likelihoods
here, as that will be highly variable.
 And this is still true.
 
I vaguely recall doing an analysis on this issue, some decades ago.
>
>
I recall something of the opposite - a long time ago, we had to add a
variety of "safety" features to a product to fulfil a customer's safety
/ reliability checklist, without regard to how realistic the failure
scenarios were and without spending time and money on analysis.  The
result was, IMHO, lower reliability because it was more likely for the
extra monitoring and checking hardware and software to fail than for the
original functional stuff to fail.  Many of these extra checks were in
themselves impossible to test.
 Yes.  I recall directly testing the issue with ECC as implemented in
early DEC VAX computers, in the early 1980s.  We had a customer who
specified ECC, so we had ECC.  And soon discovered that the computer
was more reliable with ECC disabled than enabled.  That was the end of
ECC.
 
Quis custodiet ipsos custodes?
Sometimes these fault monitors and error checking systems are just kicking the can further down the road, and not actually improving anything.
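A classic example of that trap is watchdog handling (a generic sketch of my own, with made-up hook names, not from any product mentioned here): if the watchdog is kicked unconditionally from a timer interrupt, it will happily keep a wedged system out of reset, and the "monitor" monitors nothing.

#include <stdint.h>

/* Placeholder for the hardware watchdog reload - the name is invented
   for this sketch, it is not any real HAL call. */
static void wdt_kick(void) { }

/* Each task sets its own bit when it completes a loop iteration. */
static volatile uint32_t task_alive_bits;
#define ALL_TASKS 0x07u          /* three tasks in this sketch */

/* Anti-pattern: kick the dog on every timer tick, no matter what the
   rest of the firmware is doing.  A wedged main loop is never caught,
   because the interrupt keeps firing regardless. */
void timer_isr_bad(void)
{
    wdt_kick();
}

/* Better: only kick when every task has checked in since the previous
   kick, so the watchdog actually monitors the things it should. */
void timer_isr_better(void)
{
    if (task_alive_bits == ALL_TASKS) {
        task_alive_bits = 0;
        wdt_kick();
    }
}

Even the "better" version only catches tasks that stop running - it says nothing about tasks that keep running but produce garbage.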
