Subject: Re: "RESET"
From: david.brown (at) *nospam* hesbynett.no (David Brown)
Groups: sci.electronics.design
Date: 05. Jun 2025, 08:42:44
Organisation : A noiseless patient Spider
Message-ID : <101rhpl$1dp8g$1@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 04/06/2025 20:54, Joe Gwinn wrote:
On Wed, 4 Jun 2025 17:53:19 +0200, David Brown
<david.brown@hesbynett.no> wrote:
On 04/06/2025 16:55, Joe Gwinn wrote:
On Wed, 4 Jun 2025 12:58:21 +0200, David Brown
<david.brown@hesbynett.no> wrote:
>
All true, but at the end of the day, complexity metrics and coverage
tools didn't come even close to paying for themselves, and so they
gradually faded.
Fair enough. Even with open source tools where there is usually little or no up-front capital cost, there can still be significant costs in time and effort.
Of course that does not mean that complexity should be ignored - it just means that you deal with it in other ways, such as getting in the habit of not writing overly complex code in the first place!
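As a toy illustration of the habit I mean (everything here is invented for the example - hw_send(), MAX_PACKET and so on are not from any real project), the same check written with nesting piled up and again with guard clauses does the same job, but the second version barely registers on any complexity metric:

  #include <stddef.h>
  #include <stdint.h>

  #define MAX_PACKET 256   /* invented limit, just for the example */

  /* Stub standing in for the real driver call. */
  static int hw_send(const uint8_t *buf, size_t len)
  {
      (void)buf; (void)len;
      return 0;
  }

  /* Nested style - every added condition multiplies the paths to review. */
  int send_packet_nested(const uint8_t *buf, size_t len)
  {
      int rc = -1;
      if (buf != NULL) {
          if (len > 0) {
              if (len <= MAX_PACKET) {
                  rc = hw_send(buf, len);
              }
          }
      }
      return rc;
  }

  /* Same logic with guard clauses - flat, and obvious at a glance. */
  int send_packet(const uint8_t *buf, size_t len)
  {
      if (buf == NULL || len == 0 || len > MAX_PACKET)
          return -1;
      return hw_send(buf, len);
  }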
>
Fair enough. I haven't done anything significant with gcov, so I can't
say how good it might be. (It is very difficult to use tools that write
data to files when you are working on small microcontrollers with no
filesystem and at most a small RTOS.)
In those cases, the development computers were far larger than the
target systems.
Indeed they are - and you have significantly different methods for developing, debugging and testing the code.
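For completeness, the basic gcov flow on a host build is only a couple of steps - this is a minimal sketch with an invented file name, not anything from a real project, but it shows exactly where the "writes data to files" problem comes from on a target with no filesystem:

  /* cov_demo.c - minimal host-side gcov example.
   *
   * Build and use (host, not target):
   *   gcc --coverage -O0 -o cov_demo cov_demo.c
   *   ./cov_demo          (the instrumented program writes .gcda counter
   *                        files at exit - the step with no obvious home
   *                        on a small target without a filesystem)
   *   gcov cov_demo.c     (produces an annotated cov_demo.c.gcov listing)
   */
  #include <stdio.h>

  static int clamp(int x, int lo, int hi)
  {
      if (x < lo) return lo;      /* gcov shows whether this branch ran */
      if (x > hi) return hi;
      return x;
  }

  int main(void)
  {
      printf("%d\n", clamp(42, 0, 10));   /* exercises only the x > hi branch */
      return 0;
  }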
Why (AI)? I would think that an LLM could follow the thread far better than
any static checker.
>
>
I mean that I think there is more potential for adding useful AI
algorithms to static checkers and simulators than there is for using AI
algorithms in run-time code coverage tools. But that's just a guess,
not backed up by any evidence.
>
The US financial firm Morgan Stanley is using AI to analyze and
summarize nine million lines of code (in languages such as COBOL) for
re-implementation in modern languages. This from The Wall Street
Journal, 3 June 2025 issue:
>
"Morgan Stanley is now aiming artificial intelligence at one of
enterprise software's biggest pain points, and one it said Big Tech
hasn't quite nailed yet: helping rewrite old, outdated code into
modern coding languages.
>
I can see AI being a help here - just as many existing tools can be
helpful for figuring out what old code does. I am not holding my breath
waiting for AI to manage such conversions on its own.
Nor is MS - the new code is written by humans.
Although MS is trying automatic translation into modern languages, I
gather that it doesn't work all that well. In my world, there was a
lot of talk of building automatic converters to translate from one
computer language to another. It never worked because no machine
could understand the inventive ways people on tiny machines used
bespoke memory structures to improve performance.
Automatic translation between programming languages is unlikely to be successful unless the languages have very similar structures (say, Pascal to C) or you are using the output just as an intermediate language for compilation rather than as a new version of the program (like using cfront for C++ to C transcompilation). Good code written in one language is going to be structured differently from good code written in a different programming language. And these old code bases might have started as "good code" at one time - they are very unlikely to have remained so over decades of fiddling, fixing and expanding.
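A toy example of what I mean (invented for this post, not taken from any real conversion): a line-by-line rendering of 1-based, flag-driven Fortran-style code next to what someone writing C from scratch would produce. Both are correct, but only one of them reads like C:

  /* Mechanical, line-by-line translation of Fortran-style code. */
  int find_first_mechanical(int n, const int a[], int target)
  {
      int i;
      int found;
      int pos;
      found = 0;
      pos = 0;
      for (i = 1; i <= n; i = i + 1) {         /* 1-based loop carried over */
          if (a[i - 1] == target) {
              if (found == 0) {
                  found = 1;
                  pos = i;
              }
          }
      }
      if (found == 1)
          return pos - 1;                      /* convert back to 0-based */
      return -1;
  }

  /* The same routine written as C in the first place. */
  int find_first(int n, const int a[], int target)
  {
      for (int i = 0; i < n; i++)
          if (a[i] == target)
              return i;
      return -1;
  }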
War story: In the 1970s, I was involved in writing Fortran code to
implement a simulator used for training. This required 32-bit words
used as bit arrays, but the Fortran of the day had no bitwise
operations, only one-bit-per-word logic. Which was crippling, so I
wrote an assembly-coded Fortran function that did the bitwise
operations needed for this and that.
Our application programmers were intimidated by being asked to write a
little assembly, but I had set it up so they could easily do it, once
the shock wore off. It all worked.
Thirty years later, I got a phone call out of the blue from someone at a
company that had won a contract to recode the simulator in C,
wondering what to do with those assembly-coded functions. He was very
relieved when I said that if the ISA library had been available, we
would have used that, and that he should just read the assembly source
to find the bitwise operations being performed and write them directly in C.
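For what it's worth, assuming those functions were the usual 32-bit AND/OR/XOR/shift and bit test/set operations (a guess on my part, not a description of the actual library), the C replacements really are one-liners:

  #include <stdint.h>

  /* Hypothetical C equivalents of assembly-coded Fortran bitwise helpers.
   * The names are invented here; the originals would map onto these
   * operators more or less directly. */
  static inline uint32_t bit_and(uint32_t a, uint32_t b) { return a & b; }
  static inline uint32_t bit_or (uint32_t a, uint32_t b) { return a | b; }
  static inline uint32_t bit_xor(uint32_t a, uint32_t b) { return a ^ b; }
  static inline uint32_t bit_not(uint32_t a)             { return ~a; }
  static inline uint32_t bit_shl(uint32_t a, unsigned n) { return a << n; }
  static inline uint32_t bit_shr(uint32_t a, unsigned n) { return a >> n; }

  /* A 32-bit word used as a bit array: */
  static inline int      bit_test(uint32_t w, unsigned i) { return (w >> i) & 1u; }
  static inline uint32_t bit_set (uint32_t w, unsigned i) { return w | ((uint32_t)1 << i); }
  static inline uint32_t bit_clr (uint32_t w, unsigned i) { return w & ~((uint32_t)1 << i); }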