Re: Duplicate identifiers in a single namespace

Subject : Re: Duplicate identifiers in a single namespace
From : joegwinn (at) *nospam* comcast.net (Joe Gwinn)
Newsgroups : sci.electronics.design
Date : 20. Oct 2024, 21:21:08
Message-ID : <cslahjdbii4ld914fi1lgtqqs3pd86sdpr@4ax.com>
References : 1 2 3 4 5 6 7 8
User-Agent : ForteAgent/8.00.32.1272
On Sat, 19 Oct 2024 17:15:24 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 10/19/2024 3:26 PM, Joe Gwinn wrote:
Will an average *coder* (someone who has managed to figure out how to
get a program to run and proclaimed himself a coder thereafter)
see the differences in his "product" (i.e., the code he has written)
and the blemishes/shortcomings it contains?
 
Well, we had the developers we had, and the team was large enough that
they could not all be superstars in any language.
>
And business targets "average performers" as the effort to hire
and retain "superstars" limits what the company can accomplish.

It's a little bit deeper than that.  Startups can afford to have a
large fraction of superstars (so long as they like each other) because
the need for spear carriers is minimal in that world.

But for industrial scale, there are lots of simpler and more boring
jobs that must also be done, thus diluting the superstars.

War story:  I used to run an Operating System Section, and one thing
we needed to develop was hardware memory test programs for use in the
factory.  We had a hell of a lot of trouble getting this done because
our programmers point-blank refused to do such test programs. 

One fine day, it occurred to me that the problem was that we were
trying to use race horses to pull plows.  So I went out to get the
human equivalent of a plow horse, one that was a tad autistic and so
would not be bored.  This worked quite well.  Fit the tool to the job.


I was still scratching my head about why Pascal was so different than
C, so I looked for the original intent of the founders.  Which I found
in the Introductions in the Pascal Report and K&R C:  Pascal was
intended for teaching Computer Science students their first
programming language, while C was intended for implementing large
systems, like the Unix kernel.
>
Wirth maintained a KISS attitude in ALL of his endeavors.  He
failed to see that requiring forward declarations wasn't really
making it any simpler /for the coders/.  Compilers get written
and revised "a few times" but *used* thousands of times.  Why
favor the compiler writer over the developer?
 
Because computers were quite expensive then (circa 1982), and so
Pascal was optimized to eliminate as much of the compiler task as
possible, given that teaching languages are used to solve toy
problems, the focus being learning to program, not to deliver
efficient working code for something industrial-scale in nature.
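
[A minimal C sketch of the declare-before-use constraint under
discussion -- my own illustration, not from the original posts; C's
forward prototype plays the same role as Pascal's "forward" directive,
and is what lets a one-pass compiler resolve every call as it reads
the source:]

    static int is_odd(unsigned n);            /* forward declaration */

    static int is_even(unsigned n)
    {
        return (n == 0) ? 1 : is_odd(n - 1);  /* resolvable on first pass */
    }

    static int is_odd(unsigned n)
    {
        return (n == 0) ? 0 : is_even(n - 1);
    }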
>
I went to school in the mid 70's.  Each *course* had its own
computer system (in addition to the school-wide "computing service")
because each professor had his own slant on how he wanted to
teach his courseware.  We wrote code in Pascal, PL/1, LISP, Algol,
Fortran, SNOBOL, and a variety of "toy" languages designed to
illustrate specific concepts and OS approaches.  I can't recall
compile time ever being an issue (but, the largest classes had
fewer than 400 students)

I graduated in 1969, and there were no computer courses on offer near
me except Basic programming, which I took.

Ten years later, I got a night-school masters degree in Computer
Science.


Prior operating systems were all written in assembly code, and so were
not portable between vendors; Unix therefore needed to be written in
something that could be ported, yet was sufficient to implement an
OS kernel.  Nor can one write an OS in Pascal.
>
You can write an OS in Pascal -- but with lots of "helper functions"
that defeat the purpose of the HLL's "safety mechanisms".
 
Yes, lots.  They were generally written in assembler, and it was
estimated that about 20% of the code would have to be in assembly if
Pascal were used, based on a prior project that had done just that a
few years earlier.
>
Yes.  The same is true of eking out the last bits of performance
from OSs written in C.  There are too many hardware oddities that
languages can't realistically address (without tying themselves
unduly to a particular architecture).
>
The target computers were pretty spare, multiple Motorola 68000
single-board computers in a VME crate or the like.  I recall that a
one megahertz instruction rate was considered really fast then.
>
Even the 645 ran at ~500KHz (!).  Yet, it supported hundreds of users
doing all sorts of different tasks.  (I think the 6180 ran at
~1MHz). 

Those were the days.  Our computers did integer arithmetic only,
because floating-point was done only in software and was dog slow.

And we needed multi-precision integer arithmetic for many things,
using scaled binary to handle the needed precision and dynamic range.
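
[For readers who haven't met it, a minimal sketch of what "scaled
binary" means in practice -- my own illustration with an arbitrary
scale, not the actual radar code: values are carried as integers with
an implied binary scale factor, so add/subtract are plain integer ops
and a multiply just needs a rescale:]

    #include <stdint.h>

    #define FRAC_BITS 8                 /* implied scale of 1/256 */

    typedef int32_t q24_8;              /* Q24.8 fixed-point value */

    static q24_8  fx_from(double x) { return (q24_8)(x * (1 << FRAC_BITS)); }
    static double fx_to(q24_8 v)    { return (double)v / (1 << FRAC_BITS); }

    static q24_8 fx_add(q24_8 a, q24_8 b) { return a + b; }   /* same scale */

    static q24_8 fx_mul(q24_8 a, q24_8 b)                     /* rescale */
    {
        return (q24_8)(((int64_t)a * b) >> FRAC_BITS);
    }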


  But, each of these could exploit the fact that users
don't consume all of the resources available /at any instant/
on a processor.
>
Contrast that with moving to the private sector and having
an 8b CPU hosting your development system (with dog slow
storage devices).

A realtime system can definitely consume a goodly fraction of the
computers.


Much was made by the Pascal folk of the cost of software maintenance,
but on the scale of a radar, maintenance was dominated by the
hardware, and software maintenance was a roundoff error on the total
cost of ownership.  The electric bill was also larger.
>
There likely is less call for change in such an "appliance".
Devices with richer UIs tend to see more feature creep.
This was one of Wirth's pet peeves; the fact that "designers"
were just throwing features together instead of THINKING about
which were truly needed.  E.g., Oberon looks like something
out of the 1980's...

In the 1970s, there was no such thing as such an appliance.

Nor did appliances like stoves and toasters possess a computer.


This did work - only something like 4% of Unix had to be written in
assembly, and it was simply rewritten for each new family of
computers.  (Turned out to be 6%.)
 
The conclusion was to use C:  It was designed for the implementation
of large realtime systems, while Pascal was designed as a teaching
language, and is somewhat slow and awkward for realtime systems,
forcing the use of various sidesteps, and much assembly code.  Speed
and the ability to drive hardware directly are the dominant issues
controlling that part of development cost and risk that is sensitive
to choice of implementation language.
>
One can write reliable code in C.  But, there has to be discipline
imposed (self or otherwise).  Having an awareness of the underlying
hardware goes a long way to making this adjustment.
>
I had to write a driver for a PROM Programmer in Pascal.  It was
a dreadful experience!  And, required an entirely different
mindset.  Things that you would do in C (or ASM) had incredibly
inefficient analogs in Pascal.
>
E.g., you could easily create an ASCII character for a particular
hex-digit and concatenate these to form a "byte"; then those
to form a word/address, etc.  (imagine doing that for every byte
you have to ship across to the programmer!)  In Pascal, you spent
all your time in call/return instead of actually doing any *work*!
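
[A minimal C sketch of the contrast being drawn -- my illustration,
not the actual driver: the hex-digit-to-ASCII step is a couple of
shifts, masks and table lookups, with no procedure calls at all:]

    static const char hexdig[] = "0123456789ABCDEF";

    /* one byte -> two ASCII hex characters, ready to ship to the
     * PROM programmer; repeat for each byte of a word/address */
    static void byte_to_hex(unsigned char b, char out[2])
    {
        out[0] = hexdig[b >> 4];        /* high nibble */
        out[1] = hexdig[b & 0x0F];      /* low nibble  */
    }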

Yes, a bullet dodged:

So the Pascal crowd fell silent, and C was chosen and successfully
used.
>
The Ada Mandate was rescinded maybe ten years later.  The ISO-OSI
mandate fell a year or so later, slain by TCP/IP.
>
I had to make a similar decision, early on.  It's really easy to get
on a soapbox and preach how it *should* be done.  But, if you expect
(and want) others to adopt and embellish your work, you have to choose
an implementation that they will accept, if not "embrace".
>
And, this without requiring scads of overhead (people and other
resources) to accomplish a particular goal.
>
Key in this is figuring out how to *hide* complexity so a user
(of varying degrees of capability across a wide spectrum) can
get something to work within the constraints you've laid out.
 
Hidden complexity is still complexity, with complex failure modes
rendered incomprehensible and random-looking to those unaware of
what's going on behind the pretty facade.
>
If you can't explain the bulk of a solution "seated, having a drink",
then it is too complex.  "Complex is anything that doesn't fit in a
single brain".

Well, current radar systems (and all manner of commercial products)
contain many millions of lines of code.  Fitting this into a few
brains is kinda achieved using layered abstractions.

This falls apart in the integration lab, when that which is hidden
turns on its creators.  Progress is paced by having some people who do
know how it really works, despite the abstractions, visible and
hidden.


Explain how the filesystem on <whatever> works, internally.  How
does it layer onto storage media?  How are "devices" hooked into it?
Abstract mechanisms like pipes?  Where does buffering come into
play?  ACLs?

There are people who do know these things.

This is "complex" because a legacy idea has been usurped to tie
all of these things together.
>
I prefer to eliminate such complexity.  And not to confuse the
programmers, or treat them like children.
>
By picking good abstractions, you don't have to do either.
But, you can't retrofit those abstractions to existing
systems.  And, too often, those systems have "precooked"
mindsets.

Yes.  Actually it's always.  And they don't know what they don't know,
the unknown unknowns.


War story from the days of Fortran, when I was the operating system
expert:  I had just these debates with the top application software
guy, who claimed that all you needed was the top-level design of the
software to debug the code.
 
He had been struggling with a mysterious bug, where the code would
[hang] soon after launch, every time.  Code inspection and path tracing
had all failed, for months.  He challenged me to figure it out.  I
figured it out in ten minutes, by using OS-level tools, which provide
access to a world completely unknown to the application software folk.
The problem was how the compiler handled subroutines referenced in one
module but not provided to the linker.  Long story, but the resulting
actual execution path was unrelated to the design of application
software, and one had to see things in assembly to understand what was
happening.
 
(This war story has been repeated in one form or another many times
over the following years.  Have kernel debugger, will travel.)
 
E.g., as I allow end users to write code (scripts), I can't
assume they understand things like operator precedence, cancellation,
etc.  *I* have to address those issues in a way that allows them
to remain ignorant and still get the results they expect/desire.
>
The same applies to other "more advanced" levels of software
development; the more minutiae that the developer has to contend with,
the less happy he will be about the experience.
>
[E.g., I modified my compiler to support a syntax of the form:
      handle=>method(arguments)
an homage to:
      pointer->member(arguments)
where "handle" is an identifier (small integer) that uniquely references
an object in the local context /that may reside on another processor/
(which means the "pointer" approach is inappropriate) so the developer
doesn't have to deal with the RMI mechanisms.]
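
[A minimal sketch of what such a handle might compile down to -- my
own illustration; the table layout and the rmi_send() transport are
hypothetical, not the poster's actual runtime:]

    #include <stddef.h>
    #include <stdint.h>

    struct object_ref {
        uint16_t node;          /* which processor hosts the object */
        uint32_t object_id;     /* identifier local to that node    */
    };

    /* per-context map from small-integer handle to remote object */
    static struct object_ref handle_table[256];

    /* hypothetical transport supplied by the runtime */
    extern int rmi_send(uint16_t node, uint32_t object_id,
                        unsigned method_id,
                        const void *args, size_t arg_len);

    /* handle=>method(args) lowers to something like this; the caller
     * never sees node numbers, addresses, or marshalling details */
    static int invoke(unsigned handle, unsigned method_id,
                      const void *args, size_t arg_len)
    {
        const struct object_ref *ref = &handle_table[handle];
        return rmi_send(ref->node, ref->object_id, method_id,
                        args, arg_len);
    }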
 
Pascal uses this exact approach.  The absence of true pointers is
crippling for hardware control, which is a big part of the reason that
C prevailed.
>
I don't eschew pointers.  Rather, if the object being referenced can
be remote, then a pointer is meaningless; what value should the pointer
have if the referenced object resides in some CPU at some address in
some address space at the end of a network cable?

Remote meaning accessed via a comms link or LAN is not done using RMIs
in my world - too slow and too asynchronous.  Round-trip transit delay
would kill you.  Also, not all messages need guaranteed delivery, and
it's expensive to provide that guarantee, so there need to be
distinctions.


I assume that RMI is Remote Module or Method Invocation.  These are
>
The latter.  Like RPC (instead of IPC) but in an OOPS context.

Object-Oriented stuff had its own set of problems, especially as
originally implemented.  My first encounter ended badly for the
proposed system, as it turned out that the OO overhead was so high
that the context switches between objects (tracks in this case) would
over consume the computers, leaving not enough time to complete a
horizon scan, never mind do anything useful.  But that's a story for
another day.


inherently synchronous (like Ada rendezvous) and are crippling for
realtime software of any complexity - the software soon ends up
deadlocked, with everybody waiting for everybody else to do something.
>
There is nothing that inherently *requires* an RMI to be synchronous.
This is only necessary if the return value is required, *there*.
E.g., actions that likely will take a fair bit of time to execute
are often more easily implemented as asynchronous invocations
(e.g., node127=>PowerOn()).  But, these need to be few enough that the
developer can keep track of "outstanding business"; expecting every
remote interaction to be asynchronous means you end up having to catch
a wide variety of diverse replies and sort out how they correlate
with your requests (that are now "gone").  Many developers have a hard
time trying to deal with this decoupled cause-effect relationship...
especially if the result is a failure indication (How do I
recover now that I've already *finished* executing that bit of code?)
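
[A minimal sketch of one way to keep that "outstanding business"
straight -- my illustration, hypothetical names: tag each asynchronous
invocation with a correlation id and park a completion handler in a
pending table, so late replies and failure indications can be matched
back to the request that caused them:]

    #include <stdint.h>

    typedef void (*reply_fn)(uint32_t corr_id, int status,
                             const void *result);

    struct pending {
        uint32_t corr_id;       /* echoed back by the remote node */
        reply_fn on_reply;      /* what to do when/if it returns  */
    };

    static struct pending pending_tbl[64];
    static uint32_t next_corr_id = 1;

    /* issue something like node127=>PowerOn() without blocking */
    static uint32_t async_invoke(unsigned handle, unsigned method_id,
                                 reply_fn cb)
    {
        uint32_t id = next_corr_id++;
        pending_tbl[id % 64] =
            (struct pending){ .corr_id = id, .on_reply = cb };
        /* ...marshal (handle, method_id, id) and hand to transport... */
        return id;
    }

    /* called by the transport for every reply, including failures */
    static void on_rmi_reply(uint32_t corr_id, int status,
                             const void *result)
    {
        struct pending *p = &pending_tbl[corr_id % 64];
        if (p->corr_id == corr_id && p->on_reply)
            p->on_reply(corr_id, status, result);
    }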

At the time, RMI was implemented synchronously only, and it did not
matter if a response was required, you would always stall at that call
until it completed.  Meaning that you could not respond to the random
arrival of an unrelated event.

War story:  Some years later, in the late 1980s, I was asked to assess
an academic operating system called Alpha for possible use in realtime
applications.  It was strictly synchronous.  Turned out that if you
made a typing mistake or the like, one could not stop the stream of
error messages without doing a full reboot.  There was a Control-Z
command, but it could not be processed because the OS was otherwise
occupied with an endless loop.  Oops.  End of assessment.

When I developed the message-passing ring test, it was to flush out
systems that were synchronous at the core, regardless of marketing
bafflegab.


But, synchronous programming is far easier to debug as you don't
have to keep track of outstanding asynchronous requests that
might "return" at some arbitrary point in the future.  As the
device executing the method is not constrained by the realities
of the local client, there is no way to predict when it will
have a result available.

Well, the alternative is to use a different paradigm entirely, where
for every event type there is a dedicated responder, which takes the
appropriate course of action.  Mostly this does not involve any other
action type, but if necessary it is handled here.  Typically, the
overall architecture of this approach is a Finite State Machine.
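
[A minimal sketch of that paradigm -- my illustration: events arrive
in any order, each is routed to a dedicated responder, and the current
state plus the event type selects the action:]

    enum event_type { EV_TRACK_UPDATE, EV_TIMEOUT, EV_OPERATOR_CMD,
                      EV_COUNT };
    enum state      { ST_IDLE, ST_TRACKING, ST_COUNT };

    struct event { enum event_type type; /* payload omitted */ };

    typedef enum state (*responder_fn)(enum state,
                                       const struct event *);

    /* one responder per (state, event) pair; filled in elsewhere */
    static responder_fn dispatch[ST_COUNT][EV_COUNT];

    static void event_loop(const struct event *(*next_event)(void))
    {
        enum state st = ST_IDLE;
        for (;;) {
            const struct event *ev = next_event();  /* blocks */
            responder_fn fn = dispatch[st][ev->type];
            if (fn)
                st = fn(st, ev);    /* responder may change state */
        }
    }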


This is driven by the fact that the real world has uncorrelated
events, capable of happening in any order, so no program that requires
that events be ordered can survive.
>
You only expect the first event you await to happen before you
*expect* the second.  That, because the second may have some (opaque)
dependence on the first.

Or, more commonly, be statistically uncorrelated random.  Like
airplanes flying into coverage as weather patterns drift by as flocks
of geese flap on by as ...


There is a benchmark for message-passing in realtime software where
there is a ring of threads or processes passing messages around the ring
any number of times.  This is modeled on the central structure of many
kinds of radar.
>
So, like most benchmarks, is of limited *general* use.

True, but what's the point?  It is general for that class of problems,
when the intent is to eliminate operating systems that cannot work for
a realtime system. There are also tests for how the thread scheduler
works, to see if one can respond immediately to an event, or must wait
until the scheduler makes a pass (waiting is forbidden). There are
many perfectly fine operating systems that will flunk these tests, and
yet are widely used.  But not for realtime.
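
[A minimal sketch of such a ring test using POSIX threads -- my own
illustration with arbitrary sizes; a real version times the laps: each
thread waits on its own slot and forwards the token to the next, so
the per-hop cost of message passing and scheduling falls straight out
of the elapsed time:]

    #include <pthread.h>
    #include <stdio.h>

    #define RING_SIZE 16
    #define LAPS      10000

    static pthread_mutex_t lock[RING_SIZE];
    static pthread_cond_t  cond[RING_SIZE];
    static int             token[RING_SIZE];   /* message-waiting flag */

    static void *ring_node(void *arg)
    {
        int me = (int)(long)arg, next = (me + 1) % RING_SIZE;
        for (int lap = 0; lap < LAPS; lap++) {
            pthread_mutex_lock(&lock[me]);
            while (!token[me])                  /* wait for the message */
                pthread_cond_wait(&cond[me], &lock[me]);
            token[me] = 0;
            pthread_mutex_unlock(&lock[me]);

            pthread_mutex_lock(&lock[next]);    /* pass it along */
            token[next] = 1;
            pthread_cond_signal(&cond[next]);
            pthread_mutex_unlock(&lock[next]);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[RING_SIZE];
        for (int i = 0; i < RING_SIZE; i++) {
            pthread_mutex_init(&lock[i], NULL);
            pthread_cond_init(&cond[i], NULL);
        }
        for (int i = 0; i < RING_SIZE; i++)
            pthread_create(&tid[i], NULL, ring_node, (void *)(long)i);

        pthread_mutex_lock(&lock[0]);           /* inject the message */
        token[0] = 1;
        pthread_cond_signal(&cond[0]);
        pthread_mutex_unlock(&lock[0]);

        for (int i = 0; i < RING_SIZE; i++)
            pthread_join(tid[i], NULL);
        puts("ring complete");
        return 0;
    }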

Joe Gwinn



Even one remote invocation will cause it to jam.  As
will sending a message to oneself.  Only asynchronous message passing
will work.
 
Joe Gwinn
>
