Re: Duplicate identifiers in a single namespace

Subject : Re: Duplicate identifiers in a single namespace
From : joegwinn (at) *nospam* comcast.net (Joe Gwinn)
Newsgroups : sci.electronics.design
Date : 23. Oct 2024, 16:13:59
Message-ID : <702ihjd92igs8m9ubqifnj8g42n4jmrhmo@4ax.com>
References : 1 2 3 4 5 6 7 8 9 10
User-Agent : ForteAgent/8.00.32.1272
On Sun, 20 Oct 2024 19:06:40 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 10/20/2024 1:21 PM, Joe Gwinn wrote:
>
Well, we had the developers we had, and the team was large enough that
they could not all be superstars in any language.
>
And business targets "average performers" as the effort to hire
and retain "superstars" limits what the company can accomplish.
 
It's a little bit deeper than that.  Startups can afford to have a
large fraction of superstars (so long as they like each other) because
the need for spear carriers is minimal in that world.
 
But for industrial scale, there are lots of simpler and more boring
jobs that must also be done, thus diluting the superstars.
>
The problem is that those "masses" tend to remain at the same skill
level, indefinitely.  And, often resist efforts to learn/change/grow.
Hiring a bunch of ditch diggers is fine -- if you will ALWAYS need
to dig ditches.  But, *hoping* that some will aspire to become
*architects* is a shot in the dark...
>
War story:  I used to run an Operating System Section, and one thing
we needed to develop was hardware memory test programs for use in the
factory.  We had a hell of a lot of trouble getting this done because
our programmers point-blank refused to do such test programs.
>
That remains true of *most* "programmers".  It is not seen as
an essential part of their job.

Nor could you convince them.  They wanted to do things that were hard,
important, and interesting.  And look good on a resume.


One fine day, it occurred to me that the problem was that we were
trying to use race horses to pull plows.  So I went out to get the
human equivalent of a plow horse, one that was a tad autistic and so
would not be bored.  This worked quite well.  Fit the tool to the job.
>
I had a boss who would have approached it differently.  He would have
had one of the technicians "compromise" their prototype/development
hardware.  E.g., crack a few *individual* cores to render them ineffective
as storage media.  Then, let the prima donnas chase mysterious bugs
in *their* code.  Until they started to question the functionality of
the hardware.  "Gee, sure would be nice if we could *prove* the
hardware was at fault.  Otherwise, it *must* just be bugs in your code!"

Actually, the customers often *required* us to inject faults to prove
that our spiffy fault-detection logic actually worked.  It's a lot
harder than it looks.


I went to school in the mid 70's.  Each *course* had its own
computer system (in addition to the school-wide "computing service")
because each professor had his own slant on how he wanted to
teach his courseware.  We wrote code in Pascal, PL/1, LISP, Algol,
Fortran, SNOBOL, and a variety of "toy" languages designed to
illustrate specific concepts and OS approaches.  I can't recall
compile time ever being an issue (but, the largest classes had
fewer than 400 students).
 
I graduated in 1969, and there were no computer courses on offer near
me except Basic programming, which I took.
>
I was writing FORTRAN code (on Hollerith cards) in that time frame
(I was attending a local college, nights, while in junior high school.)
Of course, it depends on the resources available to you.

I was doing Fortran on cards as well.


OTOH, I never had the opportunity to use a glass TTY until AFTER
college (prior to that, everything was hardcopy output -- DECwriters,
Trendata 1200's, etc.)

Real men used teletype machines, which required two real men to lift.
I remember them well.


Ten years later, I got a night-school masters degree in Computer
Science.
>
The target computers were pretty spare, multiple Motorola 68000
single-board computers in a VME crate or the like.  I recall that a
one megahertz instruction rate was considered really fast then.
>
Even the 645 ran at ~500KHz (!).  Yet, supported hundreds of users
doing all sorts of different tasks.  (I think the 6180 ran at
~1MHz).
 
Those were the days.  Our computers did integer arithmetic only,
because floating-point was done only in software and was dog slow.
 
And we needed multi-precision integer arithmetic for many things,
using scaled binary to handle the needed precision and dynamic range.
>
Yes.  It is still used (Q-notation) in cases where your code may not want
to rely on FPU support and/or has to run really fast.  I make extensive
use of it in my gesture recognizer where I am trying to fit sampled
points to the *best* of N predefined curves, in a handful of milliseconds
(interactive interface).
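
A minimal C sketch of that sort of scaled-binary (Q-notation) arithmetic,
assuming a Q16.16 split; the type and helper names are illustrative only:

    #include <stdint.h>

    /* Q16.16 fixed point: 16 integer bits, 16 fractional bits. */
    typedef int32_t q16_16;

    #define Q_FRAC_BITS 16
    #define Q_ONE       (1 << Q_FRAC_BITS)

    /* Conversions for I/O only; the hot path stays all-integer. */
    static inline q16_16 q_from_double(double d) { return (q16_16)(d * Q_ONE); }
    static inline double q_to_double(q16_16 q)   { return (double)q / Q_ONE; }

    /* Add/subtract are plain integer ops; multiply and divide need a
       widening intermediate and a shift to keep the scale. */
    static inline q16_16 q_mul(q16_16 a, q16_16 b)
    {
        return (q16_16)(((int64_t)a * b) >> Q_FRAC_BITS);
    }

    static inline q16_16 q_div(q16_16 a, q16_16 b)
    {
        return (q16_16)(((int64_t)a << Q_FRAC_BITS) / b);
    }
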

BTDT, though we fitted other curves to approximate such things as trig
functions.


Much was made by the Pascal folk of the cost of software maintenance,
but on the scale of a radar, maintenance was dominated by the
hardware, and software maintenance was a roundoff error on the total
cost of ownership.  The electric bill was also larger.
>
There likely is less call for change in such an "appliance".
Devices with richer UIs tend to see more feature creep.
This was one of Wirth's pet peeves; the fact that "designers"
were just throwing features together instead of THINKING about
which were truly needed.  E.g., Oberon looks like something
out of the 1980's...
 
In the 1970s, there was no such thing as such an appliance.
>
Anything that performed a fixed task.  My first commercial product
was a microprocessor-based LORAN position plotter (mid 70's).
Now, we call them "deeply embedded" devices -- or, my preference,
"appliances".

Well, in the radar world, the signal and data processors would be
fixed-task, but they were neither small nor simple.


Nor did appliances like stoves and toasters possess a computer.
>
Key in this is figuring out how to *hide* complexity so a user
(of varying degrees of capability across a wide spectrum) can
get something to work within the constraints you've laid out.
>
Hidden complexity is still complexity, with complex failure modes
rendered incomprehensible and random-looking to those unaware of
what's going on behind the pretty facade.
>
If you can't explain the bulk of a solution "seated, having a drink",
then it is too complex.  "Complex is anything that doesn't fit in a
single brain".
 
Well, current radar systems (and all manner of commercial products)
contain many millions of lines of code.  Fitting this into a few
brains is kinda achieved using layered abstractions.
>
You can require a shitload of code to implement a simple abstraction.
E.g., the whole notion of a Virtual Machine is easily captured in
a single imagination, despite the complexity of making it happen on
a particular set of commodity hardware.

Yep.  So don't do that.  Just ride the metal.


This falls apart in the integration lab, when that which is hidden
turns on its creators.  Progress is paced by having some people who do
know how it really works, despite the abstractions, visible and
hidden.
>
Explain how the filesystem on <whatever> works, internally.  How
does it layer onto storage media?  How are "devices" hooked into it?
Abstract mechanisms like pipes?  Where does buffering come into
play?  ACLs?
 
There are people who do know these things.
>
But that special knowledge is required due to a poor choice of
abstractions.  And, trying to shoehorn new notions into old
paradigms.  Just so you could leverage an existing name
resolver for system objects that weren't part of the idea of
"files".

No.  The problem is again that those pretty abstractions always hide
at least one ugly truth, and someone has to know what's inside that
facade, to fix many otherwise intractable problems.


I, for example, dynamically create a "context" for each process.
It is unique to that process so no other process can see its
contents or access them.  The whole notion of a separate
ACL layered *atop* this is moot; if you aren't supposed to
access something, then there won't be a *name* for that
thing in the context provided to you!

Yep.  We do layers a lot, basically for portability and reusability.
But at least a few people must know the truth.


A context is then just a bag of (name, object) tuples and
a set of rules for the resolver that operates on that context.
>
So, a program that is responsible for printing paychecks would
have a context *created* for it that contained:
    Clock -- something that can be queried for the current time/date
    Log -- a place to record its actions
    Printer -- a device that can materialize the paychecks
and, some *number* of:
    Paycheck -- a description of the payee and amount
i.e., the name "Paycheck" need not be unique in this context
(why artificially force each paycheck to have a unique name
just because you want to use an archaic namespace concept to
bind an identifier/name to each?  they're ALL just "paychecks")
>
    // resolve the objects governing the process
    theClock = MyContext=>resolve("Clock")
    theLog = MyContext=>resolve("Log")
    thePrinter = MyContext=>resolve("Printer")
    theDevice = thePrinter=>FriendlyName()
>
    // process each paycheck
    while( thePaycheck = MyContext=>resolve("Paycheck") ) {
        // get the parameters of interest for this paycheck
        thePayee = thePaycheck=>payee()
        theAmount = thePaycheck=>amount()
        theTime = theClock=>now()
>
        // print the check
        thePrinter=>write("Pay to the order of "
                          , thePayee
                          , "EXACTLY "
                          , stringify(theAmount)
                          )
>
        // make a record of the transaction
        theLog=>write("Drafted a disbursement to "
                      , thePayee
                      , " in the amount of "
                      , theAmount
                      , " at "
                      , theTime
                      , " printed on "
                      , theDevice
                      )
>
        // discard the processed paycheck
        MyContext=>unlink(thePaycheck)
     }
>
     // no more "Paycheck"s to process
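
The loop above assumes a resolver that tolerates duplicate names.  A
minimal C sketch of such a context, assuming a plain linked list of
(name, object) tuples in which resolve() returns the first binding not
yet unlinked (so unlinking each processed Paycheck exposes the next);
this is only the shape of the idea, not the actual implementation:

    #include <stddef.h>
    #include <string.h>

    /* A context is just a bag of (name, object) tuples; names need
       not be unique. */
    struct binding {
        const char     *name;
        void           *object;
        int             unlinked;
        struct binding *next;
    };

    struct context {
        struct binding *head;
    };

    /* First live binding for the name, or NULL when none are left. */
    void *context_resolve(struct context *ctx, const char *name)
    {
        for (struct binding *b = ctx->head; b != NULL; b = b->next)
            if (!b->unlinked && strcmp(b->name, name) == 0)
                return b->object;
        return NULL;
    }

    /* "Forget" how to reach an object; the object itself is untouched. */
    void context_unlink(struct context *ctx, void *object)
    {
        for (struct binding *b = ctx->head; b != NULL; b = b->next)
            if (b->object == object)
                b->unlinked = 1;
    }
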

Shouldn't this be written in COBOL on an IBM mainframe running CICS
(Customer Information Control System, a general-purpose transaction
processing subsystem for the z/OS operating system)?  That is where
the heavy lifting is done in such things as payroll generation
systems.

.<https://www.ibm.com/docs/en/zos-basic-skills?topic=zos-introduction-cics>


You don't need to worry about *where* each of these objects reside,
how they are implemented, etc.  I can move them at runtime -- even
WHILE the code is executing -- if that would be a better use of
resources available at the current time!  Each of the objects
bound to "Paycheck" names could reside in "Personnel's" computer
as that would likely be able to support the number of employees
on hand (thousands?).  And, by unlinking the name from my namespace,
I've just "forgotten" how to access that particular PROCESSED
"Paycheck"; the original data remains intact (as it should
because *I* shouldn't be able to dick with it!)
>
"The Network is the Computer" -- welcome to the 1980's!  (we'll
get there, sooner or later!)
>
And, if the developer happens to use a method that is not supported
on an object of that particular type, the compiler will grouse about
it.  If the process tries to use a method for which it doesn't have
permission (bound into each object reference as "capabilities"), the
OS will not pass the associated message to the referenced object
and the error handler will likely have been configured to KILL
the process (you obviously THINK you should be able to do something
that you can't -- so, you are buggy!)
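
What "capabilities bound into the reference" can look like in plain C:
a handle that carries a permission bitmask checked before any method is
dispatched.  This is purely illustrative, not the OS described above:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Permissions travel with the handle, not with the object. */
    #define CAP_READ   (1u << 0)
    #define CAP_WRITE  (1u << 1)
    #define CAP_UNLINK (1u << 2)

    struct handle {
        void    *object;
        uint32_t caps;
    };

    /* Every dispatch checks the capability first; a disallowed request
       is treated as a bug in the caller, as described above. */
    static void require(const struct handle *h, uint32_t cap)
    {
        if ((h->caps & cap) == 0) {
            fprintf(stderr, "capability violation -- killing process\n");
            exit(EXIT_FAILURE);
        }
    }

    void handle_write(struct handle *h, const char *text)
    {
        require(h, CAP_WRITE);
        /* ...forward the message to the object's server... */
        (void)text;
    }
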
>
No need to run this process as a particular UID and configure
the "files" in the portion of the file system hierarchy that
you've set aside for its use -- hoping that no one ELSE will
be able to peek into that area and harvest this information.
>
No worry that the process might go rogue and try to access
something it shouldn't -- like the "password" file -- because
it can only access the objects for which it has been *given*
names and only manipulate each of those through the capabilities
that have been bound to those handle *instances* (i.e., someone
else, obviously, has the power to create their *contents*!)
>
This is conceptually much cleaner.  And, matches the way you
would describe "printing paychecks" to another individual.

Maybe so, but conceptual clarity does not pay the rent or meet the
payroll.  Gotta get to the church on time.  Every time.


Pascal uses this exact approach.  The absence of true pointers is
crippling for hardware control, which is a big part of the reason that
C prevailed.
>
I don't eschew pointers.  Rather, if the object being referenced can
be remote, then a pointer is meaningless; what value should the pointer
have if the referenced object resides in some CPU at some address in
some address space at the end of a network cable?
 
Remote meaning accessed via a comms link or LAN is not done using RMIs
in my world - too slow and too asynchronous.  Round-trip transit delay
would kill you.  Also, not all messages need guaranteed delivery, and
it's expensive to provide that guarantee, so there need to be
distinctions.
>
Horses for courses.  "Real Time" only means that a deadline exists
for a task to complete.  It cares nothing about how "immediate" or
"often" such deadlines occur.  A deep space probe has deadlines
regarding when it must make orbital adjustment procedures
(flybys).  They may be YEARS in the future.  And, only a few
in number.  But, miss them and the "task" is botched.

Actually, that is not what "realtime" means in real-world practice.

The whole fetish about deadlines and deadline scheduling is an
academic fantasy.  The problem is that such systems are quite fragile
- if a deadline is missed, even slightly, the system collapses.  Which
is intolerable in practice, so there was always a path to handle the
occasional overrun gracefully.



I assume that RMI is Remote Module or Method Invocation.  These are
>
The latter.  Like RPC (instead of IPC) but in an OOPS context.
 
Object-Oriented stuff had its own set of problems, especially as
originally implemented.  My first encounter ended badly for the
proposed system, as it turned out that the OO overhead was so high
that the context switches between objects (tracks in this case) would
overconsume the computers, leaving not enough time to complete a
horizon scan, never mind do anything useful.  But that's a story for
another day.
>
OOPS as embodied in programming languages is fraught with all
sorts of overheads that don't often apply in an implementation.
>
However, dealing with "things" that have particular "operations"
and "properties" is a convenient way to model a solution.
>
In ages past, the paycheck program would likely have been driven by a
text file with N columns:  payee, amount, date, department, etc.
The program would extract tuples, sequentially, from that and
build a paycheck before moving on to the next line.
>
Of course, if something happened mid file, you were now faced
with the problem of tracking WHERE you had progressed and
restarting from THAT spot, exactly.  In my implementation,
you just restart the program and it processes any "Paycheck"s
that haven't yet been unlinked from its namespace.

Restart?  If you are implementing a defense against incoming
supersonic missiles, you just died.  RIP.


inherently synchronous (like Ada rendezvous) and are crippling for
realtime software of any complexity - the software soon ends up
deadlocked, with everybody waiting for everybody else to do something.
>
There is nothing that inherently *requires* an RMI to be synchronous.
This is only necessary if the return value is required, *there*.
E.g., actions that likely will take a fair bit of time to execute
are often more easily implemented as asynchronous invocations
(e.g., node127=>PowerOn()).  But, these need to be few enough that the
developer can keep track of "outstanding business"; expecting every
remote interaction to be asynchronous means you end up having to catch
a wide variety of diverse replies and sort out how they correlate
with your requests (that are now "gone").  Many developers have a hard
time trying to deal with this decoupled cause-effect relationship...
especially if the result is a failure indication (How do I
recover now that I've already *finished* executing that bit of code?)
 
At the time, RMI was implemented synchronously only, and it did not
matter if a response was required, you would always stall at that call
until it completed.  Meaning that you could not respond to the random
arrival of an unrelated event.
>
For asynchronous services, I would create a separate thread just
to handle those replies as I wouldn't want my main thread having to
be "interrupted" by late arriving messages that *it* would have to
process.  The second thread could convert those messages into
flags (or other data) that the main thread could examine when it
NEEDED to know about those other activities.
>
E.g., the first invocation of the write() method on "thePrinter"
could have caused the process that *implements* that printer
to power up the printer.  While waiting for it to come on-line,
it could buffer the write() requests so that they would be ready
when the printer actually *did* come on-line.
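
A minimal sketch of that reply-handling thread using POSIX threads: the
second thread drains late-arriving replies and turns them into a flag
the main thread polls only when it needs the answer.  The transport
function and reply code are invented for illustration:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Set by the reply thread, polled by the main thread when it cares. */
    static atomic_bool printer_online = false;

    /* Hypothetical blocking transport that delivers asynchronous replies. */
    extern int wait_for_reply(void);
    #define REPLY_PRINTER_ONLINE 1

    static void *reply_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            int reply = wait_for_reply();        /* blocks until a reply  */
            if (reply == REPLY_PRINTER_ONLINE)
                atomic_store(&printer_online, true);
            /* ...other replies become other flags or queued data... */
        }
        return NULL;
    }

    void start_reply_handler(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, reply_thread, NULL);
        pthread_detach(tid);
    }
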

This is how some early OO systems worked, and the time to switch
contexts between processes was quite long, so long that it was unable
to scan the horizon for incoming missiles fast enough to matter.


War story:  Some years later, in the late 1980s, I was asked to assess
an academic operating system called Alpha for possible use in realtime
applications.  It was strictly synchronous.  Turned out that if you
made a typing mistake or the like, one could not stop the stream of
error messages without doing a full reboot.  There was a Control-Z
command, but it could not be processed because the OS was otherwise
occupied with an endless loop.  Oops.  End of assessment.
>
*Jensen's* Alpha distributed processing across multiple domains.
So, "signals" had to chase the thread as it executed.  I have a
similar problem and rely on killing off a resource to notify
its consumers of its death and, thus, terminate their execution.

Hmm.  I think that Jensen's Alpha is the one in the war story.  We
were tipped off about Alpha's problem with runaway blather by one of
Jensen's competitors.


Of course, it can never be instantaneous as there are finite transit
delays to get from one node (where part of the process may be executing)
to another, etc.
>
But, my applications are intended to be run-to-completion, not
interactive.

And thus not suitable for essentially any realtime application.


When I developed the message-passing ring test, it was to flush out
systems that were synchronous at the core, regardless of marketing
bafflegab.
 
But, synchronous programming is far easier to debug as you don't
have to keep track of outstanding asynchronous requests that
might "return" at some arbitrary point in the future.  As the
device executing the method is not constrained by the realities
of the local client, there is no way to predict when it will
have a result available.
 
Well, the alternative is to use a different paradigm entirely, where
for every event type there is a dedicated responder, which takes the
appropriate course of action.  Mostly this does not involve any other
action type, but if necessary it is handled here.  Typically, the
overall architecture of this approach is a Finite State Machine.
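
A sketch of that dedicated-responder style in C, assuming a small fixed
set of event types and one handler per type; the event names and the
blocking event source are invented for illustration:

    #include <stddef.h>

    enum event_type { EV_TRACK_UPDATE, EV_TIMER, EV_OPERATOR_CMD, EV_COUNT };

    struct event {
        enum event_type type;
        void           *payload;
    };

    typedef void (*responder_fn)(const struct event *);

    static void on_track_update(const struct event *ev) { (void)ev; /* ... */ }
    static void on_timer(const struct event *ev)        { (void)ev; /* ... */ }
    static void on_operator_cmd(const struct event *ev) { (void)ev; /* ... */ }

    /* One responder per event type; the dispatch loop is the whole
       "architecture" -- in effect a finite state machine driver. */
    static const responder_fn responders[EV_COUNT] = {
        [EV_TRACK_UPDATE] = on_track_update,
        [EV_TIMER]        = on_timer,
        [EV_OPERATOR_CMD] = on_operator_cmd,
    };

    extern struct event next_event(void);    /* hypothetical; blocks */

    void event_loop(void)
    {
        for (;;) {
            struct event ev = next_event();
            responders[ev.type](&ev);
        }
    }
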
>
But then you are constrained to having those *dedicated* agents.
What if a device goes down or is taken off line (maintenance)?
I address this by simply moving the object to another node that
has resources available to service its requests.

One can design for such things, if needed.  It's called fault
tolerance (random breakage) or damage tolerance (also known as battle
damage).  But it's done in bespoke application code. 


So, if the "Personnel" computer (above) had to go offline, I would
move all of the Paycheck objects to some other server that could
serve up "paycheck" objects.  The payroll program wouldn't be aware
of this as the handle for each "paycheck" would just resolve to
the same object but on a different server.
>
The advantage, here, is that you can draw on ALL system resources to
meet any demand instead of being constrained by the resources in
a particular "box".  E.g., my garage door opener can be tasked with
retraining the speech recognizer.  Or, controlling the HVAC!

True, but not suited for many realtime applications.


This is driven by the fact that the real world has uncorrelated
events, capable of happening in any order, so no program that requires
that events be ordered can survive.
>
You only expect the first event you await to happen before you
*expect* the second.  That, because the second may have some (opaque)
dependence on the first.
 
Or, more commonly, be statistically uncorrelated random.  Like
airplanes flying into coverage as weather patterns drift by as flocks
of geese flap on by as ...
>
That depends on the process(es) being monitored/controlled.
E.g., in a tablet press, an individual tablet can't be compressed
until its granulation has been fed into its die (mold).
And, can't be ejected until it has been compressed.
>
So, there is an inherent order in these events, regardless of when
they *appear* to occur.
>
Sure, someone could be printing paychecks while I'm making
tablets.  But, the two processes don't interact so one cares
nothing about the other.

Yes for molding plastic, but what about the above described use cases,
where one cannot make any such assumption?


There is a benchmark for message-passing in realtime software where
there is a ring of threads or processes passing messages around the ring
any number of times.  This is modeled on the central structure of many
kinds of radar.
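
For concreteness, a sketch of that ring test using POSIX pipes and
threads: N stages, each forwarding a token to the next, timed over some
number of laps.  This shows only the shape of the test, not any
particular published benchmark:

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define STAGES 8
    #define LAPS   10000

    static int pipes[STAGES][2];          /* pipes[i] feeds stage i */

    /* Each stage reads the token and passes it to the next stage. */
    static void *stage(void *arg)
    {
        long i = (long)arg;
        char token;
        for (int lap = 0; lap < LAPS; lap++) {
            read(pipes[i][0], &token, 1);
            write(pipes[(i + 1) % STAGES][1], &token, 1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[STAGES];
        for (int i = 0; i < STAGES; i++)
            pipe(pipes[i]);
        for (long i = 1; i < STAGES; i++)
            pthread_create(&tid[i], NULL, stage, (void *)i);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        /* Main plays stage 0: inject the token and wait for each lap. */
        char token = '*';
        for (int lap = 0; lap < LAPS; lap++) {
            write(pipes[1][1], &token, 1);
            read(pipes[0][0], &token, 1);
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d laps of %d stages: %.1f us per hop\n",
               LAPS, STAGES, 1e6 * secs / ((double)LAPS * STAGES));
        return 0;
    }
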
>
So, like most benchmarks, is of limited *general* use.
 
True, but what's the point?  It is general for that class of problems,
when the intent is to eliminate operating systems that cannot work for
a realtime system.
>
What *proof* do you have of that assertion?  RT systems have been built
and deployed with synchronous interfaces for decades.  Even if those
are *implied* (i.e., using a FIFO/pipe to connect two processes).

The word "realtime" is wonderfully elastic, especially as used by
marketers.

A better approach is by use cases.

Classic test case.  Ownship is being approached by some number of
cruise missiles travelling at two or three times the speed of sound.
The ship is unaware of those missiles until they emerge from the
horizon. By the way, it will take a Mach 3 missile about thirty
seconds from detection to impact.  Now what? 


There are also tests for how the thread scheduler
works, to see if one can respond immediately to an event, or must wait
until the scheduler makes a pass (waiting is forbidden).
>
Waiting is only required if the OS isn't preemptive.  Whether or
not it is "forbidden" is a function of the problem space being addressed.

Again, it's not quite that simple, as many RTOSs are not preemptive,
but they are dedicated and quite fast.  But preemptive is common these
days.


There are
many perfectly fine operating systems that will flunk these tests, and
yet are widely used.  But not for realtime.
>
Again, why not?  Real time only means a deadline exists.  It says
nothing about frequency, number, nearness, etc.  If the OS is
deterministic, then its behavior can be factored into the
solution.

An OS can be deterministic, and still be unsuitable.  Many big compute
engine boxes have a scheduler that makes a sweep once a second, and
their definition of RT is to sweep ten times a second.  Which is
lethal in many RT applications.  So use a more suitable OS.


I wrote a tape drive driver.  The hardware was an 8-bit latch that captured
the eight data tracks off the tape.  And, held them until the transport
delivered another!
>
So, every 6 microseconds, I had to capture a byte before it would be
overrun by the next byte.  This is realtime NOT because of the nearness
of the deadlines ("the next byte") but, rather, because of the
deadline itself.  If I slowed the transport down to 1% of its
normal speed, it would still be real-time -- but, the deadline would
now be at t=600us.
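
The shape of that driver in C, assuming a hypothetical memory-mapped
data latch and status register; the addresses and bit names are
invented, and the 6 us budget is whatever the transport dictates:

    #include <stdint.h>

    /* Hypothetical memory-mapped tape interface: an 8-bit data latch
       plus a status register. */
    #define TAPE_DATA        (*(volatile uint8_t *)0xFFFF8000u)
    #define TAPE_STATUS      (*(volatile uint8_t *)0xFFFF8001u)
    #define TAPE_BYTE_READY  0x01u   /* a new byte sits in the latch       */
    #define TAPE_OVERRUN     0x02u   /* it was overwritten before we read  */

    static uint8_t           buffer[4096];
    static volatile unsigned head;       /* producer index (ISR side) */
    static volatile int      overrun;

    /* Interrupt handler: grab the byte before the transport delivers
       the next one (~6 us at full speed).  On overrun the driver can
       fall back to a "read reverse" pass to recover the block. */
    void tape_isr(void)
    {
        uint8_t status = TAPE_STATUS;

        if (status & TAPE_OVERRUN) {
            overrun = 1;                 /* schedule a read-reverse retry */
            return;
        }
        if (status & TAPE_BYTE_READY)
            buffer[head++ % sizeof buffer] = TAPE_DATA;
    }
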

I've done that too.


"Hard" or "soft"?  If I *missed* the datum, it wasn't strictly
"lost"; it just meant that I had to do a "read reverse" to capture
it coming back under the head.  If I had to do this too often,
performance would likely have been deemed unacceptable.
>
OTOH, if I needed the data regardless of how long it took, then
such an approach *would* be tolerated.

How would this handle a cruise missile?  One can ask it to back up and
try again, but it's unclear that the missile is listening.


Folks always want to claim they have HRT systems -- an admission that
they haven't sorted out how to convert the problem into a SRT one
(which is considerably harder as it requires admitting that you likely
WILL miss some deadlines; then what?).
>
If an incoming projectile isn't intercepted (before it inflicts damage)
because your system has been stressed beyond its operational limits,
do you shut down the system and accept defeat?
>
If you designed with that sort of "hard" deadline in mind, you likely
are throwing runtime resources at problems that you won't be able
to address -- at the expense of the *next* problem that you possibly
COULD have, had you not been distracted.
>
The child's game of Whack-a-Mole is a great example.  The process(es)
have a release time defined by when the little bugger pokes his head up.
The *deadline* is when he decides to take cover, again.  If you can't
whack him in that time period, you have failed.
>
But, you don't *stop*!
>
And, if you have started to take action on an "appearance" that you
know you will not be able to "whack", you have hindered your
performance on the *next* appearance; better to take the loss and
prepare yourself for that *next*.
>
I.e., in HRT problems, once the deadline has passed, there is
no value to continuing to work on THAT problem.  And, if you
can anticipate that you won't meet that deadline, then aborting
all work on it ASAP leaves you with more resources to throw
at the NEXT instance.
>
But, many designers are oblivious to this and keep chasing the
current deadline -- demanding more and more resources to be
able to CLAIM they can meet those.  Amusingly, most have no way
of knowing that they have missed a deadline save for the
physical consequences ("Shit!  They just nuked Dallas!").
This because most RTOSs have no *real* concept of deadlines
so the code can't adjust its "priorities".
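
A sketch of that "drop what you can no longer whack" policy, assuming
CLOCK_MONOTONIC timestamps and a known worst-case service time; the
numbers and names are illustrative:

    #include <stdbool.h>
    #include <time.h>

    /* Worst-case time needed to service one instance, in nanoseconds. */
    #define WCET_NS 2000000L             /* 2 ms, illustrative */

    static long ns_until(const struct timespec *deadline)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return (deadline->tv_sec - now.tv_sec) * 1000000000L
             + (deadline->tv_nsec - now.tv_nsec);
    }

    /* Admission test: if this instance can no longer meet its deadline,
       drop it now and save the cycles for the next one. */
    bool worth_starting(const struct timespec *deadline)
    {
        return ns_until(deadline) >= WCET_NS;
    }

    /* Mid-flight check: abandon work whose deadline has already passed. */
    bool already_late(const struct timespec *deadline)
    {
        return ns_until(deadline) < 0;
    }
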

Yeah.  I think we are solving very different problems, so it's time to
stop.

Joe Gwinn
