Re: Duplicate identifiers in a single namespace

Subject: Re: Duplicate identifiers in a single namespace
From: blockedofcourse (at) *nospam* foo.invalid (Don Y)
Newsgroups: sci.electronics.design
Date: 23 Oct 2024, 19:48:30
Organization: A noiseless patient Spider
Message-ID: <vfbge4$25hnp$1@dont-email.me>
References: 1 2 3 4 5 6 7 8 9 10 11
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.2.2
On 10/23/2024 8:13 AM, Joe Gwinn wrote:
War story:  I used to run an Operating System Section, and one thing
we needed to develop was hardware memory test programs for use in the
factory.  We had a hell of a lot of trouble getting this done because
our programmers point-blank refused to do such test programs.
>
That remains true of *most* "programmers".  It is not seen as
an essential part of their job.
 Nor could you convince them.  They wanted to do things that were hard,
important, and interesting.  And look good on a resume.
One has to remember who the *boss* is in any relationship.

One fine day, it occurred to me that the problem was that we were
trying to use race horses to pull plows.  So I went out to get the
human equivalent of a plow horse, one that was a tad autistic and so
would not be bored.  This worked quite well.  Fit the tool to the job.
>
I had a boss who would have approached it differently.  He would have
had one of the technicians "compromise" their prototype/development
hardware.  E.g., crack a few *individual* cores to render them ineffective
as storage media.  Then, let the prima donnas chase mysterious bugs
in *their* code.  Until they started to question the functionality of
the hardware.  "Gee, sure would be nice if we could *prove* the
hardware was at fault.  Otherwise, it *must* just be bugs in your code!"
 Actually, the customers often *required* us to inject faults to prove
that our spiffy fault-detection logic actually worked.  It's a lot
harder than it looks.
Yup.  I have to validate my DRAM test routines.  How do you cause a DRAM
to experience an SEU?  A hard fault?  A transient fault?  Multiple-bit
faults (at the same address, or at different addresses in a shared row
or column)?
And:  you replace the DRAM with something that mimics a DRAM that you
can *command* to fail in specific ways at specific times and verify that
your code actually sees those failures.
What's harder is injecting faults *inside* MCUs where, increasingly,
more and more critical resources reside.
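As a rough illustration of that "commandable failure" idea, here is a minimal
C sketch of a fault-injection shim sitting between a memory-test routine and
the part under test.  Everything here (function names, the single global
fault slot) is hypothetical, not any particular product's code:

    /* Hypothetical fault-injection shim for exercising a DRAM test routine.
     * The test code reads memory through mem_read() rather than dereferencing
     * pointers directly, so a fault can be *commanded* at a chosen address
     * and the test verified to actually report it.
     */
    #include <stdint.h>
    #include <stdbool.h>

    static uintptr_t fault_addr;    /* address at which to corrupt reads    */
    static uint32_t  fault_mask;    /* bit(s) to flip; 0x1 mimics an SEU    */
    static bool      fault_armed;
    static bool      fault_sticky;  /* true = hard fault, false = transient */

    void inject_fault(uintptr_t addr, uint32_t mask, bool sticky)
    {
        fault_addr   = addr;
        fault_mask   = mask;
        fault_sticky = sticky;
        fault_armed  = true;
    }

    uint32_t mem_read(volatile uint32_t *p)
    {
        uint32_t v = *p;
        if (fault_armed && (uintptr_t)p == fault_addr) {
            v ^= fault_mask;                  /* corrupt the returned data  */
            if (!fault_sticky)
                fault_armed = false;          /* transient: one hit only    */
        }
        return v;
    }

A transient (SEU-like) fault disarms itself after one hit; a "sticky" fault
keeps corrupting the same location, which should make the test routine flag
a hard failure.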

OTOH, I never had the opportunity to use a glass TTY until AFTER
college (prior to that, everything was hardcopy output -- DECwriters,
Trendata 1200's, etc.)
 Real men used teletype machines, which required two real men to lift.
I remember them well.
I have one in the garage.  Hard to bring myself to part with it...
"nostalgia".  (OTOH, I've rid myself of the mag tape transports...)

And we needed multi-precision integer arithmetic for many things,
using scaled binary to handle the needed precision and dynamic range.
>
Yes.  It is still used (Q-notation) in cases where your code may not want
to rely on FPU support and/or has to run really fast.  I make extensive
use of it in my gesture recognizer where I am trying to fit sampled
points to the *best* of N predefined curves, in a handful of milliseconds
(interactive interface).
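For readers unfamiliar with Q-notation, a minimal C sketch of Q15 fixed-point
helpers (a common format; the names here are made up, not taken from the code
being described):

    /* Q15 fixed-point: values in [-1, 1) stored in int16_t, scaled by 2^15.
     * No FPU required; a multiply is one integer multiply plus a shift,
     * which is what makes it attractive in tight curve-fitting loops.
     */
    #include <stdint.h>

    typedef int16_t q15_t;

    #define F_TO_Q15(f)  ((q15_t)((f) * 32768.0))  /* for constants, |f| < 1 */

    static inline q15_t q15_mul(q15_t a, q15_t b)
    {
        return (q15_t)(((int32_t)a * b) >> 15);    /* rescale the product    */
    }

    static inline q15_t q15_add_sat(q15_t a, q15_t b)
    {
        int32_t s = (int32_t)a + b;
        if (s >  32767) s =  32767;                /* saturate, don't wrap   */
        if (s < -32768) s = -32768;
        return (q15_t)s;
    }

A sum of q15_mul() terms is exactly the kind of inner product a fit against
N predefined curves boils down to, so the whole comparison stays in integer
arithmetic.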
 BTDT, though we fitted other curves to approximate such as trig
functions.
Likely the data points didn't change every second as they do with
a user trying to signal a particular gesture in an interactive environment.

In the 1970s, there was no such thing as such an appliance.
>
Anything that performed a fixed task.  My first commercial product
was a microprocessor-based LORAN position plotter (mid 70's).
Now, we call them "deeply embedded" devices -- or, my preference,
"appliances".
 Well, in the radar world, the signal and data processors would be
fixed-task, but they were neither small nor simple.
Neither particular size nor complexity is required.  Rather, it's that
the device isn't "general purpose".

A context is then just a bag of (name, object) tuples and
a set of rules for the resolver that operates on that context.
>
So, a program that is responsible for printing paychecks would
have a context *created* for it that contained:
     Clock -- something that can be queried for the current time/date
     Log -- a place to record its actions
     Printer -- a device that can materialize the paychecks
and, some *number* of:
     Paycheck -- a description of the payee and amount
i.e., the name "Paycheck" need not be unique in this context
(why artificially force each paycheck to have a unique name
just because you want to use an archaic namespace concept to
bind an identifier/name to each?  they're ALL just "paychecks")
>
     // resolve the objects governing the process
     theClock = MyContext=>resolve("Clock")
     theLog = MyContext=>resolve("Log")
     thePrinter = MyContext=>resolve("Printer")
     theDevice = thePrinter=>FriendlyName()
>
     // process each paycheck
     while( thePaycheck = MyContext=>resolve("Paycheck") ) {
         // get the parameters of interest for this paycheck
         thePayee = thePaycheck=>payee()
         theAmount = thePaycheck=>amount()
         theTime = theClock=>now()
>
         // print the check
         thePrinter=>write("Pay to the order of "
                           , thePayee
                           , "EXACTLY "
                           , stringify(theAmount)
                           )
>
         // make a record of the transaction
         theLog=>write("Drafted a disbursement to "
                       , thePayee
                       , " in the amount of "
                       , theAmount
                       , " at "
                       , theTime
                       , " printed on "
                        , theDevice
                       )
>
         // discard the processed paycheck
         MyContext=>unlink(thePaycheck)
      }
>
      // no more "Paycheck"s to process
 Shouldn't this be written in COBOL running on an IBM mainframe running
CICS (Customer Information Control System, a general-purpose
transaction processing subsystem for the z/OS operating system)?  This
is where the heavy lifting is done in such as payroll generation
systems.
It's an example constructed, on-the-fly, to illustrate how problems
are approached in my environment.  It makes clear that the developer
need not understand how any of these objects are implemented nor
have access to other methods that they likely support (i.e., someone
obviously had to *set* the payee and amount in each paycheck, had to
determine *which* physical printer would be used, what typeface would
be used to "write" on the device, etc.).
It also hides -- and provides flexibility to the implementor -- how
these things are done.  E.g., does the "payee()" method just return
a string embodied in the "paycheck" object?  Or, does it run a
query on a database to fetch that information?  Does theClock
reflect the time in the physical location where the printer resides?
Where the paycheck resides?  Or, some other place?  Is it indicated
in 24 hour time?  Which timezone?  Is DST observed, there?  etc.
(why should a developer have to know these things just to print paychecks?)
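As a sketch of how such a context might be represented (a bag of
(name, object) tuples, where duplicate names like "Paycheck" are perfectly
legal), here is some illustrative C.  The types and function names are
hypothetical, not the actual system's API:

    /* A context as a linked bag of (name, object) tuples.  resolve() returns
     * *some* object currently bound to the name; unlink() removes one
     * specific tuple.  Nothing requires names to be unique, so several
     * tuples may all be named "Paycheck".
     */
    #include <stddef.h>
    #include <string.h>

    struct tuple   { const char *name; void *object; struct tuple *next; };
    struct context { struct tuple *head; };

    void *context_resolve(struct context *ctx, const char *name)
    {
        for (struct tuple *t = ctx->head; t != NULL; t = t->next)
            if (strcmp(t->name, name) == 0)
                return t->object;       /* first match; duplicates are fine */
        return NULL;                    /* no object by that name remains   */
    }

    void context_unlink(struct context *ctx, void *object)
    {
        for (struct tuple **pp = &ctx->head; *pp != NULL; pp = &(*pp)->next)
            if ((*pp)->object == object) {
                struct tuple *dead = *pp;
                *pp = dead->next;       /* drop just this one tuple         */
                (void)dead;             /* free it here if heap-allocated   */
                return;
            }
    }

With that shape, the paycheck loop above terminates naturally: each unlink()
removes one "Paycheck" tuple, and resolve("Paycheck") finally returns nothing
once the bag holds no more of them.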

No need to run this process as a particular UID and configure
the "files" in the portion of the file system hierarchy that
you've set aside for its use -- hoping that no one ELSE will
be able to peek into that area and harvest this information.
>
No worry that the process might go rogue and try to access
something it shouldn't -- like the "password" file -- because
it can only access the objects for which it has been *given*
names and only manipulate each of those through the capabilities
that have been bound to those handle *instances* (i.e., someone
else, obviously, has the power to create their *contents*!)
>
This is conceptually much cleaner.  And, matches the way you
would describe "printing paychecks" to another individual.
 Maybe so, but conceptual clarity does not pay the rent or meet the
payroll.  Gotta get to the church on time.  Every time.
Conceptual clarity increases the likelihood that the *right* problem
is solved.  And, one only has to get to the church as promptly as
is tolerated; if payroll doesn't get done on Friday (as expected),
it still has value if the checks are printed on *Monday* (though you
may have to compensate the recipients in some way, and surely shouldn't
make a point of doing this often).

Pascal uses this exact approach.  The absence of true pointers is
crippling for hardware control, which is a big part of the reason that
C prevailed.
>
I don't eschew pointers.  Rather, if the object being referenced can
be remote, then a pointer is meaningless; what value should the pointer
have if the referenced object resides in some CPU at some address in
some address space at the end of a network cable?
>
Remote meaning accessed via a comms link or LAN is not done using RMIs
in my world - too slow and too asynchronous.  Round-trip transit delay
would kill you.  Also, not all messages need guaranteed delivery, and
it's expensive to provide that guarantee, so there need to be
distinctions.
>
Horses for courses.  "Real Time" only means that a deadline exists
for a task to complete.  It cares nothing about how "immediate" or
"often" such deadlines occur.  A deep space probe has deadlines
regarding when it must make orbital adjustment procedures
(flybys).  They may be YEARS in the future.  And, only a few
in number.  But, miss them and the "task" is botched.
 Actually, that is not what "realtime" means in real-world practice.
 The whole fetish about deadlines and deadline scheduling is an
academic fantasy.  The problem is that such systems are quite fragile
- if a deadline is missed, even slightly, the system collapses.
No.  That's a brittle system.  If you miss a single incoming missile,
the system doesn't collapse -- unless your defensive battery is targeted.
X, Y or Z may sustain losses.  But, you remain on-task to defend against
A, B or C suffering a similar fate.
The "all or nothing" mindset is a naive approach to "real-time".
A tablet press produces ~200 tablets per minute.  Each minute.
For an 8-hour shift.
If I *miss* capturing the ACTUAL compression force that a specific
tablet experiences as it undergoes compression to a fixed geometry,
then I have no idea as to the actual weight of that particular tablet
(which corresponds with the amount of "actives" in the tablet).
I don't shut down the tablet press because of such a missed deadline.
Rather, I mark that tablet as "unknown" and, depending on the
product being formulated, arrange for it to be discarded some number
of milliseconds later when it leaves the tablet press -- instead of
being "accepted" like those whose compression force was observed and
found to be within the nominal range to indicate a correct tablet
weight (you can't *weigh* individual tablets at those speeds).
You design the system -- mechanism, hardware and software -- so that
you can meet the deadlines that your process imposes.  And, hope to
maximize such compliance, relying on other mechanisms to correctly
handle the "exceptions" that might otherwise get through.
*IF* the mechanism that is intended to dispatch the rejected tablet
fails (indicated by your failing to "see" the tablet exit via the
"reject" chute), then you have a "dubious" tablet in a *barrel* of
otherwise acceptable tablets.  Isolating it would be time consuming
and costly.
So, use smaller barrels so the amount of product that you have to
discard in such an event is reduced.
Or, run the process slower to minimize the chance of such "double
faults" (the first being the failure to observe some critical
aspect of the tablet's production; the second the failure to
definitively discard such tablets).
Or, bitch to your provider that their product is failing to meet
its advertised specifications.
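A minimal sketch of that "mark it, reject it downstream, and watch the reject
chute" policy, in C.  The station names, status codes and sensor hooks are
all hypothetical:

    /* Per-tablet bookkeeping for the policy described above. */
    #include <stdbool.h>

    enum tablet_status { TABLET_GOOD, TABLET_UNKNOWN };

    struct tablet {
        enum tablet_status status;
        int force;                         /* measured compression force    */
    };

    /* Called per tablet at the compression station (or when the deadline
     * to capture the force sample has passed).
     */
    void on_compression(struct tablet *t, bool sample_captured,
                        int force, int lo, int hi)
    {
        if (!sample_captured || force < lo || force > hi)
            t->status = TABLET_UNKNOWN;    /* missed or out-of-range sample */
        else {
            t->status = TABLET_GOOD;
            t->force  = force;
        }
    }

    /* Called a few milliseconds later, when the tablet reaches the gate.   */
    bool gate_should_reject(const struct tablet *t)
    {
        return t->status != TABLET_GOOD;
    }

    /* If a commanded reject is never seen leaving via the reject chute,
     * the dubious tablet is now in the barrel: quarantine that barrel.
     */
    bool barrel_must_quarantine(bool reject_commanded, bool seen_in_chute)
    {
        return reject_commanded && !seen_in_chute;
    }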

Which
is intolerable in practice, so there was always a path to handle the
occasional overrun gracefully.
One has to *know* that there was an overrun in order to know that
it has to be handled.  Most RTOSs have no mechanisms to detect such
overruns -- because they have no notion of the associated deadlines!
I chuckle at how few systems with serial (EIA232) ports actually
*did* anything with overrun, framing, or parity errors... you (your code)
KNOW the character extracted from the receiver is NOT, likely, the
character that you THINK it is, so why are you passing it along
up the stack?
Nowadays, similar mechanisms occur in network stacks.  Does
*anything* (higher level) know that the hardware is struggling?
That the interface has slipped into HDX mode?  That lots of
packets are being rejected?  What does the device *do* about
these things?  Or, do you just wait until the user gets
annoyed with the performance and tries to diagnose the problem,
himself?
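As an example of actually *doing* something with those error indications, a
C sketch for a 16550-style UART (the register bit layout follows the common
16550 convention; the accessor functions and counters are hypothetical):

    #include <stdint.h>

    #define LSR_DR  0x01    /* data ready     */
    #define LSR_OE  0x02    /* overrun error  */
    #define LSR_PE  0x04    /* parity error   */
    #define LSR_FE  0x08    /* framing error  */

    extern uint8_t uart_read_lsr(void);    /* platform-specific accessors   */
    extern uint8_t uart_read_rbr(void);

    struct uart_stats { unsigned overruns, parity, framing; };

    /* Returns the received byte (0..255), or -1 if it can't be trusted.    */
    int uart_getc_checked(struct uart_stats *st)
    {
        uint8_t lsr = uart_read_lsr();

        if (!(lsr & LSR_DR))
            return -1;                     /* nothing to read               */

        uint8_t ch = uart_read_rbr();      /* always drain the receiver     */

        if (lsr & LSR_OE) st->overruns++;  /* count what the hardware saw   */
        if (lsr & LSR_PE) st->parity++;
        if (lsr & LSR_FE) st->framing++;

        if (lsr & (LSR_OE | LSR_PE | LSR_FE))
            return -1;                     /* don't pass garbage up the stack */

        return ch;
    }

The counters give something upstream a chance to notice that the link is
struggling, instead of the errors silently vanishing.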

Of course, if something happened mid file, you were now faced
with the problem of tracking WHERE you had progressed and
restarting from THAT spot, exactly.  In my implementation,
you just restart the program and it processes any "Paycheck"s
that haven't yet been unlinked from its namespace.
 Restart?  If you are implementing a defense against incoming
supersonic missiles, you just died.  RIP.
Only if *you* were targeted by said missile.  If some other asset
was struck, would you stop defending?
If the personnel responsible for bringing new supplies to the
defensive battery faltered, would you tell them not to bother
trying, again, to replenish those stores?
You seem to think all RT systems are missile defense systems.

For asynchronous services, I would create a separate thread just
to handle those replies as I wouldn't want my main thread having to
be "interrupted" by late arriving messages that *it* would have to
process.  The second thread could convert those messages into
flags (or other data) that the main thread could examine when it
NEEDED to know about those other activities.
>
E.g., the first invocation of the write() method on "thePrinter"
could have caused the process that *implements* that printer
to power up the printer.  While waiting for it to come on-line,
it could buffer the write() requests so that they would be ready
when the printer actually *did* come on-line.
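That "second thread turns late replies into flags" pattern might look like
the following POSIX-threads sketch.  The reply source and its message values
are hypothetical stand-ins:

    #include <pthread.h>
    #include <stdbool.h>

    extern int wait_for_reply(void);       /* blocks until some async reply */
    #define PRINTER_READY 1                /* hypothetical reply code       */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool printer_online;            /* flag the main thread examines */

    static void *reply_handler(void *arg)  /* the "second thread"           */
    {
        (void)arg;
        for (;;) {
            int reply = wait_for_reply();  /* arrives whenever it arrives   */
            pthread_mutex_lock(&lock);
            if (reply == PRINTER_READY)
                printer_online = true;     /* message -> flag               */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    bool is_printer_online(void)           /* main thread polls when NEEDED */
    {
        pthread_mutex_lock(&lock);
        bool v = printer_online;
        pthread_mutex_unlock(&lock);
        return v;
    }

    void start_reply_handler(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, reply_handler, NULL);
    }

The main thread never blocks on the asynchronous traffic; it just checks the
flag at the points where the answer actually matters.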
 This is how some early OO systems worked, and the time to switch
contexts between processes was quite long, so long that it was unable
to scan the horizon for incoming missiles fast enough to matter.
That's a consequence of nearness of deadline vs. system resources
available.  Hardware is constantly getting faster.  Designing as
if the hardware will always deliver a specific level of performance
means you are constantly redesigning.  And, likely using obsolete
hardware because it was *specified* long before the system's deployment
date.
[When making tablets, it's more efficient to have a single process
"track" each tablet instance from granulation feed, through precompression,
compression, ejection and dispatch than it is to have separate
processes for each of these steps.  And, with tablet periods of
~5ms, the time required for context switches would be just noise,
but still an issue worth addressing.  That's where system engineering
comes into play.]
When I toured NORAD (Cheyenne Mountain Complex) in the 80's, they
were in the process of installing hardware ordered in the 60's
(or so the tour guide indicated; believable given the overhead
in specifying, bidding, designing and implementing such systems).
Most RT systems have much shorter cycle times.  A few years from
conception to deployment is far more common; even less in the
consumer market.

War story:  Some years later, in the late 1980s, I was asked to assess
an academic operating system called Alpha for possible use in realtime
applications.  It was strictly synchronous.  Turned out that if you
made a typing mistake or the like, one could not stop the stream of
error messages without doing a full reboot.  There was a Control-Z
command, but it could not be processed because the OS was otherwise
occupied with an endless loop.  Oops.  End of assessment.
>
*Jensen's* Alpha distributed processing across multiple domains.
So, "signals" had to chase the thread as it executed.  I have a
similar problem and rely on killing off a resource to notify
its consumers of its death and, thus, terminate their execution.
 Hmm.  I think that Jensen's Alpha is the one in the war story.  We
were tipped off about Alpha's problem with runaway blather by one of
Jensen's competitors.
Jensen is essentially an academic.  Wonderful ideas but largely impractical
(on current hardware).  "Those that CAN, *do*; those that CAN'T, *teach*?"

Of course, it can never be instantaneous as there are finite transit
delays to get from one node (where part of the process may be executing)
to another, etc.
>
But, my applications are intended to be run-to-completion, not
interactive.
 And thus not suitable for essentially all realtime application.
Again, what proof of that?  Transit delays are much shorter
today than 5 years ago, 10 years ago, 40 years ago.  They'll
be even shorter in the future.
Just because they are unsuitable for *a* SPECIFIC RT problem
doesn't rule them out for "essentially all".  Clearly not
applicable to pulling bytes off a transport at 6 microsecond
intervals.  But, controlling a tablet press?  Or, a production
line?  CNC mill?
This is how folks get misled in their ideas wrt RT -- they focus
on "fast"/"frequent" tasks with short release-to-deadline
times and generalize this as being characteristic of ALL systems.
There is nothing to prevent me from aborting a process after it has
been started (I have a colleague who codes like this, intentionally...
get the process started, THEN decide if it should continue).  I
have to continuously adjust how resources are deployed in my
system so it can expand to handle additional tasks without
resorting to overprovisioning.
If, for example, I start to retrain the speech recognizer and
some other "responsibility" comes up (motion detected on the
camera monitoring the approach to the front door), it would be
silly to waste resources continuing that speech training (which
could be deferred as its deadline is distant, in time, and
there is little decrease in value due to "lateness") at
the expense of hindering the recognition of the individual
approaching the door -- *he* likely isn't going to wait around
until I have time to sort out his identity!
So, kill the retraining task (it can be restarted from where
it left off because it was designed to be so) to immediately
free those resources for other use.  When they again become
available (here or on some other node that I may opt to
bring on-line *because* I see a need for more resources),
resume the retraining task.  When it completes, those resources
can be retired (made available to other tasks or nodes taken
offline to conserve power).
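A sketch of what "designed to be restarted from where it left off" can mean
in practice: checkpoint progress periodically, and bail out cleanly when told
to yield.  The storage and training hooks below are hypothetical:

    #include <stdbool.h>

    extern bool load_checkpoint(int *next_sample);   /* hypothetical store   */
    extern void save_checkpoint(int next_sample);
    extern void train_on_sample(int sample);
    extern bool must_yield(void);     /* set when resources are reclaimed    */

    #define TOTAL_SAMPLES     10000
    #define CHECKPOINT_PERIOD   100

    void retrain_recognizer(void)
    {
        int next = 0;
        (void)load_checkpoint(&next);         /* resume if we were killed    */

        for (; next < TOTAL_SAMPLES; next++) {
            train_on_sample(next);

            if (next % CHECKPOINT_PERIOD == 0)
                save_checkpoint(next + 1);    /* bound the work to redo      */

            if (must_yield()) {               /* higher-value work arrived   */
                save_checkpoint(next + 1);
                return;                       /* free the resources *now*    */
            }
        }
        save_checkpoint(TOTAL_SAMPLES);       /* finished                    */
    }

Killing the task costs at most one checkpoint period of redone work, which is
what makes it cheap to shed when something more valuable shows up.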

But, synchronous programming is far easier to debug as you don't
have to keep track of outstanding asynchronous requests that
might "return" at some arbitrary point in the future.  As the
device executing the method is not constrained by the realities
of the local client, there is no way to predict when it will
have a result available.
>
Well, the alternative is to use a different paradigm entirely, where
for every event type there is a dedicated responder, which takes the
appropriate course of action.  Mostly this does not involve any other
action type, but if necessary it is handled here.  Typically, the
overall architecture of this approach is a Finite State Machine.
>
But then you are constrained to having those *dedicated* agents.
What if a device goes down or is taken off line (maintenance)?
I address this by simply moving the object to another node that
has resources available to service its requests.
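For contrast, the dedicated-responder/FSM style described above often reduces
to a dispatch table indexed by (state, event), one handler per event type.
A toy C sketch (states, events and actions invented for illustration):

    #include <stdio.h>

    enum state { IDLE, TRACKING, NUM_STATES };
    enum event { EV_CONTACT, EV_LOST, NUM_EVENTS };

    typedef enum state (*responder_t)(enum state);

    static enum state on_contact(enum state s) { (void)s; puts("start track"); return TRACKING; }
    static enum state on_lost(enum state s)    { (void)s; puts("drop track");  return IDLE; }
    static enum state ignore(enum state s)     { return s; }   /* no action   */

    static const responder_t table[NUM_STATES][NUM_EVENTS] = {
        /*             EV_CONTACT   EV_LOST  */
        /* IDLE     */ { on_contact, ignore  },
        /* TRACKING */ { ignore,     on_lost },
    };

    static enum state dispatch(enum state s, enum event e)
    {
        return table[s][e](s);    /* dedicated responder, no waiting         */
    }

    int main(void)
    {
        enum state s = IDLE;
        s = dispatch(s, EV_CONTACT);    /* -> TRACKING                       */
        s = dispatch(s, EV_LOST);       /* -> IDLE                           */
        return 0;
    }

The table makes the event/action coverage explicit, but every responder is
pinned at compile time, which is exactly the "dedicated agent" constraint
being discussed.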
 One can design for such things, if needed.  It's called fault
tolerance (random breakage) or damage tolerance (also known as battle
damage).  But it's done in bespoke application code.
I have a more dynamic environment.  E.g., if power fails, which
physical nodes should I power down (as I have limited battery)?
For the nodes left operational, which tasks should be (re)deployed
to continue operation on them?  If someone (hostile actor?)
damages or interferes with a running node, how do I restore
the operations that were in progress on that node?

So, if the "Personnel" computer (above) had to go offline, I would
move all of the Paycheck objects to some other server that could
serve up "paycheck" objects.  The payroll program wouldn't be aware
of this as the handle for each "paycheck" would just resolve to
the same object but on a different server.
>
The advantage, here, is that you can draw on ALL system resources to
meet any demand instead of being constrained by the resources in
a particular "box".  E.g., my garage door opener can be tasked with
retraining the speech recognizer.  Or, controlling the HVAC!
 True, but not suited for many realtime applications.
In the IoT world, it is increasingly a requirement.  The current approach
of having little "islands" means that some more capable system must
be ALWAYS available to coordinate their activities and do any "heavy
lifting" for them.
The NEST thermostat has 64MB of memory and two processors.  And still
relies on a remote service to implement its intended functionality.
And, it can't "help" any other devices achieve THEIR goals even though
99% of its time is spent "doing nothing".  This leads to a binary decision
process:  do nodes share resources and cooperate to achieve goals that
exceed their individual capabilities?  Or, do they rely on some
external service for that "added value"?  (what happens when that
service is unavailable??)

This is driven by the fact that the real world has uncorrelated
events, capable of happening in any order, so no program that requires
that events be ordered can survive.
>
You only expect the first event you await to happen before you
*expect* the second.  That, because the second may have some (opaque)
dependence on the first.
>
Or, more commonly, be statistically uncorrelated random.  Like
airplanes flying into coverage as weather patterns drift by as flocks
of geese flap on by as ...
>
That depends on the process(es) being monitored/controlled.
E.g., in a tablet press, an individual tablet can't be compressed
until its granulation has been fed into its die (mold).
And, can't be ejected until it has been compressed.
>
So, there is an inherent order in these events, regardless of when
they *appear* to occur.
>
Sure, someone could be printing paychecks while I'm making
tablets.  But, the two processes don't interact so one cares
nothing about the other.
 Yes for molding plastic, but what about the above described use cases,
where one cannot make any such assumption?
There are no universal solutions.  Apply your missile defense system to
controlling the temperature in a home -- clearly an RT task (as WHEN you
turn the HVAC on and off has direct consequences to the comfort of the
occupants -- that being the reason they *have* a thermostat!).  How
large/expensive will it be and how many man-years to deploy?  Its
specific notion of timeliness and consequences are silly in the
context of HVAC control.
OTOH, an air handler that must maintain constant temperature
and humidity for a tablet coating process can't tolerate lateness, as
lateness alters the "complexion" of the air entering the process chamber.
So, a batch of tablets could be rendered useless because the AHU
wasn't up to the requirements stated by the process chemist.

There is a benchmark for message-passing in realtime software where
there is ring of threads or processes passing message around the ring
any number of times.  This is modeled on the central structure of many
kinds of radar.
>
So, like most benchmarks, is of limited *general* use.
>
True, but what's the point?  It is general for that class of problems,
when the intent is to eliminate operating systems that cannot work for
a realtime system.
>
What *proof* do you have of that assertion?  RT systems have been built
and deployed with synchronous interfaces for decades.  Even if those
are *implied* (i.e., using a FIFO/pipe to connect two processes).
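For concreteness, the ring benchmark can be built out of nothing but the
synchronous pipes just mentioned: N processes, each reading from its inbound
pipe and writing to the next, with a token circulating some number of laps.
Ring size and lap count here are arbitrary; timing instrumentation is left out:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NODES 8
    #define LAPS  1000

    int main(void)
    {
        int p[NODES][2];                       /* p[i] feeds node i          */

        for (int i = 0; i < NODES; i++)
            if (pipe(p[i]) < 0) { perror("pipe"); return 1; }

        for (int i = 1; i < NODES; i++) {
            if (fork() == 0) {                 /* node i: relay p[i] -> next */
                int out = (i + 1) % NODES;
                for (int j = 0; j < NODES; j++) {
                    if (j != i)   close(p[j][0]);
                    if (j != out) close(p[j][1]);
                }
                char c;
                while (read(p[i][0], &c, 1) == 1)
                    write(p[out][1], &c, 1);
                _exit(0);                      /* upstream closed: all done  */
            }
        }

        /* node 0 (this process): inject the token and count its laps        */
        for (int j = 1; j < NODES; j++) close(p[j][0]);
        for (int j = 0; j < NODES; j++) if (j != 1) close(p[j][1]);

        char c = 'x';
        for (int lap = 0; lap < LAPS; lap++) {
            write(p[1][1], &c, 1);
            read(p[0][0], &c, 1);              /* token went all the way     */
        }
        close(p[1][1]);                        /* let the ring drain and exit */
        while (wait(NULL) > 0)
            ;
        printf("token made %d laps around %d nodes\n", LAPS, NODES);
        return 0;
    }

Wrapping the lap loop with clock_gettime() turns this into the latency figure
such benchmarks report; swapping the pipes for whatever IPC the RTOS provides
is the point of the exercise.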
 The word "realtime" is wonderfully elastic, especially as used by
marketers.
 A better approach is by use cases.
 Classic test case.  Ownship is being approached by some number of
cruise missiles approaching at two or three times the speed of sound.
The ship is unaware of those missiles until they emerge from the
horizon. By the way, it will take a Mach 3 missile about thirty
seconds from detection to impact.  Now what?
Again with the missiles.  Have you done any RT systems OTHER than
missile defense?
Space probe is approaching Jupiter and scheduling the burn of its
main engine to adjust for orbital insertion.  It *knows* what it
has to do long before it gets to the "appointed time" (space).
Granted, it may have to fine tune the exact timing -- and the
length of the burn based on observations closer to the *deadline*.
And, it doesn't get a second chance -- a miss IS a mile!
Should it also worry about the possible presence of Klingons
nearby?
How you *define* RT shouldn't rely on use cases.  That should
be a refinement of the particular requirements of THAT use case
and not RT in general.
Witness tape drives, serial ports, tablet presses, deep space
probes.  ALL are RT problems.  Why is "missile defense" more
RT than any of these?  NORAD has a shitload of resources
dedicated to that task -- mainly because of the *cost* of
a missed deadline.  But, an ABS system has a similar cost to
the driver (or nearby people) if it fails to meet its
performance deadline.
Cost/value is just one of many ORTHOGONAL issues in a RT system.
Deadline frequency, nearness, regularity, etc. are others.
How you approach a deep space probe problem is different than
how you approach capturing bottles on a conveyor belt.  But,
the same issues are present in each case.

There are also tests for how the thread scheduler
works, to see if one can respond immediately to an event, or must wait
until the scheduler makes a pass (waiting is forbidden).
>
Waiting is only required if the OS isn't preemptive.  Whether or
not it is "forbidden" is a function of the problem space being addressed.
 Again, it's not quite that simple, as many RTOSs are not preemptive,
but they are dedicated and quite fast.  But preemptive is common these
days.
They try to compensate by hoping the hardware is fast enough
to give the illusion of timeliness in their response.  Just
like folks use nonRT OS's to tackle RT tasks -- HOPING they
run fast enough to *not* miss deadlines or tweaking other
aspects of the design to eliminate/minimize the probability
of those issues mucking with the intended response.  E.g.,
multimedia disk drives that defer thermal recalibration
cycles to provide for higher sustained throughput.

There are
many perfectly fine operating systems that will flunk these tests, and
yet are widely used.  But not for realtime.
>
Again, why not?  Real time only means a deadline exists.  It says
nothing about frequency, number, nearness, etc.  If the OS is
deterministic, then its behavior can be factored into the
solution.
 An OS can be deterministic, and still be unsuitable.  Many big compute
engine boxes have a scheduler that makes a sweep once a second, and
their definition of RT is to sweep ten times a second.  Which is
lethal in many RT applications.  So use a more suitable OS.
That's the "use a fast computer" approach to sidestep the issue of being
suitable for RT use.  For something << slower, it could provide acceptable
performance.  But, would likely be very brittle; if the needs of the
problem increased or the load on the solution increased, it would
magically and mysteriously break.

"Hard" or "soft"?  If I *missed* the datum, it wasn't strictly
"lost"; it just meant that I had to do a "read reverse" to capture
it coming back under the head.  If I had to do this too often,
performance would likely have been deemed unacceptable.
>
OTOH, if I needed the data regardless of how long it took, then
such an approach *would* be tolerated.
 How would this handle a cruise missile?  One can ask it to back up and
try again, but it's unclear that the missile is listening.
A missile is an HRT case -- not because it has explosive capabilities
but because there is no "value" (to use a Jensen-ism) to addressing
the problem AFTER its deadline.  It is just as ineffective at
catching playing cards falling out of your hands (which has far smaller
consequences).
Too often, people treat THEIR problem as HRT -- we *must* handle ALL
of these events -- when there are alternatives that acknowledge the
real possibility that some events will be missed.  It's just *easier*
to make that claim as it gives you cover to request (require!)
more resources to meet all of those needs.
Apparently, Israel is learning that they can't handle all the
incoming projectiles/vessels that they may encounter, now.
Poor system design?  Or, the adversary has realized that
there are limits to the defensive systems.... limits that they
can overwhelm.
Acknowledging (to yourself) that there are limits is empowering.
It lets you use the resources that you have to protect more
*value*.  Do you stop the incoming missile targeting your
ammunition dump, civilian lodging, military lodging or
financial sector?  Pick one because you (demonstrably) can't
defend *all*.
Resilient RT systems make these decisions dynamically (do I
finish retraining the speech recognizer, or recognize the visitor?).
They do so by understanding the instantaneous limits to their
abilities and adjusting accordingly.  (but, you have to be able
to quantify those limits, as they change!)
