War story: I used to run an Operating System Section, and one thing
we needed to develop was hardware memory test programs for use in the
factory. We had a hell of a lot of trouble getting this done because
our programmers point-blank refused to do such test programs.
That remains true of *most* "programmers". It is not seen as
an essential part of their job.
Nor could you convince them. They wanted to do things that were hard,
important, and interesting. And look good on a resume.

One has to remember who the *boss* is in any relationship.
One fine day, it occurred to me that the problem was that we were
trying to use race horses to pull plows. So I went out to get the
human equivalent of a plow horse, one that was a tad autistic and so
would not be bored. This worked quite well. Fit the tool to the job.
I had a boss who would have approached it differently. He would have
had one of the technicians "compromise" their prototype/development
hardware. E.g., crack a few *individual* cores to render them ineffective
as storage media. Then, let the prima donnas chase mysterious bugs
in *their* code. Until they started to question the functionality of
the hardware. "Gee, sure would be nice if we could *prove* the
hardware was at fault. Otherwise, it *must* just be bugs in your code!"
Actually, the customers often *required* us to inject faults to prove
that our spiffy fault-detection logic actually worked. It's a lot
harder than it looks.

Yup. I have to validate my DRAM test routines. How do you cause a DRAM
to fail, on demand?
Real men used teletype machines, which required two real men to lift.
OTOH, I never had the opportunity to use a glass TTY until AFTER
college (prior to that, everything was hardcopy output -- DECwriters,
Trendata 1200's, etc.)

I remember them well.

I have one in the garage. Hard to bring myself to part with it...
And we needed multi-precision integer arithmetic for many things,
using scaled binary to handle the needed precision and dynamic range.
Yes. It is still used (Q-notation) in cases where your code may not want
to rely on FPU support and/or has to run really fast. I make extensive
use of it in my gesture recognizer where I am trying to fit sampled
points to the *best* of N predefined curves, in a handful of milliseconds
(interactive interface).
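For anyone who hasn't bumped into it, a minimal C sketch of the
scaled-binary (Q-notation) idea -- a Q16.16 format here, with all names
and values invented for illustration:

/* Q16.16 fixed point: 16 integer bits, 16 fraction bits.  Illustrative
   only; a real implementation would also pick a rounding/saturation
   policy. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;

#define Q_ONE          (1 << 16)                  /* 1.0 in Q16.16 */
#define FLOAT_TO_Q(f)  ((q16_16)((f) * Q_ONE))
#define Q_TO_FLOAT(q)  ((double)(q) / Q_ONE)

/* multiply: widen to 64 bits, then shift out the extra 16 fraction bits */
static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    q16_16 x = FLOAT_TO_Q(3.25);
    q16_16 y = FLOAT_TO_Q(0.5);
    q16_16 z = q_mul(x, y) + FLOAT_TO_Q(1.0);     /* 3.25 * 0.5 + 1.0 */

    printf("%f\n", Q_TO_FLOAT(z));                /* prints 2.625000 */
    return 0;
}

The >>16 in the multiply is the whole trick: the product of two Q16.16
values carries 32 fraction bits and has to be rescaled. No FPU required,
and the cost is a couple of integer instructions.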
BTDT, though we fitted other curves to approximate such as trig
functions.

Likely the data points didn't change every second as they do with
...
In the 1970s, there was no such thing as such an appliance.
Anything that performed a fixed task. My first commercial product
was a microprocessor-based LORAN position plotter (mid 70's).
Now, we call them "deeply embedded" devices -- or, my preference,
"appliances".
Well, in the radar world, the signal and data processors would be
fixed-task, but they were neither small nor simple.

Neither particular size nor complexity is required. Rather, that
the task is *fixed*.
A context is then just a bag of (name, object) tuples and
a set of rules for the resolver that operates on that context.
>
So, a program that is responsible for printing paychecks would
have a context *created* for it that contained:
Clock -- something that can be queried for the current time/date
Log -- a place to record its actions
Printer -- a device that can materialize the paychecks
and, some *number* of:
Paycheck -- a description of the payee and amount
i.e., the name "Paycheck" need not be unique in this context
(why artificially force each paycheck to have a unique name
just because you want to use an archaic namespace concept to
bind an identifier/name to each? they're ALL just "paychecks")
>
// resolve the objects governing the process
theClock = MyContext=>resolve("Clock")
theLog = MyContext=>resolve("Log")
thePrinter = MyContext=>resolve("Printer")
theDevice = thePrinter=>FriendlyName()

// process each paycheck
while( thePaycheck = MyContext=>resolve("Paycheck") ) {
    // get the parameters of interest for this paycheck
    thePayee = thePaycheck=>payee()
    theAmount = thePaycheck=>amount()
    theTime = theClock=>now()

    // print the check
    thePrinter=>write("Pay to the order of "
        , thePayee
        , " EXACTLY "
        , stringify(theAmount)
        )

    // make a record of the transaction
    theLog=>write("Drafted a disbursement to "
        , thePayee
        , " in the amount of "
        , theAmount
        , " at "
        , theTime
        , " printed on "
        , theDevice
        )

    // discard the processed paycheck
    MyContext=>unlink(thePaycheck)
}

// no more "Paycheck"s to process
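To make the resolve()/unlink() mechanics concrete, here is a toy,
single-address-space rendition in C. The real thing would be distributed
and capability-checked; none of that is shown, and every name below is
invented:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct binding {                     /* one (name, object) tuple */
    const char     *name;
    void           *object;
    struct binding *next;
};

struct context { struct binding *head; };

static void ctx_bind(struct context *ctx, const char *name, void *object)
{
    struct binding *b = malloc(sizeof *b);
    b->name = name;
    b->object = object;
    b->next = ctx->head;
    ctx->head = b;
}

/* hand back *some* object bound to `name`; names need not be unique */
static void *ctx_resolve(struct context *ctx, const char *name)
{
    for (struct binding *b = ctx->head; b; b = b->next)
        if (strcmp(b->name, name) == 0)
            return b->object;
    return NULL;
}

/* remove the binding for an object that has been fully processed */
static void ctx_unlink(struct context *ctx, void *object)
{
    for (struct binding **p = &ctx->head; *p; p = &(*p)->next)
        if ((*p)->object == object) {
            struct binding *dead = *p;
            *p = dead->next;
            free(dead);
            return;
        }
}

int main(void)
{
    struct context payroll = { 0 };
    char alice[] = "Alice", bob[] = "Bob";   /* stand-ins for Paycheck objects */

    ctx_bind(&payroll, "Paycheck", alice);
    ctx_bind(&payroll, "Paycheck", bob);

    void *check;
    while ((check = ctx_resolve(&payroll, "Paycheck")) != NULL) {
        printf("printing paycheck for %s\n", (char *)check);
        ctx_unlink(&payroll, check);         /* consume it, as in the loop above */
    }
    return 0;                                /* no more "Paycheck"s to process */
}

The point is only that "Paycheck" is not a unique key: resolve() hands
back *some* object so bound, and unlink() consumes it.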
Shouldn't this be written in COBOL running on an IBM mainframe running
CICS (Customer Information Control System, a general-purpose
transaction processing subsystem for the z/OS operating system)? This
is where the heavy lifting is done in systems such as payroll generation.

It's an example constructed, on-the-fly, to illustrate how problems
...
No need to run this process as a particular UID and configure
the "files" in the portion of the file system hierarchy that
you've set aside for its use -- hoping that no one ELSE will
be able to peek into that area and harvest this information.
>
No worry that the process might go rogue and try to access
something it shouldn't -- like the "password" file -- because
it can only access the objects for which it has been *given*
names and only manipulate each of those through the capabilities
that have been bound to those handle *instances* (i.e., someone
else, obviously, has the power to create their *contents*!)
>
This is conceptually much cleaner. And, matches the way you
would describe "printing paychecks" to another individual.
Maybe so, but conceptual clarity does not pay the rent or meet the
payroll. Gotta get to the church on time. Every time.

Conceptual clarity increases the likelihood that the *right* problem
gets solved.
Pascal uses this exact approach. The absence of true pointers is
crippling for hardware control, which is a big part of the reason that
C prevailed.
I don't eschew pointers. Rather, if the object being referenced can
be remote, then a pointer is meaningless; what value should the pointer
have if the referenced object resides in some CPU at some address in
some address space at the end of a network cable?
Remote meaning accessed via a comms link or LAN is not done using RMIs
in my world - too slow and too asynchronous. Round-trip transit delay
would kill you. Also, not all messages need guaranteed delivery, and
it's expensive to provide that guarantee, so there need to be
distinctions.
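One common shape for such a reference is a handle that *names* the
object rather than locating it in local memory; "dereferencing" it means
sending a message. A hypothetical C sketch -- every field and function
here is invented, and marshalling/transport are elided:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A remote reference: meaningful on any node, unlike a raw pointer. */
struct remote_ref {
    uint32_t node;          /* which CPU/host currently serves the object */
    uint64_t object;        /* object id, unique on that node             */
    uint32_t capabilities;  /* which methods this *handle* may invoke     */
};

/* There is no address to follow; the stub below only shows the shape
   of the call. */
static int remote_invoke(struct remote_ref target, uint32_t method,
                         const void *args, size_t args_len)
{
    if (!(target.capabilities & (1u << method)))
        return -1;                           /* this handle lacks that right */

    printf("send to node %u, object %llu: method %u, %zu bytes of args\n",
           (unsigned)target.node, (unsigned long long)target.object,
           (unsigned)method, args_len);
    return 0;                                /* reply handling elided */
}

int main(void)
{
    struct remote_ref printer = { .node = 7, .object = 42, .capabilities = 0x3 };
    const char text[] = "Pay to the order of ...";

    return remote_invoke(printer, 0 /* write */, text, strlen(text));
}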
Horses for courses. "Real Time" only means that a deadline exists
for a task to complete. It cares nothing about how "immediate" or
"often" such deadlines occur. A deep space probe has deadlines
regarding when it must make orbital adjustment procedures
(flybys). They may be YEARS in the future. And, only a few
in number. But, miss them and the "task" is botched.
Actually, that is not what "realtime" means in real-world practice.

The whole fetish about deadlines and deadline scheduling is an
academic fantasy. The problem is that such systems are quite fragile
- if a deadline is missed, even slightly, the system collapses.
Which is intolerable in practice, so there was always a path to handle
the occasional overrun gracefully.

One has to *know* that there was an overrun in order to know that
...

No. That's a brittle system. If you miss a single incoming missile,
...
Of course, if something happened mid file, you were now faced
with the problem of tracking WHERE you had progressed and
restarting from THAT spot, exactly. In my implementation,
you just restart the program and it processes any "Paycheck"s
that haven't yet been unlinked from its namespace.
Restart? If you are implementing a defense against incoming
supersonic missiles, you just died. RIP.

Only if *you* were targeted by said missile. If some other asset
...
For asynchronous services, I would create a separate thread just
to handle those replies as I wouldn't want my main thread having to
be "interrupted" by late arriving messages that *it* would have to
process. The second thread could convert those messages into
flags (or other data) that the main thread could examine when it
NEEDED to know about those other activities.
>
E.g., the first invocation of the write() method on "thePrinter"
could have caused the process that *implements* that printer
to power up the printer. While waiting for it to come on-line,
it could buffer the write() requests so that they would be ready
when the printer actually *did* come on-line.
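A bare-bones sketch of that arrangement using POSIX threads and a C11
atomic flag; the late-arriving "reply" is faked with a sleep, and all
names are illustrative:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool printer_online = 0;

/* helper thread: fields the asynchronous reply so the main thread never
   has to be interrupted by it */
static void *reply_handler(void *arg)
{
    (void)arg;
    sleep(2);                            /* stand-in for "printer came on-line" */
    atomic_store(&printer_online, 1);    /* convert the message into a flag */
    return NULL;
}

int main(void)
{
    pthread_t helper;
    pthread_create(&helper, NULL, reply_handler, NULL);

    /* main thread goes about its business, buffering work... */
    for (int i = 0; i < 3; i++) {
        printf("queueing write() #%d\n", i);
        sleep(1);
    }

    /* ...and only consults the flag when it NEEDS to know */
    if (atomic_load(&printer_online))
        puts("printer is up: flush the buffered writes");
    else
        puts("printer still warming up: keep buffering");

    pthread_join(helper, NULL);
    return 0;
}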
This is how some early OO systems worked, and the time to switch
contexts between processes was quite long, so long that it was unable
to scan the horizon for incoming missiles fast enough to matter.

That's a consequence of nearness of deadline vs. system resources
...
War story: Some years later, in the late 1980s, I was asked to assess
an academic operating system called Alpha for possible use in realtime
applications. It was strictly synchronous. Turned out that if you
made a typing mistake or the like, one could not stop the stream of
error messages without doing a full reboot. There was a Control-Z
command, but it could not be processed because the OS was otherwise
occupied with an endless loop. Oops. End of assessment.
*Jensen's* Alpha distributed processing across multiple domains.
So, "signals" had to chase the thread as it executed. I have a
similar problem and rely on killing off a resource to notify
its consumers of its death and, thus, terminate their execution.
Hmm. I think that Jensen's Alpha is the one in the war story. We
were tipped off about Alpha's problem with runaway blather by one of
Jensen's competitors.

Jensen is essentially an academic. Wonderful ideas but largely impractical
...
Of course, it can never be instantaneous as there are finite transit
delays to get from one node (where part of the process may be executing)
to another, etc.
>
But, my applications are intended to be run-to-completion, not
interactive.

And thus not suitable for essentially all realtime applications.

Again, what proof of that? Transit delays are much shorter
...
But, synchronous programming is far easier to debug as you don't
have to keep track of outstanding asynchronous requests that
might "return" at some arbitrary point in the future. As the
device executing the method is not constrained by the realities
of the local client, there is no way to predict when it will
have a result available.
Well, the alternative is to use a different paradigm entirely, where
for every event type there is a dedicated responder, which takes the
appropriate course of action. Mostly this does not involve any other
action type, but if necessary it is handled here. Typically, the
overall architecture of this approach is a Finite State Machine.
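A skeleton of that dedicated-responder-per-event-type idea in C -- the
event names and handlers are invented, and a full FSM would key the
table on the current state as well:

#include <stdio.h>

enum event { EV_POWER_FAIL, EV_DEVICE_OFFLINE, EV_WORK_ARRIVED, EV_COUNT };

static void on_power_fail(void)     { puts("saving state, shedding load"); }
static void on_device_offline(void) { puts("rerouting work to another node"); }
static void on_work_arrived(void)   { puts("queueing the new work item"); }

/* one dedicated responder per event type */
static void (*const responder[EV_COUNT])(void) = {
    [EV_POWER_FAIL]     = on_power_fail,
    [EV_DEVICE_OFFLINE] = on_device_offline,
    [EV_WORK_ARRIVED]   = on_work_arrived,
};

static void dispatch(enum event ev)
{
    if (ev < EV_COUNT && responder[ev])
        responder[ev]();                 /* events handled in arrival order */
}

int main(void)
{
    /* events can arrive in any order; each goes straight to its responder */
    dispatch(EV_WORK_ARRIVED);
    dispatch(EV_POWER_FAIL);
    return 0;
}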
But then you are constrained to having those *dedicated* agents.
What if a device goes down or is taken off line (maintenance)?
I address this by simply moving the object to another node that
has resources available to service its requests.
One can design for such things, if needed. It's called fault
tolerance (random breakage) or damage tolerance (also known as battle
damage). But it's done in bespoke application code.

I have a more dynamic environment. E.g., if power fails, which
...
So, if the "Personnel" computer (above) had to go offline, I would
move all of the Paycheck objects to some other server that could
serve up "paycheck" objects. The payroll program wouldn't be aware
of this as the handle for each "paycheck" would just resolve to
the same object but on a different server.
>
The advantage, here, is that you can draw on ALL system resources to
meet any demand instead of being constrained by the resources in
a particular "box". E.g., my garage door opener can be tasked with
retraining the speech recognizer. Or, controlling the HVAC!

True, but not suited for many realtime applications.

In the IoT world, it is increasingly a requirement. The current approach
...
This is driven by the fact that the real world has uncorrelated
events, capable of happening in any order, so no program that requires
that events be ordered can survive.
You only expect the first event you await to happen before you
*expect* the second. That, because the second may have some (opaque)
dependence on the first.
Or, more commonly, be statistically uncorrelated random. Like
airplanes flying into coverage as weather patterns drift by as flocks
of geese flap on by as ...
That depends on the process(es) being monitored/controlled.
E.g., in a tablet press, an individual tablet can't be compressed
until its granulation has been fed into its die (mold).
And, can't be ejected until it has been compressed.
>
So, there is an inherent order in these events, regardless of when
they *appear* to occur.
>
Sure, someone could be printing paychecks while I'm making
tablets. But, the two processes don't interact so one cares
nothing about the other.
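That inherent ordering is easy to enforce mechanically; a trivial sketch
(states and events invented for illustration) in which, whatever order
events *appear* to arrive, only the inherently next one is accepted:

#include <stdio.h>

enum state { EMPTY, FILLED, COMPRESSED };
enum event { FILL, COMPRESS, EJECT };

static int step(enum state *s, enum event e)
{
    switch (*s) {
    case EMPTY:      if (e == FILL)     { *s = FILLED;     return 0; } break;
    case FILLED:     if (e == COMPRESS) { *s = COMPRESSED; return 0; } break;
    case COMPRESSED: if (e == EJECT)    { *s = EMPTY;      return 0; } break;
    }
    return -1;                        /* out-of-order event: reject/flag it */
}

int main(void)
{
    enum state die = EMPTY;
    printf("eject first: %s\n", step(&die, EJECT)    ? "rejected" : "ok");
    printf("fill:        %s\n", step(&die, FILL)     ? "rejected" : "ok");
    printf("compress:    %s\n", step(&die, COMPRESS) ? "rejected" : "ok");
    printf("eject:       %s\n", step(&die, EJECT)    ? "rejected" : "ok");
    return 0;
}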
Yes for molding plastic, but what about the above described use cases,
where one cannot make any such assumption?

There are no universal solutions. Apply your missile defense system to
...
There is a benchmark for message-passing in realtime software where
there is a ring of threads or processes passing a message around the ring
any number of times. This is modeled on the central structure of many
kinds of radar.
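For reference, a bare-bones POSIX-threads rendition of such a ring --
the thread count, lap count, and names are arbitrary, and a real
benchmark would time the laps rather than just run them:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define NLAPS    1000

static struct mailbox {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             has_token;
    long            value;               /* hops remaining; negative = quit */
} box[NTHREADS];

static void *worker(void *arg)
{
    long id = *(long *)arg, next = (id + 1) % NTHREADS;

    for (;;) {
        long v;

        pthread_mutex_lock(&box[id].lock);        /* wait for the token */
        while (!box[id].has_token)
            pthread_cond_wait(&box[id].ready, &box[id].lock);
        box[id].has_token = 0;
        v = box[id].value;
        pthread_mutex_unlock(&box[id].lock);

        if (v == 0)                               /* laps done: poison the ring */
            v = -1;

        pthread_mutex_lock(&box[next].lock);      /* pass it along */
        box[next].value = (v > 0) ? v - 1 : v;
        box[next].has_token = 1;
        pthread_cond_signal(&box[next].ready);
        pthread_mutex_unlock(&box[next].lock);

        if (v < 0)                                /* poison seen: this thread exits */
            return NULL;
    }
}

int main(void)
{
    pthread_t tid[NTHREADS];
    long      idx[NTHREADS];

    for (long i = 0; i < NTHREADS; i++) {
        idx[i] = i;
        pthread_mutex_init(&box[i].lock, NULL);
        pthread_cond_init(&box[i].ready, NULL);
        pthread_create(&tid[i], NULL, worker, &idx[i]);
    }

    pthread_mutex_lock(&box[0].lock);             /* inject the token at thread 0 */
    box[0].value = (long)NTHREADS * NLAPS;
    box[0].has_token = 1;
    pthread_cond_signal(&box[0].ready);
    pthread_mutex_unlock(&box[0].lock);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    puts("token completed its laps");
    return 0;
}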
So, like most benchmarks, is of limited *general* use.
True, but what's the point? It is general for that class of problems,
when the intent is to eliminate operating systems that cannot work for
a realtime system.
What *proof* do you have of that assertion? RT systems have been built
and deployed with synchronous interfaces for decades. Even if those
are *implied* (i.e., using a FIFO/pipe to connect two processes).
The word "realtime" is wonderfully elastic, especially as used by
marketers.

A better approach is by use cases.

Classic test case. Ownship is being approached by some number of
cruise missiles approaching at two or three times the speed of sound.
The ship is unaware of those missiles until they emerge from the
horizon. By the way, it will take a Mach 3 missile about thirty
seconds from detection to impact. Now what?

Again with the missiles. Have you done any RT systems OTHER than
...
There are also tests for how the thread scheduler
works, to see if one can respond immediately to an event, or must wait
until the scheduler makes a pass (waiting is forbidden).
Waiting is only required if the OS isn't preemptive. Whether or
not it is "forbidden" is a function of the problem space being addressed.
Again, it's not quite that simple, as many RTOSs are not preemptive,
but they are dedicated and quite fast. But preemptive is common these
days.

They try to compensate by hoping the hardware is fast enough
...
There are many perfectly fine operating systems that will flunk these
tests, and
yet are widely used. But not for realtime.
Again, why not? Real time only means a deadline exists. It says
nothing about frequency, number, nearness, etc. If the OS is
deterministic, then its behavior can be factored into the
solution.
engine boxes have a scheduler that makes a sweep once a second, and
their definition of RT is to sweep ten times a second. Which is
lethal in many RT applications. So use a more suitable OS.

That's the "use a fast computer" approach to sidestep the issue of being
...
"Hard" or "soft"? If I *missed* the datum, it wasn't strictly
"lost"; it just meant that I had to do a "read reverse" to capture
it coming back under the head. If I had to do this too often,
performance would likely have been deemed unacceptable.
>
OTOH, if I needed the data regardless of how long it took, then
such an approach *would* be tolerated.
How would this handle a cruise missile? One can ask it to back up and
try again, but it's unclear that the missile is listening.

A missile is an HRT case -- not because it has explosive capabilities
...