On 10/19/2024 3:26 PM, Joe Gwinn wrote:
>>Will an average *coder* (someone who has managed to figure out how to
>>get a program to run and proclaimed himself a coder thereafter)
>>see the differences in his "product" (i.e., the code he has written)
>>and the blemishes/shortcomings it contains?
>Well, we had the developers we had, and the team was large enough that
>they cannot all be superstars in any language.
And business targets "average performers" as the effort to hire
and retain "superstars" limits what the company can accomplish.
>I was still scratching my head about why Pascal was so different than
>C, so I looked for the original intent of the founders. Which I found
>in the Introductions in the Pascal Report and K&R C: Pascal was
>intended for teaching Computer Science students their first
>programming language, while C was intended for implementing large
>systems, like the Unix kernel.
Wirth maintained a KISS attitude in ALL of his endeavors. He
failed to see that requiring forward declarations wasn't really
making it any simpler /for the coders/. Compilers get written
and revised "a few times" but *used* thousands of times. Why
favor the compiler writer over the developer?
>Because computers were quite expensive then (circa 1982), and so
>Pascal was optimized to eliminate as much of the compiler task as
>possible, given that teaching languages are used to solve toy
>problems, the focus being learning to program, not to deliver
>efficient working code for something industrial-scale in nature.
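To make the forward-declaration point concrete, a minimal C sketch
(function names invented for illustration): a one-pass compiler can only
check a call against a declaration it has already seen, so anything
mutually recursive, or simply defined later in the file, forces the coder
to repeat himself.

  #include <stdio.h>

  /* Forward declaration: a one-pass compiler reaching the call in
     is_even() below would otherwise not yet know is_odd()'s type. */
  static int is_odd(unsigned n);

  static int is_even(unsigned n)
  {
      return (n == 0) ? 1 : is_odd(n - 1);
  }

  static int is_odd(unsigned n)
  {
      return (n == 0) ? 0 : is_even(n - 1);
  }

  int main(void)
  {
      printf("7 is %s\n", is_even(7) ? "even" : "odd");
      return 0;
  }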
I went to school in the mid 70's. Each *course* had its own
computer system (in addition to the school-wide "computing service")
because each professor had his own slant on how he wanted to
teach his courseware. We wrote code in Pascal, PL/1, LISP, Algol,
Fortran, SNOBOL, and a variety of "toy" languages designed to
illustrate specific concepts and OS approaches. I can't recall
compile time ever being an issue (but, the largest classes had
fewer than 400 students).
>Prior operating systems were all written in assembly code, and so were
>not portable between vendors, so Unix needed to be written in
>something that could be ported, and yet was sufficient to implement an
>OS kernel. Nor can one write an OS in Pascal.
You can write an OS in Pascal -- but with lots of "helper functions"
that defeat the purpose of the HLL's "safety mechanisms".
>Yes, lots. They were generally written in assembler, and it was
>estimated that about 20% of the code would have to be in assembly if
>Pascal were used, based on a prior project that had done just that a
>few years earlier.
Yes. The same is true of eking out the last bits of performance
from OSs written in C. There are too many hardware oddities that
languages can't realistically address (without tying themselves
unduly to a particular architecture).
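For a concrete picture of what those "helper functions" had to cover: in
C, driving a memory-mapped device register is a single volatile store,
something classic Pascal had no sanctioned way to express without
dropping into assembler. A sketch only; the register name and address
below are made up.

  #include <stdint.h>

  /* Hypothetical memory-mapped UART transmit register; the address
     is invented for illustration. */
  #define UART_TX (*(volatile uint8_t *)0x4000C000u)

  static void uart_putc(char c)
  {
      UART_TX = (uint8_t)c;   /* direct store to the device register */
  }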
>The target computers were pretty spare, multiple Motorola 68000
>single-board computers in a VME crate or the like. I recall that a
>one megahertz instruction rate was considered really fast then.
Even the 645 ran at ~500KHz (!). Yet, it supported hundreds of users
doing all sorts of different tasks. (I think the 6180 ran at
~1MHz).
But, each of these could exploit the fact that users
don't consume all of the resources available /at any instant/
on a processor.
>
Contrast that with moving to the private sector and having
an 8b CPU hosting your development system (with dog slow
storage devices).
>Much was made by the Pascal folk of the cost of software maintenance,
>but on the scale of a radar, maintenance was dominated by the
>hardware, and software maintenance was a roundoff error on the total
>cost of ownership. The electric bill was also larger.
There likely is less call for change in such an "appliance".
Devices with richer UIs tend to see more feature creep.
This was one of Wirth's pet peeves; the fact that "designers"
were just throwing features together instead of THINKING about
which were truly needed. E.g., Oberon looks like something
out of the 1980's...
>This did work - only something like 4% of Unix had to be written in
>assembly, and it was simply rewritten for each new family of
>computers. (Turned out to be 6%.)
>The conclusion was to use C: It was designed for the implementation
>of large realtime systems, while Pascal was designed as a teaching
>language, and is somewhat slow and awkward for realtime systems,
>forcing the use of various sidesteps, and much assembly code. Speed
>and the ability to drive hardware directly are the dominant issues
>controlling that part of development cost and risk that is sensitive
>to choice of implementation language.
One can write reliable code in C. But, there has to be discipline
imposed (self or otherwise). Having an awareness of the underlying
hardware goes a long way to making this adjustment.
>
I had to write a driver for a PROM Programmer in Pascal. It was
a dreadful experience! And, required an entirely different
mindset. Things that you would do in C (or ASM) had incredibly
inefficient analogs in Pascal.
>
E.g., you could easily create an ASCII character for a particular
hex-digit and concatenate these to form a "byte"; then those
to form a word/address, etc. (imagine doing that for every byte
you have to ship across to the programmer!) In Pascal, you spent
all your time in call/return instead of actually doing any *work*!
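In C, that inner step is a pair of table lookups and stores per byte; a
minimal sketch (not the actual driver):

  #include <stdio.h>

  /* Render one byte as two ASCII hex digits. */
  static void byte_to_hex(unsigned char b, char out[2])
  {
      static const char digits[] = "0123456789ABCDEF";

      out[0] = digits[b >> 4];     /* high nibble */
      out[1] = digits[b & 0x0F];   /* low nibble  */
  }

  int main(void)
  {
      char hex[2];

      byte_to_hex(0x3A, hex);
      printf("%c%c\n", hex[0], hex[1]);   /* prints 3A */
      return 0;
  }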
>So the Pascal crowd fell silent, and C was chosen and successfully
>used.
>
>The Ada Mandate was rescinded maybe ten years later. The ISO-OSI
>mandate fell a year or so later, slain by TCP/IP.
I had to make a similar decision, early on. It's really easy to get
on a soapbox and preach how it *should* be done. But, if you expect
(and want) others to adopt and embellish your work, you have to choose
an implementation that they will accept, if not "embrace".
>
And, this without requiring scads of overhead (people and other
resources) to accomplish a particular goal.
>
Key in this is figuring out how to *hide* complexity so a user
(of varying degrees of capability across a wide spectrum) can
get something to work within the constraints you've laid out.
>Hidden complexity is still complexity, with complex failure modes
>rendered incomprehensible and random-looking to those unaware of
>what's going on behind the pretty facade.
If you can't explain the bulk of a solution "seated, having a drink",
then it is too complex. "Complex is anything that doesn't fit in a
single brain".
Explain how the filesystem on <whatever> works, internally. How
does it layer onto storage media? How are "devices" hooked into it?
Abstract mechanisms like pipes? Where does buffering come into
play? ACLs?
This is "complex" because a legacy idea has been usurped to tie
all of these things together.
>I prefer to eliminate such complexity. And not to confuse the
>programmers, or treat them like children.
By picking good abstractions, you don't have to do either.
But, you can't retrofit those abstractions to existing
systems. And, too often, those systems have "precooked"
mindsets.
>War story from the days of Fortran, when I was the operating system
>expert: I had just these debates with the top application software
>guy, who claimed that all you needed was the top-level design of the
>software to debug the code.
>He had been struggling with a mysterious bug, where the code would
>[hang] soon after launch, every time. Code inspection and path tracing
>had all failed, for months. He challenged me to figure it out. I
>figured it out in ten minutes, by using OS-level tools, which provide
>access to a world completely unknown to the application software folk.
>The problem was how the compiler handled subroutines referenced in one
>module but not provided to the linker. Long story, but the resulting
>actual execution path was unrelated to the design of application
>software, and one had to see things in assembly to understand what was
>happening.
>(This war story has been repeated in one form or another many times
>over the following years. Have kernel debugger, will travel.)
E.g., as I allow end users to write code (scripts), I can't
assume they understand things like operator precedence, cancellation,
etc. *I* have to address those issues in a way that allows them
to remain ignorant and still get the results they expect/desire.
>
The same applies to other "more advanced" levels of software
development; the more minutiae that the developer has to contend with,
the less happy he will be about the experience.
>
[E.g., I modified my compiler to support a syntax of the form:
    handle=>method(arguments)
an homage to:
    pointer->member(arguments)
where "handle" is an identifier (small integer) that uniquely references
an object in the local context /that may reside on another processor/
(which means the "pointer" approach is inappropriate) so the developer
doesn't have to deal with the RMI mechanisms.]
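Roughly what that sugar might lower to is a handle-table lookup plus a
marshalled send; a toy C sketch (the names, table layout, and transport
stub here are invented, not the actual compiler output):

  #include <stdint.h>
  #include <stddef.h>

  /* A "handle" is a small integer indexing a table that records where
     the object actually lives, so  handle=>method(args)  can become
     rmi_call(handle, METHOD_ID, args, len) without the coder ever
     touching the transport. */

  typedef struct {
      uint32_t node;     /* which processor hosts the object     */
      uint32_t object;   /* object id within that node's context */
  } object_ref;

  #define MAX_HANDLES 256
  static object_ref handle_table[MAX_HANDLES];

  /* Stand-in for whatever marshalling/transport the runtime uses. */
  static int send_request(uint32_t node, uint32_t object,
                          uint32_t method, const void *args, size_t len)
  {
      (void)node; (void)object; (void)method; (void)args; (void)len;
      return 0;
  }

  int rmi_call(unsigned handle, uint32_t method,
               const void *args, size_t len)
  {
      if (handle >= MAX_HANDLES)
          return -1;                       /* no such object */
      return send_request(handle_table[handle].node,
                          handle_table[handle].object,
                          method, args, len);
  }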
>Pascal uses this exact approach. The absence of true pointers is
>crippling for hardware control, which is a big part of the reason that
>C prevailed.
I don't eschew pointers. Rather, if the object being referenced can
be remote, then a pointer is meaningless; what value should the pointer
have if the referenced object resides in some CPU at some address in
some address space at the end of a network cable?
>I assume that RMI is Remote Module or Method Invocation. These are

The latter. Like RPC (instead of IPC) but in an OOPS context.

>inherently synchronous (like Ada rendezvous) and are crippling for
>realtime software of any complexity - the software soon ends up
>deadlocked, with everybody waiting for everybody else to do something.
There is nothing that inherently *requires* an RMI to be synchronous.
This is only necessary if the return value is required, *there*.
E.g., actions that likely will take a fair bit of time to execute
are often more easily implemented as asynchronous invocations
(e.g., node127=>PowerOn()). But, these need to be few enough that the
developer can keep track of "outstanding business"; expecting every
remote interaction to be asynchronous means you end up having to catch
a wide variety of diverse replies and sort out how they correlate
with your requests (that are now "gone"). Many developers have a hard
time trying to deal with this decoupled cause-effect relationship...
especially if the result is a failure indication (How do I
recover now that I've already *finished* executing that bit of code?)
But, synchronous programming is far easier to debug as you don't
have to keep track of outstanding asynchronous requests that
might "return" at some arbitrary point in the future. As the
device executing the method is not constrained by the realities
of the local client, there is no way to predict when it will
have a result available.
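A sketch of the bookkeeping that fully asynchronous invocation pushes
onto the developer: every outstanding request gets a tag, and each reply
has to be matched back to the "outstanding business" it belongs to (all
names here are invented for illustration):

  #include <stdint.h>
  #include <string.h>

  typedef void (*reply_fn)(int status, void *ctx);

  typedef struct {
      uint32_t tag;        /* 0 = slot free                       */
      reply_fn on_reply;   /* what to do when the answer shows up */
      void    *ctx;
  } pending;

  #define MAX_PENDING 32
  static pending table[MAX_PENDING];
  static uint32_t next_tag = 1;

  /* Record an outstanding request; returns its tag, or 0 if full. */
  uint32_t async_begin(reply_fn fn, void *ctx)
  {
      for (int i = 0; i < MAX_PENDING; i++) {
          if (table[i].tag == 0) {
              table[i].tag = next_tag++;
              table[i].on_reply = fn;
              table[i].ctx = ctx;
              return table[i].tag;
          }
      }
      return 0;   /* too much outstanding business */
  }

  /* Called when a reply (tag, status) arrives -- in any order. */
  void async_complete(uint32_t tag, int status)
  {
      for (int i = 0; i < MAX_PENDING; i++) {
          if (table[i].tag == tag) {
              reply_fn fn = table[i].on_reply;
              void *ctx  = table[i].ctx;

              memset(&table[i], 0, sizeof table[i]);
              fn(status, ctx);   /* a failure lands here, long after
                                    the original call site returned */
              return;
          }
      }
      /* unknown tag: a reply for a request we no longer remember */
  }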
>This is driven by the fact that the real world has uncorrelated
>events, capable of happening in any order, so no program that requires
>that events be ordered can survive.
You only expect the first event you await to happen before you
*expect* the second. That, because the second may have some (opaque)
dependence on the first.
>There is a benchmark for message-passing in realtime software where
>there is a ring of threads or processes passing messages around the
>ring any number of times. This is modeled on the central structure of
>many kinds of radar.
So, like most benchmarks, it is of limited *general* use.
>Even one remote invocation will cause it to jam. As
>will sending a message to oneself. Only asynchronous message passing
>will work.
>
>Joe Gwinn
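For reference, the shape of that ring benchmark reduced to a
single-process pthreads sketch (thread and lap counts are arbitrary):
each stage blocks for the token, bumps it, and hands it to its neighbor.
Replace the in-process mailbox with a blocking remote call, or have a
stage send to itself, and the jam described above follows.

  #include <pthread.h>
  #include <stdio.h>

  /* Token-ring sketch: main plus NTHREADS-1 workers pass a counter
     around the ring LAPS times. Each mailbox is a one-slot queue
     guarded by a mutex/condvar. */

  #define NTHREADS 8
  #define LAPS     1000

  typedef struct {
      pthread_mutex_t lock;
      pthread_cond_t  cond;
      int             full;
      long            token;
  } mailbox;

  static mailbox box[NTHREADS];
  static int     ids[NTHREADS];

  static void put(mailbox *m, long v)
  {
      pthread_mutex_lock(&m->lock);
      while (m->full)
          pthread_cond_wait(&m->cond, &m->lock);
      m->token = v;
      m->full  = 1;
      pthread_cond_signal(&m->cond);
      pthread_mutex_unlock(&m->lock);
  }

  static long get(mailbox *m)
  {
      long v;

      pthread_mutex_lock(&m->lock);
      while (!m->full)
          pthread_cond_wait(&m->cond, &m->lock);
      v       = m->token;
      m->full = 0;
      pthread_cond_signal(&m->cond);
      pthread_mutex_unlock(&m->lock);
      return v;
  }

  static void *worker(void *arg)
  {
      int id = *(int *)arg;

      for (;;) {
          long v = get(&box[id]);

          put(&box[(id + 1) % NTHREADS], v < 0 ? v : v + 1);
          if (v < 0)                     /* shutdown marker */
              return NULL;
      }
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];
      long v = 0;

      for (int i = 0; i < NTHREADS; i++) {
          ids[i] = i;
          pthread_mutex_init(&box[i].lock, NULL);
          pthread_cond_init(&box[i].cond, NULL);
      }
      for (int i = 1; i < NTHREADS; i++)
          pthread_create(&tid[i], NULL, worker, &ids[i]);

      for (int lap = 0; lap < LAPS; lap++) {
          put(&box[1], v);               /* main plays stage 0 */
          v = get(&box[0]);
      }
      printf("token after %d laps: %ld\n", LAPS, v);

      put(&box[1], -1);                  /* shut the ring down */
      (void)get(&box[0]);
      for (int i = 1; i < NTHREADS; i++)
          pthread_join(tid[i], NULL);
      return 0;
  }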