On 12/13/2024 2:55 AM, Martin Brown wrote:
On 12/12/2024 21:09, john larkin wrote:
On Thu, 12 Dec 2024 04:00:23 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 12/12/2024 2:59 AM, Martin Brown wrote:
Probably because it is *so* bug.
(typo for big but Freudian slip seems OK)
Once something becomes "complex" (i.e., too large to fit in a
single brain), it becomes difficult to understand the repercussions
of specific design decisions -- because you can't remember
EVERYTHING with which they interact.
Engineers design giant systems - cars, airplanes, bridges, buildings
- with lots of parts, and nobody understands all the parts. And they
work first time.
In my lifetime (or of sufficiently common "lore"):
Hindenburg explosion
Tacoma Narrows Bridge collapse
Chernobyl reactor
Hyatt Regency walkway collapse
Apollo 1 fire
Apollo 13 O2 tank explosion
Space Shuttle Challenger
Space Shuttle Columbia
Skylab
Fukushima nuclear plant
Deepwater Horizon fire/"spill"
Doors falling out of airplanes
Titanic
BIG! Chinese dam failure (no idea of name)
World Trade Center towers
Concorde
De Gaulle airport collapse
DC-10 engine falling off
Titan submersible implosion
All, obviously, software problems??
There are hundreds of years of experience building large physical objects, and customers can more or less understand engineering diagrams and, nowadays, virtual 3D renderings of their new building made possible by software.
<https://en.wikipedia.org/wiki/List_of_aircraft_structural_failures>
<https://en.wikipedia.org/wiki/List_of_building_and_structure_collapses>
<https://en.wikipedia.org/wiki/List_of_bridge_failures>
<https://en.wikipedia.org/wiki/Dam_failure#List_of_major_dam_failures>
<https://en.wikipedia.org/wiki/List_of_hydroelectric_power_station_failures>
<https://en.wikipedia.org/wiki/List_of_thermal_power_station_failures>
<https://en.wikipedia.org/wiki/List_of_catastrophic_collapses_of_broadcast_masts_and_towers>
Bias? Or sheer Ignorance?
It didn't stop someone during the build phase from connecting a high-pressure steam pipe to a stairway handrail on one plant that I know of. Big engineering diagrams can also be confusing when loads of similar-diameter pipes (and non-pipes) go through a partition.
Or, misplumb the bedside O2 supply at the hospital where SWMBO worked.
And, we won't discuss why notes were never taken at the M&M meetings
she attended. "Something wrong? On OUR part? No....."
Software is still in the medieval cathedral-building era, but without the "make walls thicker just in case" strategy. It is still a good heuristic that if it is still standing after 5 years then it was a good 'un.
And, unlike EVERYTHING physical, it doesn't wear out! Annoying that folks
can't seem to design hardware that performs the same 30 - 50 years later.
Must just be shitty designs that "fail"?
Ely cathedral on the fens and the crooked spire at Chesterfield are examples that didn't quite fall down but don't quite look as designed.
https://www.elycathedral.org
https://en.wikipedia.org/wiki/Church_of_St_Mary_and_All_Saints,_Chesterfield
Software is different, and it never works first time. Most programs
don't even compile first try.
Says the Programmer. I guess an admission of a lack of skill.
It is better if they don't compile at all until they are nearly correct. The more faults that are found at compile time the better. Static code analysis has done a lot to improve software quality in the past decade.
Lack of education is a big problem. Too easy to be a "programmer" without
having any real skillset -- beyond "Look, Ma, it (almost) works!" Kinda
like having a soldering iron and claiming to be an EE!
We quiz job applicants with really simple, disarming questions: How
do you sort a list? Then, watch to see HOW they reply. If they don't
*immediately* ask to better define the problem space but throw up
the name of a sort algorithm, we're pretty sure they're just
a programmer. So, we coax as much of that superficial knowledge from
them: how many sort algorithms can you name? how do they differ?
write the pseudocode for <pick_one>? Great, now write <another>?
Which is faster? (trick question) Why?
If they haven't mentioned any trees, we're SURE they're a programmer.
How would you use this algorithm to sort a list of integers? Based on
the third digit? Will the sort be stable? (do you even know what
that means?) variable length strings? A list with 1,000,000 entries?
1,000,000,000,000? What if you only have 25KB of working store?
How long will that take? How would you make it twice as fast? TEN
times faster? Programmers quickly fall by the wayside when you get
past the superficial knowledge needed to write X in language Y.
[And, if 'Y' is the language du jour, they're almost certainly a
programmer!]
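For what it's worth, here is the sort of thing the "stable" follow-up is probing for: a toy C sketch that sorts integers on a single decimal digit with a stable algorithm. The choice of insertion sort and the reading of "third digit" as the hundreds place are assumptions made purely for illustration, not part of the quiz script:

    /* Sort integers on one decimal digit with a *stable* sort: values whose
       key digits compare equal keep their original relative order.  "Third
       digit" is assumed to mean the hundreds place; insertion sort is used
       because it is trivially stable (C's qsort() makes no such promise). */
    #include <stdio.h>

    static int key_digit(int v)            /* hundreds digit of |v| (assumption) */
    {
        if (v < 0) v = -v;
        return (v / 100) % 10;
    }

    static void stable_sort_by_digit(int a[], int n)
    {
        for (int i = 1; i < n; i++) {      /* classic insertion sort */
            int v = a[i], j = i - 1;
            /* strict '>' (not '>=') is what preserves stability */
            while (j >= 0 && key_digit(a[j]) > key_digit(v)) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = v;
        }
    }

    int main(void)
    {
        int a[] = { 1234, 5130, 987, 42, 7100 };   /* key digits: 2,1,9,0,1 */
        stable_sort_by_digit(a, 5);
        for (int i = 0; i < 5; i++)
            printf("%d ", a[i]);           /* prints: 42 5130 7100 1234 987 */
        putchar('\n');
        return 0;
    }

A candidate who can explain why the strict comparison matters -- and what breaks at 1,000,000,000,000 entries or in 25KB of working store -- is more than "just a programmer".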
Ask a programmer how much stack his code needs. Or, how big it is
(based solely on what he's committed to paper). "We need to know how
much memory to put in the device; installing a disk drive would be
foolhardy just to give you peace of mind with your estimate. We
need to order the parts NOW so manufacturing can start building product
and YOU can install your software as they are headed out the door..."
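The answer being fished for is a static worst-case bound: sum the frames along the deepest call chain, add interrupt overhead, round up. A toy sketch, with every byte count invented for illustration (real figures come out of the map file, the compiler's stack-usage output, or hand analysis of the generated code):

    /* Toy worst-case stack budget: deepest call path plus one nested ISR.
       All the byte counts are invented for illustration. */
    #define FRAME_MAIN      32u   /* bytes -- assumed */
    #define FRAME_PARSE     48u   /* assumed */
    #define FRAME_FORMAT    24u   /* assumed */
    #define FRAME_ISR       40u   /* assumed */
    #define CALL_OVERHEAD    4u   /* return address + saved registers -- assumed */

    #define STACK_WORST_CASE  (FRAME_MAIN + FRAME_PARSE + FRAME_FORMAT + \
                               FRAME_ISR + 4u * CALL_OVERHEAD)
    /* The RAM you order has to cover STACK_WORST_CASE plus statics -- not
       "whatever was enough during the demo". */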
The big problem is that software developers get lumped with last-minute changes caused by salesmen promising new features to customers, and with hardware defects that the electronics engineers left in and that have to be remediated in software because manufacturing has already started.
Or, no one anticipated particular conditions, initially. And, came up
with a kludge to address them, after the fact.
A classic example of hardware designers' ignorance was the design of
the speech interface on some early video games. A CVSD was used, driven
by a single bit output by the processor.
Of course, don't add any hardware to HELP the processor; let it serialize
the data stream and clock. And, require it to do so at a constant
sample rate lest audible artifacts manifest.
So, the CPU sat in a very tight loop, fetching bytes from ROM, shifting them
out of the accumulator into the bit-wide output port and clocking the CVSD.
It's an interesting programming problem. Remember, the time between clocks
has to be constant, despite the number of bytes you may have to fetch and
serialize! (gee, you couldn't replace the output LATCH with a SHIFT REGISTER
that was clocked in hardware so the CPU just had to keep feeding it BYTES??)
"Duh, I'm just a hardware designer, ignorant of what my design will impose
on the software folks! But, look at how CHEAP it is!??"
[I *never* write code for someone else's hardware.]
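For the curious, the loop in question looks roughly like the sketch below. The port addresses are made-up placeholders and the real thing was cycle-counted assembly, not C -- which is exactly the point:

    /* Bit-banged CVSD feed, roughly as described above. */
    #include <stdint.h>

    volatile uint8_t *const CVSD_DATA = (uint8_t *)0x4000;  /* 1-bit latch (assumed address) */
    volatile uint8_t *const CVSD_CLK  = (uint8_t *)0x4001;  /* clock strobe (assumed address) */

    static void cvsd_play(const uint8_t *rom, uint16_t nbytes)
    {
        while (nbytes--) {
            uint8_t byte = *rom++;          /* byte fetch: once per 8 samples... */
            for (uint8_t bit = 0; bit < 8; bit++) {
                *CVSD_DATA = byte >> 7;     /* present MSB on the 1-bit port */
                *CVSD_CLK  = 1;             /* rising edge: CVSD samples the bit */
                *CVSD_CLK  = 0;
                byte <<= 1;
                /* ...so the seven "cheap" iterations must be padded to match
                   the one that includes the fetch, or the sample rate jitters
                   and the artifacts become audible.  A hardware shift register
                   would have made all of this go away. */
            }
        }
    }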
Mission creep (or starting out with nothing even resembling a coherent, self-consistent requirements specification) is a big factor in large-scale software
The latter. Software "specs" are typically bulleted feature lists.
A properly written spec nails down all of the major design decisions.
It TELLS the developer what he has to do, instead of leaving that
up to him to figure out while writing the code. Because it *defines*
the model that the developer must implement.
You should be able to write a comprehensive user/operating manual
from JUST the spec -- because the implementation is supposed to conform!
By comparison, hardware design is simple: anything outside the
"operating limits" can gleefully result in catastrophic failure.
"Well, of COURSE you can't apply 400 volts to the output connection!"
"Hmmm... spec says I need to get first and last name from user.
Do I expect the middle name to be entered as part of the first
name field? Or, last? And what about any suffix(es)? How
large of a STATIC character array should I define for each?"
<https://en.wikipedia.org/wiki/Hubert_Blaine_Wolfeschlegelsteinhausenbergerdorff_Sr.>
failures. We are stuck with the suits saying "ship it and be damned; we can always update the software later with something that actually works". Hardware tends to be immutable; even when there is a significant fault present, software is expected to kludge around it.
Or, the hardware is deliberately specified to underperform the
ORIGINAL needs of the product (let alone additional needs from
feeping creaturism).
I designed a device that had a built-in barcode reader (to read
identifiers off of blood samples). Aside from the photoreflective
sensor (HEDS-1000), I had one input pin to process the "video" stream.
No "dedicated barcode reader" that I could query for its results.
I would set the hardware to watch for a black-to-white transition
(as the label approaches the reader). Then, note the time of said
transition and reprogram the hardware to watch for the white-to-black
transition that should follow. Accumulate the times of all such
transitions for 20 characters (~160 edges). Then, convert the
times to bar/space "widths" and decode the corresponding characters
along with the resulting message.
With a 95% first-pass-read-rate at scanning speeds of 1 to 100 inches
per second.
On an 8b processor.
Allowing for ink spread, you could end up with 0.007 inch widths
that had to be resolved (0.00007 seconds between edges).
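A skeleton of that capture-then-decode scheme, with the timer and edge-detect hooks shown as hypothetical externs (the real loop also had to keep the serial ports, display, and keypad alive while doing this):

    /* Rough sketch of the edge-timing approach described above.  timer_now(),
       edge_arm() and edge_seen() are invented stand-ins for the single capture
       input and a free-running timer. */
    #include <stdint.h>

    #define LABEL_CHARS  20
    #define LABEL_EDGES  (LABEL_CHARS * 8)       /* ~160 black/white transitions */

    extern uint16_t timer_now(void);             /* free-running timer count (assumed) */
    extern void     edge_arm(int to_white);      /* watch for next edge of this sense (assumed) */
    extern int      edge_seen(void);             /* nonzero once the armed edge fires (assumed) */

    static uint16_t edge_time[LABEL_EDGES];      /* captured transition times */
    static uint16_t width[LABEL_EDGES - 1];      /* bar/space durations */

    void scan_label(void)
    {
        int to_white = 1;                        /* first edge: black-to-white as label arrives */

        for (int i = 0; i < LABEL_EDGES; i++) {  /* 1. accumulate transition times */
            edge_arm(to_white);
            while (!edge_seen())
                ;                                /* polling shown for brevity only */
            edge_time[i] = timer_now();
            to_white = !to_white;                /* re-arm for the opposite transition */
        }

        for (int i = 1; i < LABEL_EDGES; i++)    /* 2. convert times to element widths */
            width[i - 1] = edge_time[i] - edge_time[i - 1];

        /* 3. classify each width as narrow/wide against a running local average
           (the label may be moving anywhere from 1 to 100 in/s) and decode the
           20 characters and the resulting message. */
    }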
And, of course, there is nothing to prevent the user from actually
moving the label at 110, 125, 150, etc. inches per second! And,
nothing to prevent him from using the barcode reader when it
wasn't expected to be used! (No, you can't burden the workflow
with "Press button to scan barcode")
No, you can't crash. No you can't misinterpret commands and data
coming in/out over the serial ports while this is happening. Or, stop
refreshing the display, scanning the keypad, etc. PERFORMANCE can
degrade but you can't fail. And, definitely can't misread a
blood sample's identifier and assign the diagnosis to the wrong
patient! Or, misreport a previously stored result.
[This became a bit of a game among fellow employees: "let's see if
we can crash Don's box!" Nope. You could grind everything to a halt
but, eventually, your arm would tire and everything would pick up
where it left off.]
Really? A total device cost of ~$300 (DM+DL) with a selling price
of $6K -- and you can't give me a secondary processor (MCU) to offload
these requirements??
Unfortunately, most projects at universities are sufficiently small that anyone who is even reasonably good at programming can hack a solution out of the solid more quickly and without using the processes needed for large-scale software development.
*COUNT* the number of times you've seen software TREATED as a science in
industry. Where is the formal, FIXED specification? What is the TEST
PLAN? Which components will be used? What are the qualifications of
the folks charged with these tasks?
[We had a 30 man team developing a printer. A *technician* was given
responsibility for writing the firmware -- because his EXPERIENCE
consisted of having a TRS-80 at home (clearly making him most
qualified to use the HP64000!) "And you guys are a $36B Fortune 500
TECH company???" (clearly some mistake with THAT assessment!)]
I could probably code a "Hello, world!" program that would run first
try.
That is the problem. For anything under about 3 man-months you can get away with murder (and that means most university teaching projects). Things start to get a bit sticky when you are talking 3 man-years and above.
I look at complexity, typically, in terms of KLoC. 10KLoC is a "school
project". E.g., an RTOS. At 50KLoC, you're starting to approach something
where MANAGING complexity becomes a DESIGN issue; i.e., HOW you solve
the problem is as important as the actual solution.
Programmers' code quality quickly falls off as you exceed 10KLoC -- because
they haven't likely "planned their trip". Rather, they started off on
day 1 (maybe day 2?) and just wrote code thinking the destination would
be apparent, sooner or later, and they could always make deviations
from their original course to home in on that NEWLY RECOGNIZED destination.
"Have the builder start pouring the cement for the foundation -- we can't
afford to be late! I'll meet with the architect and figure out what sort
of house we'll actually be building!"
If you so despise software why are you using Spice and why are you not still cutting up bits of red and blue sticky tape?
Perhaps afraid it "won't work first time"!
Software mostly works and you have to learn to live with its quirks or write your own.
Too often, people are stuck with "consumer software" which, like most
consumer items, is shit -- designed to be cheap and replaceable.
(Why can't I upgrade my 2 year old TV -- instead of having to buy
ANOTHER one?? What do I do with THIS one??)
The whole "Agile"/XP mentality is just an acknowledgement that
the industry is now full of PROGRAMMERS. Think what hardware
design would be like if all you had were TECHNICIANS doing
the work? "Let's just keep trying things until something fits..."
There are 50-100 *distinct* computers in a modern car. Yet, we
don't hear about headlights suddenly turning off while driving at
night. Or, windows opening and closing on dogs' heads hanging out
them. Or, doors locking and unlocking, randomly. Or, the
infotainment system suddenly deciding to play Myron Floren
instead of the classical Jazz you'd selected.
The furnace in our home is 30+ years old; the microcontroller
inside it has run flawlessly 24/7/365 for all those years.
Ditto the microwave oven. VCRs commonly had 4 or 5 processors
and failures were typically hardware in nature. Even our TOASTER
has an MCU (cheaper than a bimetal strip?).
On the other hand, I replace inverters in LCD monitors, blown
power supplies, faulty connectors, etc. all the time! Isn't that
stuff that should have been PERFECTED, by now? Shoddy designs?
(Oh, you EXPECT those things to break. I see...)