Subject: Re: Rewriting SSA. Is This A Chance For GNU/Linux?
From: tnp (at) *nospam* invalid.invalid (The Natural Philosopher)
Newsgroups: comp.os.linux.advocacy, comp.os.linux.misc
Date: 05 Apr 2025, 12:13:06
Organization: A little, after lunch
Message-ID: <vsr383$2421k$1@dont-email.me>
References: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
User-Agent: Mozilla Thunderbird
On 05/04/2025 09:50, c186282 wrote:
On 4/4/25 3:15 PM, Farley Flud wrote:
On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:
I'm not sure there are any little old ladies left to knit magnetic core.
Last time I looked they were little Asian ladies with teeny nimble fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Hey, it flew us to the moon ...
Who would ever give a flying fuck about this "Neolithic" technical
crap? It's the future that is of concern.
No future without a past.
And past tricks/thinking/strategies CAN inspire
the new.
Indeed.
Many ideas that were infeasible become feasible with new technology.
Many don't. Windmills being a prime example...
My question has always been: when are these memory engineers (or
whatever they are called) going to produce cheap RAM memory that
can actually keep pace with the CPU?
Never ...
The problem is propagation delay: the distance between elements, which grows with the size of the array, divided by the speed of light.
It means that you need to start going 3D on memory to keep the speed/capacity within bounds.
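To put rough numbers on it, here is a back-of-envelope Python sketch; the 0.5c trace velocity and the two distances are illustrative assumptions, not measurements of any particular board:

# Signal propagation delay vs. one 4 GHz clock period.
# The 0.5c trace velocity and the distances are illustrative assumptions.

C = 3.0e8                        # speed of light in vacuum, m/s
V_TRACE = 0.5 * C                # rough signal speed on a board trace (assumption)
CLOCK_PERIOD_NS = 1e9 / 4.0e9    # one cycle at 4 GHz = 0.25 ns

def one_way_delay_ns(distance_m, velocity=V_TRACE):
    """One-way propagation time in nanoseconds."""
    return distance_m / velocity * 1e9

for label, metres in [("across a large die (~20 mm)", 0.02),
                      ("CPU socket to DIMM (~10 cm)", 0.10)]:
    d = one_way_delay_ns(metres)
    print(f"{label}: {d:.2f} ns one way = {d / CLOCK_PERIOD_NS:.1f} cycles at 4 GHz")

A round trip to an off-board array burns several cycles before the silicon has done any work at all, which is exactly why stacking the memory closer helps.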
It has its parallel in human political structures. Huge monolithic empires like the USSR simply fail to keep up, because the information from the bottom takes a long time to get to the top.
A far better solution is the old British Empire, with governors having power over a local nation, and very few decisions being centralised.
OK, Moore's Law is getting close to stalling-out CPU
performance. Ergo, give it a few years, the memory
MAY finally catch up.
The same laws govern both. What is happening is more local on-chip cache.
IIRC the RP2040 has 256K of memory *on the chip itself*.
That's local. Hence as fast as the chip is.
For decades we have had to use various levels of high speed, though
minuscule, cache memory in order for our software to run, and from
a programming point of view cache management is a supreme bitch.
The world needs cheap RAM that can operate at CPU speeds. Then,
all programming would be a supreme breeze.
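The usual back-of-envelope for why the cache crutch is tolerated at all is average access time versus hit rate. A minimal Python sketch, with purely illustrative latencies:

# Average memory access time (AMAT) as a function of cache hit rate.
# The two latencies are illustrative order-of-magnitude figures,
# not specs for any particular part.

CACHE_HIT_NS = 1.0     # on-chip cache hit (assumption)
DRAM_MISS_NS = 80.0    # trip out to DRAM (assumption)

def amat_ns(hit_rate):
    """hit_rate * cache latency + (1 - hit_rate) * DRAM latency."""
    return hit_rate * CACHE_HIT_NS + (1.0 - hit_rate) * DRAM_MISS_NS

for hr in (0.80, 0.95, 0.99):
    print(f"hit rate {hr:.0%}: average access {amat_ns(hr):.2f} ns")

Drop the hit rate a few points and the average balloons, which is why cache management is such a pain, and why RAM at full CPU speed would make the whole problem go away.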
It already has it, just not in the sizes you want, because of the propagation delay inherent in large arrays.
We can clock the CPUs up to around 4 GHz, mainly because we can make them down to 10 nm element size.
Below that you start to get into quantum effects and low yields.
DDR5 RAM is pushing 3 GHz speeds.
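Mind, 3 GHz on the data pins is not 3 GHz latency. A rough Python sketch, assuming a representative DDR5-4800 CL40 module (your module's timings may differ):

# First-word (CAS) latency of a DDR5 module, expressed in 4 GHz CPU cycles.
# DDR5-4800 with CL40 is a representative example, not a universal spec.

data_rate_mtps = 4800                        # mega-transfers per second
io_clock_ghz = data_rate_mtps / 2 / 1000     # DDR: two transfers per I/O clock
cas_cycles = 40                              # CL40 (example timing)

cas_latency_ns = cas_cycles / io_clock_ghz
cpu_cycles_lost = cas_latency_ns * 4.0       # a 4 GHz core runs 4 cycles per ns

print(f"CAS latency: {cas_latency_ns:.1f} ns, "
      f"roughly {cpu_cycles_lost:.0f} cycles on a 4 GHz core")

Bandwidth keeps climbing; the latency to the first word has barely moved in a couple of decades.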
Well, your 'future' isn't providing. MAYbe some odd
idea from the past, just on better hardware ???
The better idea is to look at what the actual problems are, and design massively parallel solutions to them that do not require a single processor running blindingly fast.
Cache memory is just another crutch, and its existence is indisputable
testimony that modern PC hardware is crippled shit.
Well, I'd argue that on-chip cache is always gonna
outperform - if for no other reason than the short
circuit paths. These days, the speed of electricity
over wires is becoming increasingly annoying - it's
why they want photonics instead. Of course even
that will be too slow soon enough ... and you can
complain to Einstein .......
Can't yet beat the speed of light. Photonics is not much faster than electronics.
Back in the day we measured delay on a reel of 50 ohm coax. It was about 0.95 times the speed of light, IIRC.
So that isn't the way to go.
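Put numbers on it and the point is obvious; both velocity factors below are approximate:

# Delay per metre at different velocity factors.  0.95c matches the
# air-spaced coax figure above; light in glass fibre does roughly 0.67c
# (refractive index ~1.5).  Both figures are approximate.

C = 3.0e8   # speed of light in vacuum, m/s

for label, vf in [("air-spaced coax (~0.95c)", 0.95),
                  ("light in glass fibre (~0.67c)", 0.67)]:
    ns_per_metre = 1e9 / (vf * C)
    print(f"{label}: {ns_per_metre:.2f} ns/m, "
          f"about {ns_per_metre * 4:.0f} cycles of a 4 GHz clock")

Light in glass is actually slower than a signal in good coax; either way a metre costs you a dozen-plus cycles.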
Look, you need to study the history of engineering.
Let's take the steam engine. Early engines were crude, inefficient, and very heavy and large. Maybe 1% efficient.
Roll forward to the first locomotives: still heavy, but now getting 5% efficiency.
Fiddle with that for a hundred years and the final efficiency of a steam piston engine approached the theoretical limit of the technology, without cooling or superheated steam, of around 20%. Now use superheated steam in a steam turbine with a condenser strapped on the back - suitable for ships or power stations - and you are getting up to 37%.
But that is fundamentally it. There is a law governing it:
Efficiency = (steam temperature in - steam temperature out) / (steam temperature in), with temperatures in degrees absolute (kelvin).
So for 400°C in and, say, 100°C out, T_in = 673 K and T_out = 373 K, giving a maximum thermal efficiency of about 45%.
You simply will never do better than that with water as the working fluid unless you go to horrendous inlet temperatures of superheated steam.
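Anyone who wants to check the arithmetic can do it in a few lines of Python; the second case shows why modest inlet temperatures get you nowhere:

# Carnot limit for the steam temperatures quoted above.
# Celsius is converted to kelvin ("degrees absolute") before taking the ratio.

def carnot_efficiency(t_in_c, t_out_c):
    """Maximum thermal efficiency = (T_in - T_out) / T_in, absolute temperatures."""
    t_in_k = t_in_c + 273.15
    t_out_k = t_out_c + 273.15
    return (t_in_k - t_out_k) / t_in_k

print(f"400 C in, 100 C out: {carnot_efficiency(400, 100):.0%}")   # ~45%
print(f"200 C in, 100 C out: {carnot_efficiency(200, 100):.0%}")   # ~21%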
The point is that every technology has a limit beyond which no amount of tinkering is going to get you. Engineers come to understand this; the lay public do not. They are always whining 'why can't you power the universe from one single bonfire?'
Digital computing has a little way to go, but it is already close to the limits.
For some problems, precision analogue might be faster...
--
Climate Change: Socialism wearing a lab coat.