On Sun, 6 Jul 2025 04:58:09 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>> In a previous life I had quite a huge T800 Transputer cluster and
>> also did some designs that connected to it.
>> The ISA bus was not important, but there was a link adaptor
>> chip (IMS C011? - where is my bottle of Gerontol Forte?) that had an
>> SRAM-like "foreign" side that made it easy to handle.
>>
>> In
>> <https://www.flickr.com/photos/137684711@N07/52631074700/in/datetaken/lightbox/>
>> the link chip is between the Western Digital SCSI controller and the
>> VLSI serial/par IO chip.
>>
>> Complete industrial PC/AT with Multibus2, lots of DRAM, disks,
>> floppy, ...
>> Thank Goddess I had someone to do the board layout in DOS OrCAD STD
>> on a Compaq 286 :-)
>>
>> Occam was fun. Maybe nowadays it would make a bigger impact with a
>> substantial number of CPUs on a chip.
>
> But there have been countless (for small values of countless)
> concurrent and parallel programming languages (as well as languages
> with memory models that can usurp that ability).
>
> People seem largely incapable of decomposing "programs" into
> concurrent activities *within* a language and, instead, seem to rely
> on mechanisms outside the language (e.g., OS-hosted). My take on it
> is that fine-grained concurrency is "too much detail" for most
> developers to manage (except in special-case applications).
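
[For concreteness, a minimal sketch of what "concurrent activities
*within* a language" can look like in plain C: a one-slot blocking
channel faked with POSIX threads, close in spirit to an Occam channel.
The chan_t type and the DONE sentinel are invented here purely for
illustration; build with -pthread.]

/* Hypothetical sketch: a one-slot channel; send blocks while the
   previous value is still untaken by the receiver. */
#include <pthread.h>
#include <stdio.h>

#define DONE (-1)

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cv;
    int             value;
    int             full;
} chan_t;

static chan_t ch = { PTHREAD_MUTEX_INITIALIZER,
                     PTHREAD_COND_INITIALIZER, 0, 0 };

static void chan_send(chan_t *c, int v)
{
    pthread_mutex_lock(&c->lock);
    while (c->full)                    /* wait for receiver to drain */
        pthread_cond_wait(&c->cv, &c->lock);
    c->value = v;
    c->full  = 1;
    pthread_cond_broadcast(&c->cv);
    pthread_mutex_unlock(&c->lock);
}

static int chan_recv(chan_t *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->full)                   /* wait for sender to fill */
        pthread_cond_wait(&c->cv, &c->lock);
    int v = c->value;
    c->full = 0;
    pthread_cond_broadcast(&c->cv);
    pthread_mutex_unlock(&c->lock);
    return v;
}

static void *producer(void *arg)       /* concurrent activity #1 */
{
    (void)arg;
    for (int i = 1; i <= 5; i++)
        chan_send(&ch, i * i);
    chan_send(&ch, DONE);
    return NULL;
}

int main(void)                         /* concurrent activity #2 */
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    for (int v; (v = chan_recv(&ch)) != DONE; )
        printf("got %d\n", v);
    pthread_join(t, NULL);
    return 0;
}

[Note how much ceremony the mutex/condvar plumbing adds compared to a
language where the channel is a primitive -- which is rather the point
being made above.]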
>
> [Of course, applications that are inherently SIMD/MIMD can be
> special-cased. But, the market has a sh*tload of applications that
> aren't so obviously so and should be able to benefit from concurrency
> and parallelism. Designing an application to fit a multicore
> processor WELL is a lot harder than it seems it should be!]
>
> Hence, we let compilers sort out where things can happen "in
> parallel" and free ourselves from those minutiae, looking at
> parallelism/concurrency in the model/*design*, instead, at a higher
> level of abstraction.
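
[As one everyday example of that division of labor (not from the post;
assume GCC or Clang with -fopenmp): the programmer only *declares*
that the loop's iterations are independent, and the compiler/runtime
decide how to carve them across however many cores exist.]

#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];          /* static: keep it off the stack */
    double sum = 0.0;

    /* No explicit threads, cores, or queues in the source -- the
       pragma hands the scheduling minutiae to the toolchain. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = (double)i * 0.5;
        sum += a[i];
    }

    printf("sum = %f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}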
>
> As for the transputer hardware, it seemed to not provide enough,
> soon enough.
>
> Another idea that was bulldozed away by less sophisticated -- but
> more widely available -- solutions.
>
> [E.g., why did the "pure" memory segmentation model fail to evolve
> beyond the limited implementations initially offered? Why paged
> MMUs? etc.]
>
> So all tasks are created equal? And dedicating a CPU to every last
> one of them isn't overkill for most of them?

Since CPU cores are trivial nowadays - they cost a few cents each -
the transputer concept may make sense again. We rely on an OS and
compiler tricks to get apparent parallelism, and the price is
complexity and bugs.

Why not have a CPU per task? Each with a decent chunk of dedicated
fast RAM?
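
[A rough sketch of approximating that idea on a stock multicore under
GNU/Linux -- one thread per task, each pinned to its own core with
pthread_setaffinity_np(), a private arena standing in for the
"dedicated fast RAM". The task count and arena size are made up for
illustration; and the pinned cores still share caches and the DRAM
bus, which is exactly what the transputer's truly local memory
avoided.]

/* Hypothetical: emulate "a CPU per task" by pinning one thread to
   each core; assumes at least NTASKS cores (GNU/Linux specific). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#define NTASKS 4
#define ARENA_BYTES (256 * 1024)    /* the "dedicated fast RAM" */

typedef struct {
    int   cpu;      /* core this task owns */
    char *arena;    /* memory only this task touches */
} task_t;

static void *task_main(void *arg)
{
    task_t *t = (task_t *)arg;

    /* Pin this thread to its own core: no migration, no sharing. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(t->cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    t->arena[0] = 1;    /* work against private memory only */
    printf("task pinned to cpu %d, arena %p\n",
           t->cpu, (void *)t->arena);
    return NULL;
}

int main(void)
{
    pthread_t th[NTASKS];
    task_t    tk[NTASKS];

    for (int i = 0; i < NTASKS; i++) {
        tk[i].cpu   = i;
        tk[i].arena = malloc(ARENA_BYTES);  /* unchecked: sketch only */
        pthread_create(&th[i], NULL, task_main, &tk[i]);
    }
    for (int i = 0; i < NTASKS; i++)
        pthread_join(th[i], NULL);
    return 0;
}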