Re: Anticipating processor architectural evolution

Subject: Re: Anticipating processor architectural evolution
From: jjSNIPlarkin (at) *nospam* highNONOlandtechnology.com (John Larkin)
Newsgroups: sci.electronics.design
Date: 30 Apr 2024, 02:17:35
Organization: Highland Tech
Message-ID: <hld03j1639adhp30vn43a93qduu77ga8v2@4ax.com>
References: 1 2 3 4
User-Agent: Forte Agent 3.1/32.783
On Mon, 29 Apr 2024 15:03:57 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 4/29/2024 12:19 PM, boB wrote:
>> Isn't this what Waferscale is, kinda?
>
> WSI has proven to be a dead-end (for all but specific niche markets
> and folks with deep pockets).  "Here lie The Connection Machine,
> The Transputer, etc."
>
> Until recently, there haven't really been any mainstream uses for
> massively parallel architectures (GPUs being the first real use,
> later co-opted for AI and Expert Systems).
>
> To exploit an array of identical processors, you typically need a
> problem that can be decomposed into many "roughly comparable" (in
> terms of complexity) tasks that have few interdependencies.
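
[As a rough illustration (not from the post) of the kind of decomposition
being described: the work splits into comparable, independent chunks, and
the only dependency is the final combine.  A minimal C sketch using POSIX
threads:

  /* Illustrative only: an "embarrassingly parallel" decomposition into
     NWORKERS roughly comparable chunks with no interdependencies. */
  #include <pthread.h>
  #include <stdio.h>

  #define NWORKERS 4
  #define NITEMS   1000000        /* divisible by NWORKERS */

  static double data[NITEMS];
  static double partial[NWORKERS];

  static void *worker(void *arg)
  {
      long id = (long)arg;
      long chunk = NITEMS / NWORKERS;     /* comparable workloads */
      double sum = 0.0;

      for (long i = id * chunk; i < (id + 1) * chunk; i++)
          sum += data[i];

      partial[id] = sum;                  /* private slot: no contention */
      return NULL;
  }

  int main(void)
  {
      pthread_t t[NWORKERS];
      double total = 0.0;

      for (long i = 0; i < NITEMS; i++)
          data[i] = 1.0;

      for (long i = 0; i < NWORKERS; i++)
          pthread_create(&t[i], NULL, worker, (void *)i);

      for (long i = 0; i < NWORKERS; i++) {
          pthread_join(t[i], NULL);
          total += partial[i];            /* the only real dependency */
      }

      printf("total = %g\n", total);
      return 0;
  }

Each worker touches only its own slice and its own slot in partial[],
so the threads never interact until the join.]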

A PC doesn't solve massively parallel computational problems.

One CPU can be a disk file server. One, a keyboard handler. One for
the mouse. One can be the ethernet interface. One CPU for each
printer. One would be the "OS", managing all the rest.

Cheap CPUs can run idle much of the time.

We don't need to share one CPU doing everything any more. We don't
need virtual memory. If each CPU has a bit of RAM, we barely need
memory management.
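
[For what such a dedicated CPU might look like, here is a hedged sketch
in C: a keyboard handler with its own small RAM that talks to the "OS"
CPU only through fixed-size messages.  The mailbox and scancode helpers
are stand-ins invented for the example, not a real API:

  /* Hypothetical sketch of one dedicated "keyboard CPU": a bare polling
     loop, a little private RAM, and fixed-size messages to the "OS" CPU. */
  #include <stdint.h>
  #include <stdio.h>

  struct msg { uint8_t dest; uint8_t type; uint8_t payload[6]; };

  #define OS_CPU  0
  #define MSG_KEY 1

  /* Stand-ins for platform services this CPU would really have. */
  static void mbox_send(const struct msg *m)
  {
      printf("to cpu %d: type %d, key %d\n", m->dest, m->type, m->payload[0]);
  }
  static int kbd_read_scancode(void)
  {
      static const int keys[] = { 30, 48, -1 };   /* two keys, then idle */
      static int i = 0;
      return keys[i] >= 0 ? keys[i++] : -1;
  }

  static uint8_t local_ram[4096];   /* all the state this CPU ever needs */

  /* The whole job of the "keyboard CPU": poll, package, forward. */
  int main(void)
  {
      (void)local_ram;
      for (int sc; (sc = kbd_read_scancode()) >= 0; ) {
          struct msg m = { OS_CPU, MSG_KEY, { (uint8_t)sc } };
          mbox_send(&m);            /* its only contact with the rest */
      }
      return 0;
  }

No virtual memory and no memory management enter the picture; the
handler's whole world is its private RAM plus the message interface.]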

>
> Most problems are inherently serial and/or have lots of dependencies
> that limit the amount of true parallelism that can be attained.
> Or, they have such widely differing resource needs/complexity that they
> are ill-suited to being shoe-horned into a one-size-fits-all processor
> model.  E.g., controlling a motor and recognizing faces have
> vastly different computational requirements.
>
> Communication is always the bottleneck in a processing application,
> whether it be CPU to memory, task to task, thread to thread, etc.
> It's also one of the ripest areas for bugs to creep into a design;
> designing good "seams" (interfaces) is the biggest predictor of
> success in any project of significance (that's why we have protection
> domains and preach small modules, well-defined interfaces, and a
> "contract" programming style).
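
[For illustration only, one way that "contract" style shows up at a C
interface seam is assert-based preconditions and postconditions on a
small, well-defined module boundary; fifo_put() below is a made-up
example, not anything from the post:

  #include <assert.h>
  #include <stddef.h>

  struct fifo {
      unsigned char buf[64];
      size_t head, count;      /* invariant: count <= sizeof buf */
  };

  /* Contract: f exists and is not full on entry; exactly one more
     byte is stored on exit. */
  void fifo_put(struct fifo *f, unsigned char b)
  {
      assert(f != NULL);                      /* precondition  */
      assert(f->count < sizeof f->buf);       /* precondition  */

      size_t old = f->count;
      f->buf[f->head] = b;
      f->head = (f->head + 1) % sizeof f->buf;
      f->count++;

      assert(f->count == old + 1);            /* postcondition */
  }

  int main(void)
  {
      struct fifo f = { {0}, 0, 0 };
      fifo_put(&f, 0x55);
      return 0;
  }

The seam is narrow and checkable: a caller that violates the contract
fails loudly at the interface instead of corrupting state elsewhere.]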
>
> Sadly, few folks are formally taught about these interrelationships
> (when was the last time you saw a Petri net?), so we have lots of
> monolithic designs that are brittle because they break all the
> Best Practices rules.
>
> The smarter way of tackling increasingly complex problems is better
> partitioning of hardware resources (with similarly architected
> software atop) using FIFTY YEAR OLD protection mechanisms to enforce
> the boundaries between "virtual processors".
>
> This allows a processor having the capabilities required by the most
> demanding "component" to be leveraged to also handle the needs of
> those of lesser complexity.  It also gives you a speedy way of exchanging
> information between those processors without requiring specialized
> fabric for that task.
>
> And, that SHARED mechanism is easily snooped to see who is talking to
> whom (as well as prohibiting interactions that *shouldn't* occur!).
>
> E.g., I effectively allow for the creation of virtual processors of
> specific capabilities and resource allocations AS IF they were discrete
> hardware units interconnected by <something>.  This lets me dole out
> the fixed resources (memory, MIPS, time, watts) in the box to specific
> uses and have "extra" for uses that require them.
>
> (I can set a virtual processor to only have access to 64KB! -- or 16K,
> or 16MB -- of memory, only allow it to execute a million opcode fetches
> per second, etc., and effectively have a tiny 8-bit CPU emulated within
> a much more capable framework.  And not be limited to moving data via
> a serial port to other such processors!)
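
[A hypothetical sketch of that "virtual processor with a budget" idea,
in C: an emulated CPU handed a private memory window and a fixed number
of opcode fetches per scheduling tick.  The struct layout and vm_step()
are invented for illustration, not anything from the post:

  #include <stdint.h>
  #include <stddef.h>

  struct vcpu {
      uint8_t  *mem;           /* this VP's private window              */
      size_t    mem_size;      /* 16 KB, 64 KB, 16 MB ... set per VP    */
      uint32_t  fetch_budget;  /* opcode fetches allowed per tick       */
      uint32_t  pc;
  };

  /* Stub fetch/execute: a real VP would decode the opcode at mem[pc];
     here we only consume the fetch. */
  static int vm_step(struct vcpu *v)
  {
      v->pc = (uint32_t)((v->pc + 1) % v->mem_size);
      return 0;                /* nonzero would mean halted/faulted */
  }

  /* One scheduling tick: the VP stops when its quota is spent, no matter
     how much the host CPU could have given it. */
  static void vcpu_run_tick(struct vcpu *v)
  {
      for (uint32_t fetches = 0; fetches < v->fetch_budget; fetches++)
          if (vm_step(v) != 0)
              break;
  }

  int main(void)
  {
      static uint8_t ram[64 * 1024];                      /* 64KB window */
      struct vcpu vp = { ram, sizeof ram, 1000000u, 0 };  /* 1M fetches/tick */
      vcpu_run_tick(&vp);
      return 0;
  }
]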

Why virtual processors, if real ones are cheap?


