Subject: Re: architectural goals, Byte Addressability And Beyond
From: mitchalsup (at) *nospam* aol.com (MitchAlsup1)
Newsgroups: comp.arch
Date: 04 Jun 2024, 02:32:58
Organization: Rocksolid Light
Message-ID: <3c702903c4202e7960b0c49050f2a0e7@www.novabbs.org>
References: 1 2 3 4 5 6 7 8
User-Agent: Rocksolid Light
Lawrence D'Oliveiro wrote:
>> If you make such a strong statement, I assume that you have done a
>> thorough analysis of this feature for typical mainframe workloads and
>> can support your claims with benchmarks.
>
> We already know the answer to that. It’s why RISC has taken over the
> computing world.

Oh Wait !?!

> Remember that “mainframe workloads” are primarily I/O bound, not
> CPU-bound. The whole concept of a “mainframe” arose in the era when
> CPU time was scarce and expensive, so you had all these intelligent
> I/O peripherals that could be given sequences of operations to
> perform, with minimal CPU intervention. It was all about maximizing
> throughput (batch operation), not minimizing latency (interactive
> operation).
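[As an aside: on S/360-class machines those "sequences of operations" were channel programs, i.e. chains of channel command words handed to an I/O channel that ran them to completion on its own. A toy sketch of the idea in Python; every name here is invented for illustration, not any real API:]

```python
# Toy model of a channel program: the CPU builds a chain of commands
# (loosely modeled on S/360 CCWs) and hands it off; the channel walks
# the whole chain itself and, in real hardware, interrupts the CPU
# only once, when the chain is done.
from dataclasses import dataclass

@dataclass
class CCW:               # one "channel command word" (illustrative)
    op: str              # e.g. "SEEK", "READ"
    arg: object = None

class Channel:
    """Toy I/O channel: executes a command chain without CPU help."""
    def __init__(self, device):
        self.device = device

    def start_io(self, program):
        results = []
        for ccw in program:              # the channel, not the CPU, loops
            results.append(self.device.execute(ccw))
        return results                   # hardware: one interrupt at the end

class ToyDisk:
    def __init__(self):
        self.pos = 0
        self.blocks = {7: b"payroll record"}

    def execute(self, ccw):
        if ccw.op == "SEEK":
            self.pos = ccw.arg
            return None
        if ccw.op == "READ":
            return self.blocks.get(self.pos, b"")
        raise ValueError(ccw.op)

# One "start I/O" kicks off the whole transfer; the CPU is free meanwhile.
disk = ToyDisk()
chan = Channel(disk)
data = chan.start_io([CCW("SEEK", 7), CCW("READ")])
assert data[-1] == b"payroll record"
```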
One of the reasons those CPUs were microcoded was to allow I/O
activities to have 50% of the compute power and 50% of the memory
bandwidth. Thus, from one set of HW logic one got 2 different
computers, one designed for COBOL and the other designed for I/O (of
that era), sharing the same expensive lump of circuits.
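[That 50/50 sharing can be pictured as a single micro-engine granting alternate cycles to two logical machines. Purely illustrative Python; no real machine's microcode looks like this, but the round-robin split is the point:]

```python
# Illustrative sketch: one shared micro-engine alternates cycles between
# two instruction streams, so the "COBOL CPU" and the "I/O processor"
# each get 50% of the compute and 50% of the memory bandwidth of the
# same lump of logic.
def run_shared(streams_by_name, cycles):
    """Round-robin one cycle to each logical computer in turn."""
    names = list(streams_by_name)
    iters = {name: iter(ops) for name, ops in streams_by_name.items()}
    logs = {name: [] for name in names}
    for cycle in range(cycles):
        name = names[cycle % len(names)]       # whose turn is it?
        op = next(iters[name], "idle")
        logs[name].append((cycle, op))         # this stream owns cycle N
    return logs

logs = run_shared(
    {"cpu": ["load", "add", "store"], "io": ["seek", "read", "xfer"]},
    cycles=6,
)
# Even cycles went to the CPU stream, odd cycles to the I/O stream.
assert [c for c, _ in logs["cpu"]] == [0, 2, 4]
assert [c for c, _ in logs["io"]] == [1, 3, 5]
```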
> Nowadays, the whole concept is obsolete. So the only thing keeping it
> a viable business has to be marketing, not technical, reasons.
Microcode that "runs the instruction pipeline" is obsolete. And if
anyone slogged through the Nick Tredennick book they would understand
why.

Microcode is still viable at the function-unit level, in converting
FMUL logic into performing FDIV and SQRT calculations at low added
cost.
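[The FMUL-reuse trick is Newton-Raphson iteration: both 1/b and 1/sqrt(d) can be refined using only multiplies and adds, so a few microcode steps through the multiplier array deliver FDIV and FSQRT. A sketch of the math; the seed constants and iteration counts below are illustrative, not those of any shipping FPU:]

```python
import math

def recip(d, iters=5):
    """Approximate 1/d with multiplies and subtracts only
    (Newton-Raphson: x <- x*(2 - d*x)); each step roughly doubles
    the number of correct bits."""
    m, e = math.frexp(d)             # d = m * 2**e, with 0.5 <= m < 1
    x = 2.9142 - 2.0 * m             # crude linear seed for 1/m
    for _ in range(iters):
        x = x * (2.0 - m * x)        # quadratic convergence
    return math.ldexp(x, -e)

def fdiv(a, b):
    """Divide = multiply by the reciprocal: a/b = a * (1/b)."""
    return a * recip(b)

def fsqrt(d, iters=6):
    """sqrt(d) via Newton-Raphson on 1/sqrt(d):
    x <- x*(1.5 - 0.5*d*x*x) -- again multiplies/adds only,
    so it maps onto the same FMUL hardware."""
    if d == 0.0:
        return 0.0
    m, e = math.frexp(d)
    if e % 2:                        # make the exponent even so it halves cleanly
        m *= 2.0
        e -= 1
    x = 1.0                          # crude seed for 1/sqrt(m), m in [0.5, 2)
    for _ in range(iters):
        x = x * (1.5 - 0.5 * m * x * x)
    return d * math.ldexp(x, -e // 2)   # sqrt(d) = d * (1/sqrt(d))
```

[The iteration itself never divides or takes a root; a better seed (e.g. a small lookup table, as real implementations use) would cut the iteration count to two or three passes through the multiplier.]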