Subject : Re: Why VAX Was the Ultimate CISC and Not RISC
From : johnl (at) *nospam* taugh.com (John Levine)
Newsgroups : comp.arch
Date : 01 Mar 2025, 21:46:29
Organization : Taughannock Networks
Message-ID : <vpvrn5$2hq0$1@gal.iecc.com>
References : 1 2
User-Agent : trn 4.0-test77 (Sep 1, 2010)
According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
>The answer was no, the VAX could not have been done as a RISC
>architecture. RISC wasn't actually price-performance competitive until
>the latter 1980s:
>
>RISC didn't cross over CISC until 1985. This occurred with the
>availability of large SRAMs that could be used for caches.
>
>Like other USA-based computer architects, Bell ignores ARM, which
>outperformed the VAX without using caches and was much easier to
>design.
That's not a fair comparison. VAX design started in 1975 and it shipped in 1978.
The first ARM design started in 1983, with working silicon in 1985, nearly a
decade later.
On the other hand, I think some things were shortsighted even at the time. As
Bell's paper said, they knew about Moore's law but didn't believe it. If they
had believed it, they could have made the instructions a little less dense and a
lot easier to decode and pipeline. STRETCH did pipelining in the 1950s, so they
should have been aware of it and considered that future machines could use it.
As someone else noted, they had microcode on the brain, and the VAX instruction
set is clearly designed to be decoded by microcode one byte at a time. Address
modes can have side effects, so you have to decode them serially or have a big
honking hazard scheme. They probably also assumed that microcode ROM would
be faster than RAM, which even in 1975 was not particularly true. Rather than
putting every possible instruction into microcode, have a fast subroutine call
and make them subroutines, which can be cached and pipelined.
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly