Subject: Re: Why VAX Was the Ultimate CISC and Not RISC
From: theom+news (at) *nospam* chiark.greenend.org.uk (Theo)
Newsgroups: comp.arch
Date: 05 Mar 2025, 12:58:35
Organization: University of Cambridge, England
Message-ID: <ZFe*A8G8z@news.chiark.greenend.org.uk>
References: 1
User-Agent: tin/1.8.3-20070201 ("Scotasay") (UNIX) (Linux/5.10.0-28-amd64 (x86_64))
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> The answer was no, the VAX could not have been done as a RISC
> architecture. RISC wasn’t actually price-performance competitive until
> the latter 1980s:
>
>    RISC didn’t cross over CISC until 1985. This occurred with the
>    availability of large SRAMs that could be used for caches. It
>    should be noted at the time the VAX-11/780 was introduced, DRAMs
>    were 4 Kbits and the 8 Kbyte cache used 1 Kbits SRAMs. Memory
>    sizes continued to improve following Moore’s Law, but it wasn’t
>    till 1985, that Reduced Instruction Set Computers could be built
>    in a cost-effective fashion using SRAM caches. In essence RISC
>    traded off cache memories built from SRAMs for the considerably
>    faster, and less expensive Read Only Memories that held the more
>    complex instructions of VAX (Bell, 1986).
ARM2 had no caches, but was still table-topping in its era.
The thing often missed in the CISC v RISC debate is the cost of main memory.
In the 1970s DRAM (or discrete SRAM) was very expensive. So you wanted a very
tight, maximally expressive instruction encoding, which resulted in
complex microcode and many-cycle instructions. Effectively the microcode
was a table of library functions and the assembly was more like a series of
API calls.
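
A minimal sketch of what that "library call" view means in practice (my
example, not from the post, assuming a plain byte-wise copy): on a VAX a
block copy like the one below could compile to a single string-move
instruction such as MOVC3, with the per-byte loop executed out of on-chip
microcode ROM, whereas a RISC compiler has to emit the loop itself.

    /* Illustrative only.  On VAX this whole function can become one
     * MOVC3 instruction; the looping logic lives in microcode ROM.
     * On a RISC the compiler emits the loop as ordinary load/store,
     * decrement and branch instructions fetched from memory or cache. */
    #include <stddef.h>

    void block_copy(char *dst, const char *src, size_t n)
    {
        while (n--)            /* per byte: load, store, count, branch */
            *dst++ = *src++;
    }

The point for the memory argument: the single-instruction form costs a
handful of bytes of expensive DRAM, with the loop logic paid for once in
cheap on-chip ROM.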
In the mid 1980s (~1984) the Japanese entered the DRAM market, which
caused the price of DRAMs to fall dramatically. That meant you could have a
RISC CPU which was more profligate with its instruction encoding but had a
much simpler pipeline and so much better IPC. You didn't need
the microcode library any more; you could just let the compiler do it.
Also memory bandwidth had improved, allowing better feeding of a more
profligate CPU (and compilers had got better too).
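
To put rough numbers on that trade-off (purely illustrative assumptions,
not measurements): say a dense CISC encoding averages 3 bytes per
operation and a fixed-width RISC uses 4. A program of 100,000 operations
is then roughly 300 KB of CISC code versus 400 KB of RISC code - about a
third more instruction bytes to hold in memory and fetch. In the 1970s
that extra third of expensive DRAM mattered; by the mid 80s it was a
cheap price for a pipeline that could approach one instruction per cycle
instead of spending many cycles per instruction in microcode.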
In the late 1980s process improvements meant that on-die caches had become
more affordable, which further improved memory bandwidth and latency.
Theo