Lawrence D'Oliveiro wrote:
Found this paper
<https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm>
at Gordon Bell’s website. Talking about the VAX, which was designed as
the ultimate “kitchen-sink” architecture, with every conceivable
feature to make it easy for compilers (and humans) to generate code,
he explains:
>
The VAX was designed to run programs using the same amount of
memory as they occupied in a PDP-11. The VAX-11/780 memory range
was 256 Kbytes to 2 Mbytes. Thus, the pressure on the design was
to have very efficient encoding of programs. Very efficient
encoding of programs was achieved by having a large number of
instructions, including those for decimal arithmetic, string
handling, queue manipulation, and procedure calls. In essence, any
frequent operation, such as the instruction address calculations,
was put into the instruction-set. VAX became known as the
ultimate, Complex (Complete) Instruction Set Computer. The Intel
x86 architecture followed a similar evolution through various
address sizes and architectural fads.
>
The VAX project started at roughly the same time the first RISC
concepts were being researched. Could the VAX have been designed as a
RISC architecture to begin with? The question matters, because not
doing so meant that, just over a decade later, RISC architectures took
over the “real computer” market and wiped the floor with DEC’s
flagship architecture, performance-wise.
>
The answer was no; the VAX could not have been done as a RISC
architecture. RISC wasn’t actually price-performance competitive until
the late 1980s:
>
RISC didn’t cross over CISC until 1985. This occurred with the
availability of large SRAMs that could be used for caches. It
should be noted at the time the VAX-11/780 was introduced, DRAMs
were 4 Kbits and the 8 Kbyte cache used 1 Kbits SRAMs. Memory
sizes continued to improve following Moore’s Law, but it wasn’t
till 1985, that Reduced Instruction Set Computers could be built
in a cost-effective fashion using SRAM caches. In essence RISC
traded off cache memories built from SRAMs for the considerably
faster, and less expensive Read Only Memories that held the more
complex instructions of VAX (Bell, 1986).
If you look at the VAX 8800 or NVAX uArch you see that even in 1990 it
was still taking multiple clocks to serially decode each instruction,
and that basically stalls away any benefits a pipeline might have
given.
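To make the serial-decode point concrete, here is a rough C sketch
(mine, not DEC's actual logic) of what just *locating* the operands of
one VAX instruction involves. The specifier mode encodings follow the
VAX architecture manual; the opcode table that would supply the operand
count and operand data size is assumed and omitted, and validity checks
are skipped.

#include <stddef.h>
#include <stdint.h>

/* Length in bytes of one VAX operand specifier.  The length depends
 * on the specifier's own mode nibble, so specifier i+1 cannot even be
 * located until specifier i has been parsed. */
static size_t specifier_len(const uint8_t *p, size_t datalen)
{
    uint8_t mode = *p >> 4;
    uint8_t reg  = *p & 0x0F;

    switch (mode) {
    case 0x0: case 0x1: case 0x2: case 0x3:
        return 1;                        /* short literal: whole byte */
    case 0x4:                            /* index prefix + base specifier */
        return 1 + specifier_len(p + 1, datalen);
    case 0x5: case 0x6: case 0x7:
        return 1;                        /* Rn, (Rn), -(Rn) */
    case 0x8:                            /* (Rn)+; (PC)+ is immediate */
        return (reg == 0xF) ? 1 + datalen : 1;
    case 0x9:                            /* @(Rn)+; @(PC)+ is absolute */
        return (reg == 0xF) ? 1 + 4 : 1;
    case 0xA: case 0xB:
        return 1 + 1;                    /* byte displacement (deferred) */
    case 0xC: case 0xD:
        return 1 + 2;                    /* word displacement (deferred) */
    default:
        return 1 + 4;                    /* long displacement (deferred) */
    }
}

/* Walk one instruction.  Each loop iteration depends on the result of
 * the previous one, and that chain is why decode is serial no matter
 * how wide the fetch path is. */
size_t vax_insn_len(const uint8_t *insn, int nops, size_t datalen)
{
    size_t off = 1;                      /* opcode byte (2 for 0xFD page) */
    for (int i = 0; i < nops; i++)
        off += specifier_len(insn + off, datalen);
    return off;
}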
>
If they had just put in *the things they actually used*
(as shown by DEC's own instruction usage stats from 1982),
and left out all the things that were rarely or never used,
it would have had 50 or so opcodes instead of 305,
with at most one operand that addressed memory on arithmetic and logic
opcodes,
and 3 address modes (register, register address, register offset
address)
instead of 0 to 5 variable-length operands with 13 address modes each
(most combinations of which are either silly, redundant, or illegal).
Then they would have been able to parse instructions in one clock,
which makes pipelining a possible consideration,
and simplifies the uArch so it can all fit on one chip,
which allows it to compete with RISC.
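For contrast, here is a hypothetical fixed 32-bit encoding along the
lines sketched above (my illustration, not anything DEC specified).
Every field sits at a fixed bit position, so decode is a handful of
parallel mask-and-shift networks instead of a serial walk.

#include <stdint.h>

/* Three address modes, matching the trio named above. */
enum addr_mode {
    M_REG        = 0,   /* operand in Rn           */
    M_REG_ADDR   = 1,   /* operand at memory[Rn]   */
    M_REG_OFFSET = 2    /* operand at memory[Rn+d] */
};

struct decoded {
    uint8_t opcode;     /* 6-bit field: room for the ~50 used opcodes */
    uint8_t mode;       /* one of the three modes above */
    uint8_t rd, rs;     /* 4-bit register numbers, 16 registers */
    int16_t disp;       /* displacement for M_REG_OFFSET */
};

/* All five fields are extracted independently -- roughly one clock of
 * logic, with no dependence of one field's position on another. */
static inline struct decoded decode(uint32_t insn)
{
    struct decoded d;
    d.opcode = (insn >> 26) & 0x3F;
    d.mode   = (insn >> 24) & 0x03;
    d.rd     = (insn >> 20) & 0x0F;
    d.rs     = (insn >> 16) & 0x0F;
    d.disp   = (int16_t)(insn & 0xFFFF);
    return d;
}

Anything the other VAX modes could express would be expected to compile
to short sequences of these, which is the usual RISC trade.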
The reason it was designed the way it was, was because DEC had
microcode and microprogramming on the brain, as did most of academia
at the time.
In this 1975 paper Bell and Strecker say it over and over and over:
Orthogonality, Regularity, Expressibility, ...
They were looking at the CPU design as one large parsing machine
and not as a set of parallel hardware tasks.
This was their mindset just before they started the VAX design:
>
What Have We Learned From the PDP-11? (Bell & Strecker, 1975)
https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf