Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
> David Brown <david.brown@hesbynett.no> writes:
>> On 04/10/2024 19:30, Anton Ertl wrote:
>>> David Brown <david.brown@hesbynett.no> writes:
>>>> On 04/10/2024 00:17, Lawrence D'Oliveiro wrote:
>>>>> Compare this with the pain the x86 world went through, over a much
>>>>> longer time, to move to 32-bit.
>>>>
>>>> The x86 started from 8-bit roots, and increased width over time, which
>>>> is a very different path.
>>>
>>> Still, the question is why they did the 286 (released 1982) with its
>>> protected mode instead of adding IA-32 to the architecture, maybe at
>>> the start with a 386SX-like package and with real-mode only, or with
>>> the MMU in a separate chip (like the 68020/68851).
>>
>> I can only guess the obvious - it is what some big customer(s) were
>> asking for.  Maybe Intel didn't see the need for 32-bit computing in the
>> markets they were targeting, or at least didn't see it as worth the cost.
>
> Anyone could see the problems that the PDP-11 had with its 16-bit
> limitation.  Intel saw it in the iAPX 432 starting in 1975.  It is
> obvious that, as soon as memory grows beyond 64KB (and already the
> 8086 catered for that), the protected mode of the 80286 would be more
> of a hindrance than even the real mode of the 8086.  I find it hard to
> believe that many customers would ask Intel for something like the
> 80286 protected mode with segments limited to 64KB, and even if they
> did, that Intel would listen to them.  This looks to me much more like
> an idee fixe that one or more of the 286 project leaders had, and all
> customer input was made to fit into this idea, or was ignored.
From my point of view the main drawbacks of the 286 are poor support
for large arrays and problems for Lisp-like systems, which have a lot
of small data structures and traverse them via pointers.
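
As a sketch of the large-array problem (my illustration, with
hypothetical helper names, not code from any 286 compiler): an array
larger than 64 kB cannot fit in one segment, so every access has to
split a 32-bit index into a selector part and a 16-bit offset, roughly:

    #include <stdint.h>

    /* hypothetical helpers: the point is the split, and the
       segment-register reload it implies on every access */
    uint16_t selector_part(uint32_t i) { return (uint16_t)(i >> 16); }
    uint16_t offset_part(uint32_t i)   { return (uint16_t)(i & 0xFFFFu); }

and in protected mode every selector load costs a descriptor-table
check, which is also what hurts pointer-chasing Lisp-style code.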
However, playing devil's advocate, I can see sense in the 286. IMO
Intel targeted quite a different market. IIUC the main intended market
for the 8086 was industrial control and various embedded applications.
The 286 was probably intended for similar markets, but with stronger
emphasis on security. In control applications it is typical to
have several cooperating processes. The 286 allows separate local
descriptor tables for each task, so a multitasking program may
easily have, say, 30000 descriptors. Trying to get a similar number
of separately protected objects using paging would require a
similar number of pages, which with a 16 MB total address space
leads to 512-byte pages. For smaller paged systems the situation
is even worse: with 512 kB of memory, 512-byte pages lead to
1024 pages in total, which means that access control can not
be very granular and one would get significant memory
fragmentation for objects smaller than a page. I can guess that
Intel rejected very small pages as problematic to implement.
So if the goal is fine-grained access control, then segmentation
for a machine of the 286's size looks better than paging.
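
Spelling out the arithmetic behind those numbers:

    16 MB / 30000 objects ~= 559 bytes -> nearest power of two: 512-byte pages
    512 kB / 512 bytes     = 1024 pages in total

so on a small machine a page table can only distinguish about a
thousand protected objects, versus tens of thousands of descriptors.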
Concerning the code "model", I think that Intel assumed that
most procedures would fit in a single segment and that an
average procedure would be on the order of a single kilobyte.
Using 16-bit offsets for jumps inside a procedure and a
segment-offset pair for calls is likely to give performance
better than or similar to a purely 32-bit machine. For
control applications it is likely that each procedure
will access a moderate number of segments and the total amount
of accessed data will be moderate. In other words, Intel
probably considered a "mostly medium" model where a procedure
mainly accesses its data segment using just 16-bit offsets
and occasionally accesses other segments.
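
As a concrete sketch of such a "mostly medium" model, in the near/far
dialect of the later 16-bit C compilers (Microsoft/Borland keywords;
my illustration, not something Intel prescribed):

    int counter;                  /* default data segment, 16-bit offset */
    void far log_event(int code); /* inter-segment call: segment:offset */

    void near step(void)          /* near call: 16-bit offset only */
    {
        counter++;                /* plain 16-bit addressing */
        if (counter > 100)
            log_event(counter);   /* occasional far access */
    }

Most code runs on cheap 16-bit offsets; only the occasional call or
far data reference pays for a segment-register reload.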
Compared to the PDP-11 this leads to reasonably natural
code that uses some hundreds of kilobytes of memory,
much better than the 128 kB limit of the PDP-11 with separate
code and data areas. And segment manipulation also allows
bigger programs.
What went wrong? IIUC there were several control systems
using 286 features, so there was some success. But PCs
became the main user of x86 chips, and a significant fraction
of PCs was used for gaming. Game authors wanted direct
access to hardware, which in the case of the 286 forced real mode.
Also, for a long time the 8088 played a major role and PC software
"had" to run on the 8088. Software vendors theoretically could
release separate versions for each processor, or do some
runtime switching of critical procedures, but the easiest way
was to depend on compatibility with the 8088. "Better" OSes
went the Unix way, depending on paging and not using segmentation.
But IIUC the first paging Unix appeared _after_ the release of the 286.
In 286 times Multics was highly regarded, and it depended heavily
on segmentation. MVS was using paging hardware, but talked
about segments, except that MVS segmentation was flawed
because some addresses far outside a segment were considered
part of a different segment. I think that in VMS there was also
some talk of segments. So the creators of the 286 could believe
that they were providing the "right thing", and not a fake
possible with paging hardware.
> Concerning the cost, the 80286 has 134,000 transistors, compared to
> supposedly 68,000 for the 68000, and the 190,000 of the 68020.  I am
> sure that Intel could have managed a 32-bit 8086 (maybe even with the
> nice addressing modes that the 386 has in 32-bit mode) with those
> 134,000 transistors if Motorola could build the 68000 with half of
> that.
I think that Intel could have managed to build a "mostly" 32-bit
processor in the transistor budget of the 8086, that is, with say
7 registers of 32 bits each, where each register could be treated
as a pair of 16-bit registers and 32-bit operations would take
twice as much time as 16-bit operations. But I think that such a
processor would be slower (say 10% slower) than the 8086, mostly
because of the greater need to use longer addresses. Similarly, a
hypothetical 32-bit 286 would be slower than the real 286. And I
do not think they could make a 32-bit processor with segmentation
in the available transistor budget, and even if they managed it,
it would be slowed down by too-long addresses (segment + 32-bit
offset).
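
To illustrate the register-pairing idea, here is a sketch in C of how
such a machine would sequence a 32-bit add as two 16-bit ALU passes
(my reconstruction of the scheme sketched above, not anything Intel
documented):

    #include <stdint.h>

    /* one 32-bit add = two 16-bit adds; the second pass consumes
       the carry out of the first, so it takes twice the time */
    uint32_t add32(uint16_t a_lo, uint16_t a_hi,
                   uint16_t b_lo, uint16_t b_hi)
    {
        uint32_t lo = (uint32_t)a_lo + b_lo;                /* first pass */
        uint16_t hi = (uint16_t)(a_hi + b_hi + (lo >> 16)); /* second pass */
        return ((uint32_t)hi << 16) | (uint16_t)lo;
    }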
-- Waldek Hebisch