antispam@fricas.org (Waldek Hebisch) writes:
> From my point of view the main drawbacks of the 286 are poor support for
> large arrays and problems for Lisp-like systems, which have a lot
> of small data structures and traverse them via pointers.
Yes. In the first case the segments are too small; in the second case
there are too few segments (if you have one segment per object).
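To put rough numbers on both complaints, here is a back-of-the-envelope
sketch (plain C doing only the arithmetic; the 64 KiB segment limit and the
13-bit descriptor index are architectural facts of the 286, the 1 MiB array
is just an example size):

#include <stdio.h>

int main(void)
{
    /* 286 protected mode: the segment limit is a 16-bit field,
       so one segment covers at most 64 KiB. */
    const unsigned long max_segment = 1UL << 16;

    /* A selector has a 13-bit descriptor index, so the GDT and an LDT
       each hold at most 8192 descriptors. */
    const unsigned long descriptors_per_table = 1UL << 13;

    const unsigned long array_bytes = 1UL << 20;  /* a hypothetical 1 MiB array */

    printf("64 KiB segments needed for a 1 MiB array: %lu\n",
           array_bytes / max_segment);            /* 16 */
    printf("objects addressable with one segment per object: %lu\n",
           descriptors_per_table);                /* 8192 per table */
    return 0;
}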
> Concerning the code "model", I think that Intel assumed that
> most procedures would fit in a single segment and that the
> average procedure would be on the order of a few kilobytes.
> Using 16-bit offsets for jumps inside a procedure and a
> segment-offset pair for calls is likely to lead to performance
> better than or similar to a purely 32-bit machine.
With the 80286's segments and their slowness, that is very doubtful.
The 8086 has branches with 8-bit offsets and branches and calls with
16-bit offsets. The 386 in 32-bit mode has branches with 8-bit
offsets and branches and calls with 32-bit offsets; if 16-bit offsets
for branches were useful enough for performance, they could
instead have made the longer branch offset 16 bits, and
maybe added a prefix for 32-bit branch offsets.  That would be faster than
what you outline, as soon as one call happens. But apparently 16-bit
branches are not that beneficial, or they would have gone that way on
the 386.
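To make the "as soon as one call happens" point concrete, here is a minimal
sketch of the trade-off.  The instruction sizes (JMP/CALL rel16 = 3 bytes,
far CALL ptr16:16 = 5 bytes, the rel32 forms = 5 bytes) are the documented
encodings, but the cycle figures are rough 286 numbers quoted from memory
and should be treated as assumptions:

#include <stdio.h>

int main(void)
{
    /* Assumed rough 286 timings: a near CALL takes about 7 cycles, a
       protected-mode far CALL (descriptor load plus checks) about 26. */
    const unsigned long near_call_cycles = 7;    /* assumption */
    const unsigned long far_call_cycles  = 26;   /* assumption */

    const unsigned long calls = 100;   /* hypothetical inter-procedure calls */

    /* 16-bit branch offsets save about 2 bytes per long intra-procedure
       branch compared to rel32, but next to nothing in cycles; the far
       calls of the segmented scheme dominate as soon as calls happen. */
    printf("call overhead: segmented ~%lu cycles, flat 32-bit ~%lu cycles\n",
           calls * far_call_cycles, calls * near_call_cycles);
    return 0;
}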
Another usage of segments for code would be to put the code segment of
a shared object (known as DLL among Windowsheads) in a segment, and
use far calls to call functions in other shared objects, while using
near calls within a shared object.  This allows sharing the code
segments between different programs, and locating them anywhere in
physical memory. However, AFAIK shared objects were not a thing in
the 80286 timeframe; Unix only got them in the late 1980s.
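A minimal sketch of that calling convention, in plain C with made-up
selector values: calls inside a shared object need only a 16-bit offset,
calls into another object go through a table of far (selector:offset)
entry points, which is what lets each object's code segment sit anywhere
in physical memory and be shared read-only between processes:

#include <stdint.h>
#include <stdio.h>

/* One imported entry point: code-segment selector plus entry offset. */
struct far_entry {
    uint16_t selector;
    uint16_t offset;
};

/* Hypothetical import table of one shared object; the selector and
   offset values are made up for illustration. */
static const struct far_entry imports[] = {
    { 0x004B, 0x0120 },   /* some function in the C library object */
    { 0x0053, 0x0040 },   /* some function in another object       */
};

int main(void)
{
    /* A near call inside the object encodes only the 16-bit offset; a
       cross-object call uses both halves, and in protected mode the
       selector load is the expensive part. */
    printf("import 0 -> %04X:%04X\n",
           (unsigned)imports[0].selector, (unsigned)imports[0].offset);
    return 0;
}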
I used Xenix on a 286 in 1986 or 1987; my impression is that programs
were limited to 64KB code and 64KB data size, exactly the PDP-11 model
you denounce.
> What went wrong? IIUC there were several control systems
> using 286 features, so there was some success. But PCs
> became the main users of x86 chips, and a significant fraction
> of PCs was used for gaming. Game authors wanted direct
> access to the hardware, which in the case of the 286 forced real mode.
All successful software used direct hardware access because of
performance; the rest waned.  Using BIOS calls was just too slow.
Lotus 1-2-3 won out over VisiCalc and Multiplan by being faster, thanks to
writing directly to video memory.
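For anyone who has not seen it: "writing directly to video" means storing
character/attribute pairs straight into the text-mode frame buffer at
real-mode segment 0xB800 (80x25 cells, 2 bytes each) instead of issuing an
INT 10h BIOS call per character.  A small sketch of the real-mode address
arithmetic only (plain C, not something that actually pokes the hardware):

#include <stdio.h>

int main(void)
{
    const unsigned seg = 0xB800;          /* colour text-mode frame buffer */
    const unsigned row = 10, col = 40;    /* arbitrary screen position     */
    const unsigned offset = (row * 80 + col) * 2;  /* 2 bytes per cell     */

    /* Real mode: physical address = segment * 16 + offset.  A program
       just stores "character, attribute" there; no BIOS call involved. */
    unsigned long phys = (unsigned long)seg * 16 + offset;

    printf("cell (%u,%u) lives at physical address 0x%05lX\n", row, col, phys);
    return 0;
}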
> But IIUC the first paging Unix appeared _after_ the release of the 286.
From
<https://en.wikipedia.org/wiki/History_of_the_Berkeley_Software_Distribution#3BSD>:
|The kernel of 32V was largely rewritten by Berkeley graduate student
|Özalp Babaoğlu to include a virtual memory implementation, and a
|complete operating system including the new kernel, ports of the 2BSD
|utilities to the VAX, and the utilities from 32V was released as 3BSD
|at the end of 1979.
The 80286 was introduced on February 1, 1982.
> In the 286's time Multics was highly regarded, and it depended heavily
> on segmentation. MVS was using paging hardware but was
> talking about segments, except that MVS segmentation
> was flawed because some addresses far outside a segment were
> considered part of a different segment. I think that also
> in VMS there was some talking about segments. So the creators
> of the 286 could believe that they were providing the "right thing"
> and not the fake that is possible with paging hardware.
There was various segmented hardware around, first and foremost (for
the designers of the 80286), the iAPX432. And as you write, all the
good reasons that resulted in segments on the iAPX432 also persisted
in the 80286. However, given the slowness of segmentation, only the
tiny (all in one segment), small (one segment for code and one for
data), and maybe medium memory models (one data segment) are
competitive in protected mode compared to real mode.
So if they really had wanted protected mode to succeed, they should
have designed in 32-bit data segments (and maybe also 32-bit code
segments). Alternatively, if protected mode and the 32-bit addresses
do not fit in the 286 transistor budget, a CPU that implements the
32-bit feature and leaves out protected mode would have been more
popular than the 80286; and (depending on how the 32-bit extension was
implemented) it might have been a better stepping stone towards the
kind of CPU with protected mode that they imagined; but the designers
of that alt-386 would probably not have designed the kind of protected
mode that the 286 actually got.
Concerning paging, all these scenarios are without paging. Paging was
primarily a virtual-memory feature, not a memory-protection feature.
It acquired memory protection only as far as it was easy with pages
(i.e., at page granularity). So paging was not designed as a
competition to segments as far as protection was concerned. If
computer architects had managed to design segmentation with
competitive performance, we would be seeing hardware with both paging
and segmentation nowadays. Or maybe even without paging, now that
memories tend to be big enough to make virtual memory mostly
unnecessary.
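A minimal sketch of what "protection only at page granularity" means,
assuming 4 KiB pages as on the 386: permissions attach to whole pages, so
anything sharing a page shares its protection, whereas a per-object segment
could carry its own limit and rights:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12          /* 4 KiB pages                      */
#define NPAGES     16          /* toy address space for the sketch */

enum { P_READ = 1, P_WRITE = 2 };

static unsigned char page_perm[NPAGES];    /* one permission set per page */

static int access_ok(uint32_t vaddr, unsigned need)
{
    uint32_t page = vaddr >> PAGE_SHIFT;      /* the offset bits are       */
    return page < NPAGES                      /* ignored, so rights apply  */
        && (page_perm[page] & need) == need;  /* to the whole page         */
}

int main(void)
{
    page_perm[3] = P_READ | P_WRITE;
    /* Two different objects at 0x3010 and 0x3FF0 both live in page 3 and
       therefore get identical protection; a segment per object could give
       each its own limit and rights. */
    printf("%d %d\n", access_ok(0x3010, P_WRITE), access_ok(0x3FF0, P_WRITE));
    return 0;
}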
> And I do not think they could make a
> 32-bit processor with segmentation in the available transistor
> budget,
Maybe not.
> and even if they managed, it would be slowed down by too-long
> addresses (segment + 32-bit offset).
On the contrary, every program that does not fit in the medium memory
model on the 80286 would run at least as fast on such a CPU in real
mode and significantly faster in protected mode.
- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
  Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>