antispam@fricas.org (Waldek Hebisch) writes:
>Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
>>antispam@fricas.org (Waldek Hebisch) writes:
>>>From my point of view the main drawbacks of the 286 are poor support for
>>>large arrays and problems for Lisp-like systems, which have a lot
>>>of small data structures and traverse them via pointers.
>>
>>Yes. In the first case the segments are too small, in the latter case
>>there are too few segments (if you have one segment per object).
>
>In the second case one can pack several objects into a single
>segment, so except for lost security properties this is not
>a big problem.
If you go that way, you lose all the benefits of segments, and run
into the "segments too small" problem. Which you then want to
circumvent by using segment and offset in your addressing of the small
data structures, which leads to:

>>But there is a lot of loading segment registers
>>and slow loading is a problem.
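The cost difference behind this point can be sketched in C. This is a simplified illustrative model, not the CPU's actual logic; the descriptor is reduced to a base and a limit:

```c
#include <stdint.h>
#include <assert.h>

/* Real mode: linear address = segment * 16 + offset.  No memory
   access is needed to form it, so loading a segment register is cheap. */
uint32_t real_mode_linear(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off;
}

/* Protected mode (simplified): the selector indexes a descriptor
   table, so the CPU must fetch the descriptor and check the limit
   and access rights before any address can be formed.  That extra
   fetch-and-check is what makes segment register loads expensive. */
struct descriptor { uint32_t base; uint32_t limit; };

uint32_t prot_mode_linear(const struct descriptor *table,
                          uint16_t selector, uint16_t off) {
    const struct descriptor *d = &table[selector >> 3]; /* bits 0-2 are RPL/TI */
    assert(off <= d->limit);   /* on real hardware a violation raises #GP */
    return d->base + off;
}
```

So a program that packs small objects into shared segments ends up reloading segment registers on nearly every pointer traversal, paying the descriptor-fetch cost each time.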
...
>Using 16-bit offsets for jumps inside procedures and a
>segment-offset pair for calls is likely to lead to performance
>better than or similar to a purely 32-bit machine.
With the 80286's segments and their slowness, that is very doubtful.
The 8086 has branches with 8-bit offsets and branches and calls with
16-bit offsets. The 386 in 32-bit mode has branches with 8-bit
offsets and branches and calls with 32-bit offsets; if 16-bit offsets
for branches had been useful enough for performance, they could
instead have made the longer branch offset 16 bits wide, and
maybe added a prefix for 32-bit branch offsets.
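The reach of each displacement width can be sketched; the functions below are a hypothetical model of target formation, not actual decoder logic:

```c
#include <stdint.h>

/* An 8-bit displacement is sign-extended, so a short branch reaches
   -128..+127 bytes from the address of the next instruction. */
uint16_t short_branch_target(uint16_t next_ip, int8_t disp8) {
    return (uint16_t)(next_ip + disp8);
}

/* A 16-bit displacement reaches the whole 64KB code segment,
   wrapping modulo 2^16. */
uint16_t near_branch_target(uint16_t next_ip, int16_t disp16) {
    return (uint16_t)(next_ip + disp16);
}
```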

>At that time Intel apparently wanted to avoid having too many
>instructions.
Looking in my Pentium manual, the section on CALL has 20 lines for
"call intersegment", "call gate" (with privilege variants) and "call
to task" instructions, 10 of which probably already existed on the 286
(compared to 2 lines for "call near" instructions that existed on the
286), and the "Operation" section (the specification in pseudocode)
consumes about 4 pages, followed by a 1.5 page "Description" section.
9 of these 10 far call variants deal with protected-mode things, so
Intel obviously had no qualms about adding instruction variants. If
they instead had no protected mode, but some 32-bit support, including
the near call with 32-bit offset that I suggest, that would have
reduced the number of instruction variants.

>>I used Xenix on a 286 in 1986 or 1987; my impression is that programs
>>were limited to 64KB code and 64KB data size, exactly the PDP-11 model
>>you denounce.
>
>Maybe. I have seen many cases where software essentially "wastes"
>good things offered by hardware.
Which "good things offered by hardware" do you see "wasted" by this
usage in Xenix?
To me this seems to be the only workable way to use
the 286 protected mode. Ok, the medium model (near data, far code)
may also have been somewhat workable, but looking at the cycle counts
for the protected-mode far calls on the Pentium (and on the 286 they
were probably even more costly), which start at 22 cycles for a "call
gate, same privilege" (compared to 1 cycle on the Pentium for a
direct call near), one would strongly prefer the small model.

>>Every successful software used direct access to hardware because of
>>performance; the rest waned. Using BIOS calls was just too slow.
>>Lotus 1-2-3 won out over VisiCalc and Multiplan by being faster from
>>writing directly to video.
>
>For most early graphics cards direct screen access could be allowed
>just by allocating an appropriate segment. And most non-games
>could gain good performance with a better system interface.
>I think that the variety of tricks used in games and their
>popularity made protected-mode systems much less appealing
>to vendors. And that discouraged work on better interfaces
>for non-games.
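Direct screen access in those days amounted to writing character/attribute words into the text-mode video buffer. A portable sketch of the idea, with an ordinary array standing in for the buffer that DOS machines expose at segment 0xB800:

```c
#include <stdint.h>

#define COLS 80
/* 80x25 color text mode: each cell is one little-endian word,
   low byte = character, high byte = attribute.  On real hardware
   this buffer lives at 0xB800:0000; a plain array stands in for
   it here so the sketch can run anywhere. */
static uint16_t vram[25 * COLS];

void put_string(int row, int col, const char *s, uint8_t attr) {
    uint16_t *cell = &vram[row * COLS + col];
    while (*s)
        *cell++ = (uint16_t)((attr << 8) | (uint8_t)*s++);
}
```

A protected-mode OS could grant exactly this by handing the program a segment whose base is the video buffer, which is Waldek's point.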
Microsoft and IBM invested lots of work in a 286 protected-mode
interface: OS/2 1.x. It was limited to the 286 at the insistence of
IBM, even though work started in August 1985, when they already knew
that the 386 was coming soon. OS/2 1.0 was released in April 1987,
1.5 years after the 386.
OS/2 1.x flopped, and by the time OS/2 was adjusted to the 386, it was
too late, so the 286 killed OS/2; here we have a case of a software
project being death-marched by tying itself to "good things offered by
hardware" (except that Microsoft defected from the death march after a
few years).
Meanwhile, Microsoft introduced Windows/386 in September 1987 (in
addition to the base (8086) variant of Windows 2.0, which was released
in December 1987), which used 386 protected mode and virtual 8086 mode
(which was missing in the "brain-damaged" (Bill Gates) 286). So
Windows completely ignored 286 protected mode. Windows eventually
became a big success.
Also, Microsoft started NT OS/2 in November 1988 to target the 386
while IBM was still working on 286 OS/2. Eventually Microsoft and IBM
parted ways, NT OS/2 became Windows NT, which is the starting point of
all remaining Windowses from Windows XP onwards.
Xenix, apart from OS/2 the only other notable protected-mode OS for
the 286, was ported to the 386 in 1987, after SCO secured "knowledge
from Microsoft insiders that Microsoft was no longer developing
Xenix", so SCO (or Microsoft) might have done it even earlier if the
commercial situation had been less muddled; in any case, Xenix jumped
the 286 ship ASAP.
The verdict is: The only good use of the 286 is as a faster 8086;
small memory model multi-tasking use is possible, but the 64KB
segments are so limiting that everybody who understood software either
decided to skip this twist (Microsoft, except on their OS/2 death
march), or jumped ship ASAP (SCO).

>More generally, vendors could release separate versions of
>programs for 8086 and 286 but few did so.
Were there any who released software in both 8086 and protected-mode
80286 variants? Microsoft/SCO with Xenix; anyone else?

>And users having
>only binaries wanted to use 8086 programs on their new systems, which
>led to heroic efforts like the OS/2 DOS box and later Linux
>dosemu. But integration of 8086 programs with protected
>mode was solved too late for the 286 model to gain traction
>(and on the 286 a "DOS box" had to run in real mode, breaking
>normal system protection).
Linux never ran on a 80286, and DOSemu uses the virtual 8086 mode,
which does not require heroic efforts AFAIK.

>>There was various segmented hardware around, first and foremost (for
>>the designers of the 80286), the iAPX432. And as you write, all the
>>good reasons that resulted in segments on the iAPX432 also persisted
>>in the 80286. However, given the slowness of segmentation, only the
>>tiny (all in one segment), small (one segment for code and one for
>>data), and maybe medium memory models (one data segment) are
>>competitive in protected mode compared to real mode.
>
>AFAICS that covered the vast majority of programs during the eighties.
The "vast majority" is not enough; if a key application like Lotus
1-2-3 or Wordperfect did not work on the DOS alternative, the DOS
alternative was not used. And Lotus 1-2-3 and Wordperfect certainly
did not limit themselves to 64KB of data.

>Turbo Pascal offered only the medium memory model.
According to Terje Mathisen, it also offered the large memory model.
On its Wikipedia page, I find: "Besides allowing applications larger
than 64 KB, Byte in 1988 reported ... for version 4.0". So apparently
Turbo Pascal 4.0 introduced support for the large memory model in
1988.

>Intel apparently assumed that programmers are willing to spend
>extra work to get good performance and IMO this was right
>as a general statement. Intel probably did not realize that
>programmers would be very reluctant to spend work on security
>features and in particular to spend work on making programs
>fast in 286 protected mode.
80286 protected mode is never faster than real mode on the same CPU,
so the way to make programs fast on the 286 is to stick with real
mode; using the small memory model is an alternative, but as
mentioned, the memory limits are too restrictive.

>Intel probably assumed that the 286 would cover most needs,
As far as protected mode was concerned, they hardly could have been
more wrong.

>especially
>given that most systems had much less memory than the 16 MB theoretically
>allowed by the 286.
They provided 24 address pins, so they obviously assumed that there
would be 80286 systems with >8MB. 64KB segments are already too
limiting on systems with 1MB (which was supported by the 8086),
probably even for anything beyond 128KB.
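Crossing the 64KB boundary in real mode meant "huge" pointer arithmetic: after adding a byte offset, the seg:off pair is renormalized so consecutive segment values walk through consecutive memory. A sketch of that well-known technique:

```c
#include <stdint.h>

/* Real-mode "huge" pointer arithmetic: an object larger than 64KB
   is addressed as seg:off, and after adding a byte offset the pair
   is renormalized so the offset stays below 16.  This works only
   because real-mode segments overlap every 16 bytes; in 286
   protected mode consecutive selector values do not map to
   consecutive memory, so the trick is unavailable there. */
struct huge_ptr { uint16_t seg; uint16_t off; };

struct huge_ptr huge_add(struct huge_ptr p, uint32_t bytes) {
    uint32_t linear = ((uint32_t)p.seg << 4) + p.off + bytes;
    struct huge_ptr r = { (uint16_t)(linear >> 4),
                          (uint16_t)(linear & 0xF) };
    return r;
}
```

Every such addition costs a multi-instruction renormalization plus a segment register load, which is why large arrays were painful on anything but a flat address space.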

>IMO this is partially true: there
>is a class of programs which with some work fit into the medium
>model, but using a flat address space is easier. I think that
>on the 286 (that is, with a 16-bit bus) those programs (assuming enough
>tuning) run faster than a flat 32-bit version.
Maybe in real mode. Certainly not in protected mode. Just run your
tuned large-model protected-mode program against a 32-bit small-model
program for the same task on a 386SX (which is reported as having a
very similar speed to the 80286 on 16-bit programs).
And even if you
find one case where the protected-mode program wins, nobody found it
worth their time to do this nonsense.
And so OS/2 flopped despite
being backed by IBM and, until 1990, Microsoft.

>But I think that Intel segmentation had some
>attractive features during the eighties.
You are one of a tiny minority. Even Intel finally saw the light, as
did everybody else, and nowadays segments are just a bad memory.