On Sun, 5 Jan 2025 21:49:20 -0000 (UTC), antispam@fricas.org (Waldek Hebisch) wrote:
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
Meanwhile, Microsoft introduced Windows/386 in September 1987 (in
addition to the base (8086) variant of Windows 2.0, which was released
in December 1987), which used 386 protected mode and virtual 8086 mode
(which was missing in the "brain-damaged" (Bill Gates) 286). So
Windows completely ignored 286 protected mode. Windows eventually
became a big success.
What I recall is a bit different. IIRC the first successful version of
Windows, that is Windows 3.0, had 3 modes of operation: 8086 compatible,
286 protected mode, and 386 protected mode. Only later did Microsoft
drop the requirement for 8086 compatibility.
They didn't drop 8086 so much as require the 386. Windows and the "DOS
box" required the CPU to have "virtual 8086" mode.
I think still later it dropped 286 support.
I know 286 protected mode support continued at least through NT. Not
sure about 2K.
Windows 95 was supposed to be 32-bit, but contained quite a lot
of 16-bit code.
The GUI was 32-bit, the kernel and drivers were 16-bit. Weird, but it
made some hardware interfacing easier.
IIRC the system interface to Windows 3.0 and 3.1 was 16-bit, and only
later did Microsoft release an extension allowing 32-bit system calls.
Never programmed 3.0.
3.1 and 3.11 (WfW) had a combination 16/32-bit kernel in which most
device drivers were 16-bit, but the disk driver could be either 16 or
32 bit. In WfW the network stack also was 32-bit and the NIC driver
could be either.
However the GUI in all 3.x versions was 16-bit 286 protected mode.
You could run 32-bit "Win32s" programs (Win32s being a subset of
Win32), but Win32s programs could not use graphics.
I have no information about Windows internals except for some
public statements by Microsoft and other people, but I think
it reasonable to assume that Windows was actually a successful
example of 8086/286/386 compatibility. That is, their 16-bit
code could use real mode segmentation or protected mode
segmentation, the latter both for 286 and 386. For the 32-bit
version they added a translation layer to transform arguments
between the 16-bit world and the 32-bit world. It is possible
that this translation layer involved a lot of effort.
For a number of years I worked on Windows based image processing
systems that used OTS ISA-bus acceleration hardware. The drivers were
16-bit DLLs, and /non-reentrant/. There was one "general" purpose
board and several special purpose boards that could be combined with
the general board in "stacks" that communicated via a private high
speed bus. There could be multiple stacks of boards in the same
system.
[Our most complicated system had 7 boards in 2 stacks, one with 5
boards and the other with 2. Our biggest system had 18 boards: 6
stacks of 3 boards each. Ever see a 20 slot ISA backplane?]
The non-reentrant driver made it difficult to simultaneously control
separate stacks to do different tasks. We created a (reentrant)
32->16 bit dispatching "thunk" DLL to translate calls for every
function of every board that we might possibly want to use ...
hundreds in all ... and then dynamically loaded multiple instances of
the driver as required. PITA !!! Worked fine but very hard to debug,
particularly when doing several different operations simultaneously.
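In rough outline the 32-bit side of that thunk looked something like the
sketch below. The names and structure here are invented for illustration
(the real argument translation and the actual 32->16 transition were far
messier, and on Win32s/95 that transition was generated thunk glue, not
plain C); the point is just one lock and one separately loaded 16-bit
driver instance per board stack, so calls into the non-reentrant driver
are serialized per stack while different stacks can run concurrently.

    /* Rough sketch, hypothetical names. */
    #include <windows.h>

    typedef struct {
        CRITICAL_SECTION lock;   /* serializes entry into this stack's driver */
        HINSTANCE        drv16;  /* this stack's own copy of the 16-bit DLL   */
    } BOARD_STACK;

    /* Hypothetical helper: converts flat 32-bit arguments into the 16:16
       form the driver expects and performs the actual 32->16 call. */
    extern int Call16(HINSTANCE drv16, int func, const void *args32);

    int ThunkDispatch(BOARD_STACK *stk, int func, const void *args32)
    {
        int rc;
        EnterCriticalSection(&stk->lock);      /* driver is non-reentrant */
        rc = Call16(stk->drv16, func, args32);
        LeaveCriticalSection(&stk->lock);
        return rc;
    }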
On 3.x we simulated threading in the shared 16-bit application space
using multiple processes, messaging with hidden windows, and "far
call" IPC using the main program as a kind of "shared library". Having
real threads on 95 and later allowed actually consolidating everything
into the same program and (at least initially) made everything easier.
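The hidden-window messaging was essentially the pattern below (a Win16-style
sketch; the class and message names are made up here): each cooperating
process owns an invisible window, and peers find it by class name and post
it work requests.

    #include <windows.h>

    #define WM_DO_WORK (WM_USER + 1)   /* private "do one unit of work" msg */

    LRESULT CALLBACK WorkerWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DO_WORK) {
            /* ... run one step of the requested operation, then return so */
            /* the shared, cooperative 16-bit message loop keeps pumping.  */
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    /* Sender side: locate the peer's hidden window and hand it a request. */
    void PostWork(int opcode, LONG arg)
    {
        HWND peer = FindWindow("HiddenWorkerClass", NULL);
        if (peer)
            PostMessage(peer, WM_DO_WORK, (WPARAM)opcode, (LPARAM)arg);
    }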
But then NT forced dealing with protected mode interrupts, while at
the same time still using 16-bit drivers for everything else - and
that became yet another PITA.
We continued to use the image hardware until SIMD became fast enough
to compete (circa GHz Pentium4 being available on SBC). Excepting
NT3.x we had systems based on every Windows from 3.1 to NT4.
Anyway, it seems that Windows was at least as tied to the 286
as OS/2 when it became successful, and dropped 286 support
later. And for a long time after dropping 286 support
Windows made heavy use of 16-bit segments.
I don't know exactly when 286 protected mode was dropped. I do know
that, at least through NT4, 16-bit DOS mode and GUI applications would
run so long as they relied on system calls and didn't directly try to
touch hardware.
I occasionally needed to run 16-bit VC++ on my NT4 machine.
IIUC Microsoft kept 8086 support in Windows up to 3.0, and probably so
did everybody who wanted to say "supported on Windows". That is, Windows
3.0 on a 286 almost surely used 286 protected mode and probably ran
"Windows" programs in protected mode. But Windows also supported the
8086, and Microsoft guidelines insisted that a proper "Windows program"
should run on an 8086.
Yes. I used - but never programmed - 3.0 on a V20 (8086 clone). It
was painfully slow even with 1MB of RAM.
... Even Intel finally saw the light, as
did everybody else, and nowadays segments are just a bad memory.
Well, 16-bit segments clearly are too limited when one has several
megabytes of memory. And a consistently 32-bit segmented system
increases memory use, which is a nontrivial cost. OTOH there is the
question of how much customers are going to pay for security
features. I think recent times show that security has significant
costs. But lack of security may lead to big losses. So
there is no easy choice.
Now people talk more about capabilities. AFAICS capabilities
offer more than segments, but are going to have a higher cost.
So, abstractly, for some systems segments may still look
attractive. OTOH we now understand that the software ecosystem
is much more varied than the prevalent view in the seventies
assumed, so systems that fit segments well are a tiny part of it.
And speaking of bad memories, do you remember PAE? That had a
similar spirit to 8086 segmentation. I guess that due to the
bad feeling for segments among programmers (and possibly more
relevant compatibility troubles) Intel did not extend this to
segments, but the spirit was still there.
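For anyone who has forgotten, the parallel is roughly this: in both cases
the address the program works with is narrower than the physical memory the
machine can reach, so something outside the flat address (segment registers
there, OS-managed page table mappings here) selects which window of physical
memory you are actually touching:

    8086 real mode:        physical (20 bits) = segment * 16 + offset   (16-bit registers)
    PAE (36 bits on P6):   physical           = page tables[ linear (32 bits) ]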
The bad taste of segments is from exposure to Intel's half-assed
implementation which exposed the segment selector as part of the
address.
Segments /should/ have been implemented similar to the way paging is
done: the program using flat 32-bit addresses and the MMU (SMU?)
consulting some kind of segment "database" [using the term loosely].
Intel had a chance to do it right with the 386, but instead they
doubled down and expanded the existing poor implementation to support
larger segments.
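Something like the toy lookup below is what I have in mind (my own made-up
structures, not any real hardware's tables): with Intel's scheme the
selector is literally part of the far pointer the program manipulates,
while the paging-style alternative would keep the program on flat addresses
and let the hardware find the covering segment on its own.

    /* Intel's way: a far pointer is selector:offset, so the segment is
       exposed in every address the program handles.  In real mode the
       mapping is just arithmetic: */
    unsigned long real_mode_linear(unsigned short seg, unsigned short off)
    {
        return ((unsigned long)seg << 4) + off;   /* 20-bit result on an 8086 */
    }

    /* The "do it like paging" idea: the program uses flat 32-bit addresses,
       and a segment unit finds which entry covers the address and checks
       the limit; the selector never appears in the pointer at all. */
    struct seg_entry { unsigned long base, limit; };

    int smu_lookup(const struct seg_entry *tab, int n, unsigned long flat)
    {
        for (int i = 0; i < n; i++)                 /* toy linear scan; real  */
            if (flat >= tab[i].base &&              /* hardware would use an  */
                flat - tab[i].base <= tab[i].limit) /* associative structure  */
                return i;                           /* covering segment found */
        return -1;                                  /* no segment => fault    */
    }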
I realize that transistor counts at the time might have made an
on-chip SMU impossible, but ISTM the SMU would have been a very small
component that (if necessary) could have been implemented on-die as a
coprocessor.
<grin>Maybe my de-deuces are wild ...</grin>
but there they are nonetheless.
YMMV.