On 10/17/2024 4:34 PM, EricP wrote:
There is a reference in this Reg article
https://www.theregister.com/2024/10/15/intel_amd_x86_future/
to the x86S spec, a proposal from Intel to pare down x86/x64
by removing or modifying legacy features.
[PDF] Envisioning a Simplified Intel Architecture
https://www.intel.com/content/www/us/en/developer/articles/technical/envisioning-future-simplified-architecture.html
Some examples are:
3 Architectural Changes
3.1 Removal of 32-Bit Ring 0
3.2 Removal of Ring 1 and Ring 2
3.3 Removal of 16-Bit and 32-Bit Protected Mode
3.4 Removal of 16-Bit Addressing and Address Size Overrides
3.5 CPUID
3.6 Restricted Subset of Segmentation
3.7 New Checks When Loading Segment Registers
3.7.1 Code and Data Segment Types
3.7.2 System Segment Types (S=0)
3.8 Removal of #SS and #NP Exceptions
3.9 Fixed Mode Bits
3.9.1 Fixed CR0 Bits
3.9.2 Fixed CR4 Bits
3.9.3 Fixed EFER Bits
3.9.4 Removed RFLAGS
3.9.5 Removed Status Register Instruction
3.9.6 Removal of Ring 3 I/O Port Instructions
3.9.7 Removal of String I/O
Pros:
Technically makes sense for PCs as they are.
Cons:
Loses some of the major aspects of what makes x86 unique;
Doesn't really solve the issues for x86-64's longer-term survival.
Absent a change to a more sensible encoding scheme, and limiting or removing condition codes, x86-64 still drags this major boat anchor (the pain with CC's being that nearly every ALU instruction implicitly writes the flags, which the hardware then has to rename and track). But these can't be changed without breaking backwards compatibility (at least, assuming hardware that continues running x86-64 as the native hardware ISA).
Though, ironically, most "legacy x86" stuff could probably be served acceptably with emulators.
If it can't maintain a performance advantage (say, if ARM and RISC-V catch up or exceed the performance possible on higher end x86 chips), it is effectively done.
Granted, ARM also has the dead weight of ALU condition codes; and RISC-V has some traditional limitations of its own.
ARM64 would likely beat RV64G in a clock-for-clock sense, but potentially RV64 could be clocked a little faster due to not having to deal with CC's, ...
As I see it, a case could almost be made for going more the Apple "Rosetta" route: switching to some other ISA (be it ARM or RISC-V or whatever else) and running existing/legacy software primarily via emulation. The main thing one would need in this case is a decent emulator (JIT or AOT based) and enough helpers to work around the things that are a pain to do efficiently in pure software emulation (like twiddling the bits in EFLAGS/RFLAGS based on the result of ALU instructions).
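As a concrete example of the flags pain: below is a minimal C sketch (mine, purely illustrative; the function name is made up) of the kind of bit twiddling a pure-software emulator ends up doing after a single emulated 64-bit ADD.

#include <stdint.h>

#define FLAG_CF (1u << 0)    /* carry    */
#define FLAG_ZF (1u << 6)    /* zero     */
#define FLAG_SF (1u << 7)    /* sign     */
#define FLAG_OF (1u << 11)   /* overflow */

/* Recompute the RFLAGS bits for "dst = a + b" (64-bit ADD). */
static uint32_t x86_flags_add64(uint64_t a, uint64_t b, uint64_t dst)
{
    uint32_t f = 0;
    if (dst < a)           f |= FLAG_CF;  /* unsigned wrap = carry out */
    if (dst == 0)          f |= FLAG_ZF;
    if ((int64_t)dst < 0)  f |= FLAG_SF;
    /* signed overflow: a and b agree in sign, dst disagrees */
    if ((int64_t)((a ^ dst) & (b ^ dst)) < 0)
        f |= FLAG_OF;
    return f;
}

That's half a dozen ALU ops per emulated instruction if done eagerly; QEMU and friends mostly dodge it by evaluating flags lazily (stash the operands, only materialize the bits when a later instruction actually reads them).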
This matters more for end-user use-cases, since:
End users care about backwards compatibility;
Both low-end embedded, and things like webservers, have little need to care about compatibility (so in theory could just jump directly to ARM or RISC-V or whatever).
Not a whole lot of other obvious use cases where an x86-64-only CPU would retain a clear advantage over jumping to another ISA.
Going forward, it seems more likely to face competition from "cheap" processors being "good enough" rather than direct competition at the high end (where x86-64 has traditionally dominated). High-end designs can't really compete as well on the "cheap" end (though a cheaper design may still be competitive if one can have more cores, even if per-thread performance is worse). Seemingly, there isn't much further to go "up" in terms of single-threaded performance (it is more a question of whether the competition can play "catch up").
They could possibly hold on by also jettisoning x86-64 as the native ISA, and coming up with something that can allow things to be more competitive at lower cost. But, replacing it doesn't really "save" it either.
Say:
Switch over to a less terrible encoding scheme;
Limit (if not eliminate) the use of condition codes in the native ISA (say, CC's mostly existing as helper machinery to make emulation faster; see the sketch after this list);
Could maybe offload x86 compatibility to firmware (say, the EFI BIOS provides a hardware-optimized JIT compiler).
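To make the "helper machinery" idea concrete: suppose (hypothetically; the names below are made up, no such thing exists) the new ISA had a single instruction that materialized an x86-style flag word for a given operation. Building on the earlier flags sketch, the JIT-emitted code for an emulated ADD then collapses to roughly two ops:

/* Hypothetical: on the imagined ISA, emu_hw_flags_add64() would be
   one native helper instruction; here it's a portable stand-in
   reusing x86_flags_add64() from the earlier sketch. */
static uint32_t emu_hw_flags_add64(uint64_t a, uint64_t b)
{
    return x86_flags_add64(a, b, a + b);
}

/* What a JIT might emit for an emulated "ADD r64, r64". */
static uint64_t emu_add64(uint64_t a, uint64_t b, uint32_t *rflags)
{
    uint64_t dst = a + b;
    *rflags = emu_hw_flags_add64(a, b);  /* ideally a single native op */
    return dst;
}

The point being that the CC machinery lives off to the side, used only by the emulator, rather than every native ALU instruction implicitly writing flags.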
If the new ISA were tuned towards efficiently emulating x86-64, while also being cheaper, it could still hold an advantage.
Say, if one could make the CPU itself have 35% better perf/W by jumping to a different encoding scheme, this could easily offset a 20% cost paid for JIT compiling everything when running legacy software...
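(Back of the envelope: 1.35x perf/W for the new core, times 1/1.20 for the JIT overhead, is 1.35/1.20 = 1.125, so still a ~12% net win even when everything runs under emulation; native code for the new ISA keeps the full 35%.)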
Granted, this is predicated on the assumption that a different encoding scheme could actually deliver such a jump.
In many other cases, even if emulation is slower than it might have been to run the code natively, it may not matter that much.
Say, for example, one can run WinXP in QEMU on an Android phone and then proceed to play Diablo2 or similar. In these cases, the limiting factor may be more that the UI experience sucks, rather than the potentially significant performance overhead of running WinXP in QEMU on a smartphone...
The major selling point of x86 has been its backwards compatibility, but this advantage may be weakening now that emulation can approach native performance. If Windows could jump ship and provide an experience that "doesn't suck" (fast/reliable/transparent emulation of existing software), the main advantages of the x86-64 legacy may go away (and this is already mostly moot on Linux, since the distros typically recompile everything from source, with few real/significant ties to the x86 legacy).
This situation may itself change if MS continues trying to shoot themselves in the foot (eg, making Win11 bad enough that people are more tempted to jump over to Linux once Win10 is no longer usable). Theoretically, it would be more in MS's interest to make Windows not suck (rather than trying to force crap on people and making the Windows experience kinda suck...).
...