On 10/21/2024 7:03 PM, MitchAlsup1 wrote:
> On Mon, 21 Oct 2024 22:02:27 +0000, BGB wrote:
>> On 10/17/2024 4:34 PM, EricP wrote:
>
>> Pros:
>>   Technically makes sense for PCs as they are.
>> Cons:
>>   Loses some of the major aspects of what makes x86 unique;
>>   Doesn't really solve the issues for x86-64's longer-term survival.
>
> x86's long term survival depends on things out of AMD's and Intel's
> hands. It depends on high volume access to devices people will buy
> new every year or every other year. A PC is not such a thing, while
> a cell phone seems to be.
It worked better for them when PCs kept getting faster.
Back then there was more reason to want to buy new PCs and new CPUs; when the reason is "just because" or planned obsolescence, this isn't so good.
Not so much now, when a ~7-year-old CPU model is nearly as fast as its newer equivalents.
The issue then isn't so much one of speed as some newer software being like "not gonna run on that". Then Win11 is also like, "Nope, not gonna run on that"...
Theoretically the CPU could work, but the MOBO lacks a TPM, and there is also the long-standing "virtualization doesn't work for whatever reason" issue (it can be enabled in the BIOS, but still doesn't work, ...).
So VirtualBox and Hyper-V and similar: "Nope".
QEMU and DOSBox are still happy enough though...
Apparently there are ways to force-install Win11 without a TPM, but then Windows Update and similar apparently refuse to work.
Win10 is still good enough for now; what next?... Dunno.
Still better than the whole Apple / iPhone thing, with the apparent practice of remotely throttling performance and then (ultimately) sending a kill-switch signal once the devices get old enough.
Well, then with Android, it lasts until "Google Play" or similar stops working (well, or on an older device, "Android Market").
Like, say, the usefulness of an Android 2.1 device being more limited by the non-functional "Android Market" than by the performance of the hardware. Meanwhile, a Windows Vista era laptop is at least still technically usable (well, I can still technically use my XP-era laptop as well, but at this point I have to custom-build software for it via VS2008 or the Platform SDK v6.1; and in terms of performance it generally loses to a RasPi).
Then again (from long ago), I have memories from before of messing with a 90s-era laptop that basically failed at trying to run Quake 2 and Half-Life. IIRC, it was running Win 98, but was hard pressed to run much of anything newer than Doom or similar.
I decided to leave out some stuff, but digging around on the internet, it looks like the closest match to what I remember is the ThinkPad 365E or 365X (had a 3.5" floppy drive and parallel port, no CD-ROM or USB; had a display that did color but was kinda awful at it, ...).
I think my parents got rid of it, but I guess by that point it was kinda useless (and to get files onto it, one either needed to use floppies or copy them via HyperTerm and a null-modem cable).
It was at least capable of launching Quake, but the performance was pretty much unusable. A lot of newer software at the time would just immediately crash (that time being roughly the XP era).
The XP-era laptop is getting kinda unusable at this point, but I am half-wondering if an SD-card to laptop-PATA adapter could be an improvement (vs an otherwise annoyingly slow 20GB HDD; like, if I could get a 64GB or 128GB SD card to work, this would be a lot more space).
But I am probably not going to go much bigger than 128GB, as I seem to remember WinXP having a problem with drives over 128GB (presumably the 28-bit LBA limit: 2^28 sectors * 512 bytes = 128 GiB, roughly 137 GB).
If I do so, it might almost make sense to try jumping from WinXP to a 32-bit Linux distro (would just need to find something that can run on a laptop from 2003).
...
But, I guess, granted, they would sell more CPUs if people bought new stuff more often (well, and bought the newest-generation parts, rather than older/cheaper parts). But then again, it's not like I have infinite money, so...
>
>> Absent changing to a more sensible encoding scheme and limiting or
>> removing condition-codes, x86-64 still has this major boat anchor. But,
>> these can't be changed without breaking backwards compatibility (at
>> least, assuming hardware that continues running x86-64 as the native
>> hardware ISA).
> Condition codes were never "that hard" of a problem either in
> pipelining or in operand routing.
It seems they create a path where each ALU instruction may potentially depend on the prior ALU instruction, and where instructions like Jcc need those bits immediately after an ALU instruction, ...
It could be better if, say:
  CCs didn't exist; or,
  CCs were *only* updated by instructions like CMP and similar.
With no CCs, ALU instructions have no implicit dependency on each other and could be evaluated in any order without a visible effect on architectural state.
For a past emulator, I did note that a lot of the CC logic could be skipped by noting cases where a following instruction would fully mask the CC updates from a prior instruction (a rough sketch of this is below). This is possibly asking a bit much of hardware though...
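
To make the "masked CC updates" point a bit more concrete, here is a minimal C sketch of the usual "lazy flags" trick in an interpreter-style emulator (the names lazy_cc, cc_record, cc_materialize are invented for illustration; this isn't from any particular emulator): ALU ops just record their inputs and result, and the flags are only computed when something like Jcc actually reads them.

/* Minimal sketch of lazy condition-code handling; names are hypothetical. */
#include <stdint.h>

typedef enum { CC_OP_ADD32, CC_OP_SUB32 /* , ... */ } cc_op_t;

typedef struct {
    cc_op_t  op;                  /* which operation last wrote the flags */
    uint32_t src1, src2, result;  /* its inputs and result */
} lazy_cc_t;

static lazy_cc_t lazy_cc;

/* Every flag-writing ALU op just records itself; no flag math yet.
   If another flag writer comes along before any reader, this record is
   simply overwritten and the earlier flag computation never runs
   (the "fully masked" case). */
static inline void cc_record(cc_op_t op, uint32_t a, uint32_t b, uint32_t r)
{
    lazy_cc.op = op; lazy_cc.src1 = a; lazy_cc.src2 = b; lazy_cc.result = r;
}

/* Only a consumer (Jcc, SETcc, ADC, PUSHF, ...) forces the flags to exist. */
static uint32_t cc_materialize(void)
{
    uint32_t r = lazy_cc.result, fl = 0;
    if (r == 0)          fl |= 1u << 6;    /* ZF */
    if (r & 0x80000000u) fl |= 1u << 7;    /* SF */
    switch (lazy_cc.op) {
    case CC_OP_ADD32:
        if (r < lazy_cc.src1) fl |= 1u << 0;             /* CF: unsigned carry */
        if (~(lazy_cc.src1 ^ lazy_cc.src2) & (lazy_cc.src1 ^ r) & 0x80000000u)
            fl |= 1u << 11;                              /* OF: signed overflow */
        break;
    case CC_OP_SUB32:
        if (lazy_cc.src1 < lazy_cc.src2) fl |= 1u << 0;  /* CF: borrow */
        if ((lazy_cc.src1 ^ lazy_cc.src2) & (lazy_cc.src1 ^ r) & 0x80000000u)
            fl |= 1u << 11;                              /* OF */
        break;
    }
    return fl;  /* PF/AF omitted for brevity */
}

The hardware analogue would be detecting that the earlier flag write has no readers before the next one overwrites it, which is the part that seems like a lot to ask.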
While my use of a T bit could be argued to be "similar" to CCs, it is different:
  The T bit may only be updated in certain contexts;
  I was able to get by with a 2-cycle latency between updating the T bit and any instructions which use the T bit;
  A similar sort of 2-cycle latency constraint for x86-64 rFLAGS would likely have an adverse effect on performance.
>
>> Though, ironically, most "legacy x86" stuff could probably be served
>> acceptably with emulators.
>
> Ever try to emulate A24? Address bit 24--when we looked at it, it took
> more gates to remove it and put a bit in CPUID so applications could "do
> the right thing" than to simply leave the functionality there.
My past x86 emulator attempts were limited mostly to 32-bit user-mode stuff, so no A20 or A24 wonkiness or similar (at the time I was mostly trying to get simple 32-bit Windows programs working).
If I were to try to emulate a full machine, I would likely switch out the memory load/store handling logic (as function pointers) based on the values of the relevant architectural registers (such as whether paging is enabled or disabled); something like the sketch below.
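
Roughly what I mean, as a C sketch (the cpu_t / memops_t names and layout are just made up for the example; endianness and alignment are glossed over): keep one table of load/store function pointers per mode, and re-select it whenever CR0 (or CR3/EFER, etc.) changes, rather than re-checking the paging mode on every memory access.

#include <stdint.h>

typedef struct cpu cpu_t;

typedef struct {
    uint32_t (*load32 )(cpu_t *cpu, uint32_t vaddr);
    void     (*store32)(cpu_t *cpu, uint32_t vaddr, uint32_t val);
} memops_t;

struct cpu {
    uint32_t        cr0;
    const memops_t *mem;   /* currently selected load/store handlers */
    uint8_t        *ram;   /* guest "physical" memory */
};

/* Flat/unpaged mode: the address maps straight onto guest RAM. */
static uint32_t load32_flat(cpu_t *cpu, uint32_t va)
{ return *(uint32_t *)(cpu->ram + va); }
static void store32_flat(cpu_t *cpu, uint32_t va, uint32_t v)
{ *(uint32_t *)(cpu->ram + va) = v; }

/* Paged mode: translate first (page-table walk / TLB). Stubbed here as an
   identity mapping just to keep the sketch self-contained. */
static uint32_t translate(cpu_t *cpu, uint32_t va)
{ (void)cpu; return va; }
static uint32_t load32_paged(cpu_t *cpu, uint32_t va)
{ return *(uint32_t *)(cpu->ram + translate(cpu, va)); }
static void store32_paged(cpu_t *cpu, uint32_t va, uint32_t v)
{ *(uint32_t *)(cpu->ram + translate(cpu, va)) = v; }

static const memops_t memops_flat  = { load32_flat,  store32_flat  };
static const memops_t memops_paged = { load32_paged, store32_paged };

/* Called whenever CR0 is written: pick the handlers once, up front,
   so the per-access path never re-checks the mode. */
static void cpu_update_memops(cpu_t *cpu)
{
    cpu->mem = (cpu->cr0 & 0x80000000u) ? &memops_paged : &memops_flat;  /* CR0.PG */
}

/* The decode/execute loop then just does: cpu->mem->load32(cpu, ea); */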
Most recent efforts to write an x86 emulator have fizzled relatively quickly though, mostly over concerns that I wouldn't get enough performance to make it worthwhile (and emulating x86 on my x86 PC wouldn't be terribly useful; on a RasPi there are also QEMU and DOSBox, even if the performance sucks).
>
>> If it can't maintain a performance advantage (say, if ARM and RISC-V
>> catch up or exceed the performance possible on higher end x86 chips), it
>> is effectively done.
>
> x86 performance advantage has ALWAYS been in the cubic amounts of cash
> flow running through the FAB to pay the engineering team budgets.
Recent years have mostly seen model numbers advancing faster than any single-threaded performance improvements...
And ARM is catching up.
The RISC-V chips are seemingly a bit further behind, but appear to be advancing up the ladder rather quickly.
Or, "How about bigger AVX?", goes back and forth, AND apparently supporting AVX512 via the cheaper mechanism of doing the operations as multiple parts.
Where, seemingly SIMD going too much wider than 128 bits actually makes stuff worse...
Pretty much my entire adult life, there hasn't been much obvious gain from SIMD going wider than 128 bits, I am inclined to posit that 128 bits is probably near optimal.
And, the advantage of SIMD lies more with subdividing the registers into N elements (without increasing pipeline or register width), rather than trying to gain more elements by pushing registers to bigger sizes.
Personally, I also have a lot more use cases for 4-wide vectors of 16-bit elements than I do for 256 bit vectors.
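
As a small example of the "subdivide the register into lanes" point (this assumes an x86 target with SSE2; the vec4s16 type and function name are made up for the example): a single 128-bit op already covers two 4-element int16 vectors per instruction, so the win comes from the lane subdivision, not from the register getting wider.

#include <stdint.h>
#include <stdio.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

typedef struct { int16_t x, y, z, w; } vec4s16;

/* Saturating add of two pairs of 4x16-bit vectors in one 128-bit op:
   the register is just subdivided into eight 16-bit lanes. */
static void add2_vec4s16(vec4s16 *d, const vec4s16 *a, const vec4s16 *b)
{
    __m128i va = _mm_loadu_si128((const __m128i *)a);   /* a[0] and a[1] */
    __m128i vb = _mm_loadu_si128((const __m128i *)b);   /* b[0] and b[1] */
    _mm_storeu_si128((__m128i *)d, _mm_adds_epi16(va, vb));
}

int main(void)
{
    vec4s16 a[2] = { {  1,  2,  3,  4 }, { 30000, -5,  6, -7 } };
    vec4s16 b[2] = { { 10, 20, 30, 40 }, { 10000,  5, -6,  7 } };
    vec4s16 d[2];
    add2_vec4s16(d, a, b);
    printf("%d %d %d %d | %d %d %d %d\n",
           d[0].x, d[0].y, d[0].z, d[0].w,
           d[1].x, d[1].y, d[1].z, d[1].w);
    return 0;   /* prints: 11 22 33 44 | 32767 0 0 0 */
}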