Re: Computer architects leaving Intel...

Subject : Re: Computer architects leaving Intel...
From : cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups : comp.arch
Date : 28 Aug 2024, 22:25:47
Organisation : A noiseless patient Spider
Message-ID : <vao14g$3jt75$1@dont-email.me>
References : 1 2 3 4 5 6
User-Agent : Mozilla Thunderbird
On 8/28/2024 1:55 AM, Robert Finch wrote:
On 2024-08-27 11:33 p.m., BGB wrote:
On 8/27/2024 6:50 PM, MitchAlsup1 wrote:
On Tue, 27 Aug 2024 22:39:02 +0000, BGB wrote:
>
On 8/27/2024 2:59 PM, John Dallman wrote:
In article <vajo7i$2s028$1@dont-email.me>, tkoenig@netcologne.de (Thomas
Koenig) wrote:
>
Just read that some architects are leaving Intel and doing their own
startup, apparently aiming to develop RISC-V cores of all things.
>
They're presumably intending to develop high-performance cores, since
they have substantial experience in doing that for x86-64. The question
is if demand for those will develop.
>
>
Making RISC-V "not suck" in terms of performance will probably at least
be easier than making x86-64 "not suck".
>
Yet, these people have decades of experience building complex things
that made x86 (also) not suck. They should have the "drawing power" to
get more people with similar experience.
>
The drawback is that they are competing with "everyone else in
RISC-V-land", and starting several years late.
>
Though, if anything, they probably have the experience to know how to make things like the fabled "opcode fusion" work without burning too many resources.
>
>
>
Android is apparently waiting for a new RISC-V instruction set
extension; you can run various Linuxes, but I have not heard
about anyone wanting to do so on a large scale.
>
>
My thoughts for "major missing features" is still:
Needs register-indexed load;
Needs an intermediate size constant load (such as 17-bit sign extended)
in a 32-bit op.
>
Full access to constants.
>
>
That would be better, but is unlikely within the existing encoding constraints.
>
But, say, if one burned one of the remaining unused "OP Rd, Rs, Imm12s" encodings as an Imm17s, well then...
>
There were a few holes in this space. Like, for example, there are no ANDW/ORW/XORW ops with Imm12s, so these spots could be reclaimed and used for such a purpose, treating the Imm12 and Rs as a combined 17-bit field.
>
>
But, arguably, LUI+ADD, or LUI+ADD+LUI+ADD+SLLI+ADD, may not matter as much if one can afford the pattern-matching logic to turn 2 (or 6) operations into a fused operation...
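As an illustrative sketch (Python, with a made-up tuple representation for decoded instructions; not any real front-end's interface), the kind of pattern-matching that collapses a LUI+ADDI pair into a single fused constant-load might look like:

```python
# Sketch: peephole fusion of a RISC-V LUI+ADDI pair into one constant load.
# Instructions are modeled as simple tuples; field names are illustrative.

def sext(value, bits):
    """Sign-extend a 'bits'-wide field."""
    mask = 1 << (bits - 1)
    return (value & (mask - 1)) - (value & mask)

def try_fuse(op1, op2):
    """If op1/op2 form 'LUI rd, hi20' + 'ADDI rd, rd, lo12' on the same
    register, return a fused ('LI', rd, constant) tuple, else None."""
    if (op1[0] == 'LUI' and op2[0] == 'ADDI'
            and op1[1] == op2[1] == op2[2]):
        hi = sext(op1[2], 20) << 12   # LUI places imm20 in bits 31..12
        lo = sext(op2[3], 12)         # ADDI adds a sign-extended imm12
        return ('LI', op1[1], hi + lo)
    return None

# Materializing 0x12345 = 0x12000 + 0x345 in one fused op:
fused = try_fuse(('LUI', 5, 0x12), ('ADDI', 5, 5, 0x345))
```

The same shape of check generalizes (at more cost) to the longer LUI+ADD+LUI+ADD+SLLI+ADD sequences for 64-bit constants.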
>
>
Where, there is a sizeable chunk of constants between 12 and 17 bits,
but not quite as many between 17 and 32 (and 32-64 bits is comparably
infrequent).
>
Except in "math codes".
>
But 64-bit memory reference displacements means one does not have to
even bother to have a strategy of what to do when you need a single
FORTRAN common block to be 74GB in size in order to run 5-decade old
FEM codes.
>
>
I don't assume that RISC-V would be getting a 64-bit FPU immediate anytime soon.
>
>
I could also make a case for an instruction to load a Binary16 value and
convert to Binary32 or Binary64 in an FPR, but this is arguably a bit
niche (but, would still beat out using a memory load).
>
Most of these are covered by something like::
>
     CVTSD   Rd,#1     // 32-bit instruction
>
>
My case, I have:
   FLDCH Imm16f, Rn  //also a 32-bit instruction
Which can cover a significant majority of typical FP constants.
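As a rough illustration of the idea (not the actual FLDCH encoding; Python's struct module happens to support the binary16 format directly), widening a 16-bit FP immediate to a double:

```python
import struct

def fp16_imm_to_double(imm16):
    """Interpret a 16-bit immediate as an IEEE 754 binary16 value and
    widen it to binary64, as a load-FP-constant instruction might do."""
    (value,) = struct.unpack('<e', imm16.to_bytes(2, 'little'))
    return value  # Python floats are binary64

# 0x3C00 is 1.0 in binary16; 0xC000 is -2.0.
one = fp16_imm_to_double(0x3C00)
neg_two = fp16_imm_to_double(0xC000)
```

Since binary16 can represent all small integers, common fractions, and powers of two exactly, a 16-bit immediate of this form does cover a large share of FP constants seen in practice.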
>
>
In RISC-V, one needs to use a memory load, and store in memory using the full 64-bits if one needs the value as "double". This kinda sucks.
>
Though, arguably still not as bad as it was on SH-4 (where constant loading in general was a PITA; and loading a FP constant typically involved multiple memory loads, and an address generation).
>
Eg:
   MOVA    @(PC, Disp8), R3
   FMOV.S  @R3+, FR5
   FMOV.S  @R3+, FR4
AKA: Suck...
>
>
>
Big annoying thing with it, is that to have any hope of adoption, one
needs an "actually involved" party to add it. There doesn't seem to be
any sort of aggregated list of "known in-use" opcodes, or any real
mechanism for "informal" extensions.
>
With the OpCode space already 98% filled there does not need to
be such a list.
>
>
One would still need it if multiple parties want to be able to define an extension independently of each other and not step on the same encodings.
>
>
Well, or it becomes like the file-extension space where there are seemingly pretty much no unused 2 or 3 letter filename extensions.
>
So, for some recent formats I went and used ".GTF" and ".UPI", which while not unused, were not used by anything I had reason to care about (medical research and banks).
>
>
Though, with file extensions and names, at least one can web-search them (which is more than one can do to check whether or not a part of the RISC-V opcode map is used by a 3rd party extension).
>
What provisions have been made don't scale much beyond "specific SoC provides extensions within a block generally provisioned for SoC-specific extensions".
>
>
The closest we have on the latter point is the "Composable Extensions"
extension by Jan Gray, which seems to be mostly that part of the ISA's
encoding space can be banked out based on a CSR or similar.
>
>
Though, bigger immediate values and register-indexed loads do arguably
better belong in the base ISA encoding space.
>
Agreed, but there is so much more.
>
     FCMP    Rt,#14,R19        // 32-bit instruction
     ENTER   R16,R0,#400       // 32-bit instruction
..
>
>
These are likely a bit further down the priority list.
>
High priority cases would likely be things that happen often enough to significantly affect performance.
>
>
As I see it, array loads/stores, and integer constant values in the 12-17 bit range, are common enough to justify this.
>
>
Prolog/Epilog happens once per function, and often may be skipped for small leaf functions, so seems like a lower priority. More so if one lacks a good way to optimize it much beyond the sequence of load/store ops which it would be replacing (and maybe no way to do it much faster than whatever can be moved in a single clock cycle with the available register ports).
>
>
>
At present, I am still on the fence about whether or not to support the
C extension in RISC-V mode in the BJX2 Core, mostly because the encoding
scheme just sucks bad enough that I don't really want to deal with it.
>
>
Realistically, can't likely expect anyone else to adopt BJX2 though.
>
Captain Obvious strikes again.
>
>
This is likely the fate of nearly every hobby class ISA.
>
>
Like, there is seemingly pretty much nothing one can do that other people haven't done already, and often better. It then becomes a question of if it can be personally interesting or maybe useful.
>
Like, even when I am beating RISC-V in terms of performance, it is usually only by 20%-60%, with other cases being closer to break even.
>
>
And, the only times it really pulls strongly ahead are when I try to use it more like a GPU than as a CPU. If anything, it makes more sense for me to try using it like a GPU or NPU ISA, and then leaving CPU stuff more to RISC-V (where people are more likely to care about things like GCC support; and commercially available CPU ASICs).
>
>
And like, even at this sort of task, a BJX2 core running on an FPGA isn't exactly going to be able to match something like an NVIDIA Jetson at running TensorFlow models (and also the Jetson Nano is cheaper than a Nexys A7, ...).
>
>
And, most of the nets I can run are multilayer perceptrons; anything much bigger than perceptron-style nets is too big/slow to be processed in any reasonable amount of time.
>
Being able to compete performance wise at these tasks with a 2003 era laptop or a 700 MHz ARM11 based RasPi, likely doesn't count for much.
The 2003 laptop has x87; the ARM11 theoretically has a decent FPU ISA (VFP2), but FPU performance on the original RasPi seems to be unexpectedly weak.
>
The RasPi is around 9x faster than the BJX2 core at Dhrystone, but this is within the expected margin of error (though slightly less than the 14x clock-speed difference).
>
>
Does at least significantly beat out doing it with RV64G though (at 50MHz), as the lack of "dense FP-SIMD" effectively makes RV64G entirely non-competitive at this task.
>
But, most normal C code, isn't going to make much use of SIMD.
Well, even as much as compilers try to use SIMD, its automatic vectorization is at best "weak", often still falling well short of manually writing SIMD code (but, with the lack of any "common denominator" that works transparently across targets and compilers).
>
>
...
>
>
>
>
Though, bigger issue might be how to make it able to access hardware
devices (seems like part of the physical address space is used as a
PCI Config space, and would need to figure out what sorts of devices the
Linux kernel expects to be there in such a scenario).
>
It is reasons like this that cause My 66000 to have four 64-bit address
spaces {DRAM, MMI/O, configuration, ROM}. PCIe MMI/O space can easily
exceed 42-bits before one throws MR-IOV at the problem. Configuration
headers in My 66000 contain all the information CPUID has in x86-land.
>
Presumably, one would mimic the memory map of whatever SiFive device one is claiming to be for sake of Linux Kernel compatibility. From what I could gather, not all of them have the same physical memory map (and it doesn't seem well documented).
>
Then, one has to know what hardware interfaces one needs to support (there is likely to be a specific hardware list for a particular SoC that the kernel would be built to expect).
>
Well, or go the route of trying to build the kernel themselves, and then figuring out which drivers Linux supports and which would be easiest to implement hardware interfaces for, ...
>
>
>
Though, at present, for my project it would probably be less effort to make TestKern fake the Linux syscalls for RISC-V mode than to make the BJX2 core pretend to be a SiFive SoC.
>
But, then again, the bigger problem (for practical use) isn't so much its lack of ability to run Linux software, but that it only runs at 50 MHz (and has nowhere enough "go fast" magic to make 50 MHz "not slow").
>
...
>
>
A bit much to expect a low-cost FPGA to run anything very fast. Performance is not everything though. Industry maturing?
 
There are apparently some things FPGAs do very well:
Software defined radio: Whole thing of toggling pins fast enough to modulate/demodulate things;
Spiking neural nets: You can map the NN to state machines and FF's, and there are few "other" ways to handle spiking neural nets efficiently.
Sadly, fast CPU core isn't so easily pulled off (more so with FPGAs at the slowest speed grade).

Going with a variable length instruction set for the latest design, even though I touted there were issues with such in an FPGA. It supports 6, 14, and 30-bit immediates for many instructions. The exceptions are the load immediate and compares, which can handle 8, 16, 32, and 64-bit constants. That includes float loads and compares too. The design is a bit more complex than a RISC machine. It has scaled-indexed addressing, and variable sized displacement addressing too (6, 14, and 22-bit).
 
I ended up mostly with:
   5/6, 9/10, 10/12, 17, 33, 57/64
Why 17 is magical:
   Expresses both Int16 and UInt16 range;
Why 33 is magical:
   Expresses both Int32 and UInt32 range;
5/6: Same size as a register field;
9/10: Register field plus 4 bits.
10/12: Two register fields glued together.
The hit rate improvement of going from 5 to 9 was more than from going from 9 to 12.
Similarly, the steps from 15->16->17 are fairly significant, but 17->18 gains very little. Mostly has to do with the high prevalence of 16-bit magic numbers (so, if one covers -32768 .. 65535, one covers the entire 16-bit range and all its associated magic numbers; much rarer to see values just outside the 16-bit range though).
The 33 size is similar.
Though, this does not apply to branches, which don't obey the same "magic numbers" rules.
But, in general, in many other cases immediate fields of '(2^n)+1' bits seem to be points of "unusually good" hit-rate.
Though, for 5 and 9 bits, the "local maximum" is for unsigned values, whereas for 17 and 33 it is for signed.
Say, 5u gets a higher hit rate than 5s, but 6s often beats 6u. Similar for 9u vs 9s, and 10u and 10s, but there is some variability here.
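The "17 is magical" point can be sanity-checked with a couple of small helpers (illustrative only): an n-bit sign-extended field covers both the (n-1)-bit signed range and the (n-1)-bit unsigned range at once.

```python
def fits_signed(value, bits):
    """True if 'value' fits in a 'bits'-wide sign-extended immediate."""
    return -(1 << (bits - 1)) <= value < (1 << (bits - 1))

def fits_unsigned(value, bits):
    """True if 'value' fits in a 'bits'-wide zero-extended immediate."""
    return 0 <= value < (1 << bits)

# A 17-bit signed field spans -65536..65535: the whole Int16 range
# (-32768..32767) and the whole UInt16 range (0..65535) together.
covers_int16 = all(fits_signed(v, 17) for v in (-32768, 32767))
covers_uint16 = all(fits_signed(v, 17) for v in (0, 65535))
misses = fits_signed(65535, 16)  # a 16-bit signed field cannot hold 65535
```

Running such checks over the constants in a compiled corpus is how the hit-rate figures above could be measured.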
For normal load/store displacements, the magic value seems to be ~ 4kB.
For function calls, ~ 1MB seems fairly good (realistically, as long as it is larger than the size of ".text" it is good).
For local branches, ~ 8 or 16kB (this is a guess, but seems to be larger than 4K but less than 64K).
Where, say, most structs and stack-frames are smaller than 4K, and most functions are smaller than ~ 16K.
But, looks like to get it to ~ 100%, one can go for 8MB function-calls and 64K local branches.
When one crosses the 4K range for Load/Store, then it becomes better to be able to support negative displacements, than to support 8K, at least in my testing.
Though, one wants a bigger limit for a global pointer.
I eventually ended up adding an instruction that did:
   LEA.Q (GBR, Disp16), Rn
Mostly because this gives 512K of reach across the '.text' and '.data' sections, and could reach most global arrays (albeit with the tradeoff of needing to align global arrays to at least an 8-byte boundary).
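The 512K figure follows if the Disp16 is scaled by the 8-byte element size (an assumption on my part, though it would also explain the 8-byte alignment requirement on global arrays); as arithmetic:

```python
# A 16-bit displacement scaled by 8 bytes (hence the assumed 8-byte
# alignment requirement on global arrays) spans 2^16 * 8 bytes of reach.
DISP_BITS = 16
SCALE = 8  # assumed: LEA.Q scales Disp16 by the 8-byte element size

reach = (1 << DISP_BITS) * SCALE  # total bytes addressable from GBR
span_kib = reach // 1024
```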

The project includes an i386 compatible core, which is progressing along. Lots of fun has been had sorting out various bugs. The goal is to incorporate multiple dissimilar (legacy) cores in the same system. Working out the details of an architecture call instruction.
 
It seems i386 can be done, but making an i386 implementation in an FPGA that doesn't perform horribly seems like another matter.
"Can run Doom poorly, but sorta playable" is something, but still not great. AFAIK, this is the general limit of i386 on FPGAs at present.

I have thought that RISC-V is decent, although missing some of the more CISC-like features like scaled-indexed-addressing. I discovered a while ago I am a bit of a fan of the obscene bordering on the beautiful. Uglier cores are more interesting.
 
By my current estimates, some of the limitations RISC-V imposes have around a 40% overhead in scalar performance.
As noted, most of this is concentrated in a few areas.
For SIMD heavy code, the delta is higher, but OTOH RISC-V also has the 'V' extension, which could change matters. Downside is that 'V' seems to need dedicated vector registers, and is some sort of weirdly designed variable-length SIMD.
As-is, there would be no way to directly support the 'V' extension on the BJX2 core short of expanding it internally to 128 GPRs (or adding a separate register space for the 'V' registers). And, if I did, would likely just end up limiting the vectors to 128 bits or similar.
The extension is kinda weird in that it puts a lot of the information that would normally be encoded into the instruction into CSRs and uses another instruction to set the requested vector layout. This saves encoding space, but does mean that one can effectively only use a single vector type at a time, and will need to reset the vector layout any time such code is reached and it isn't in a known state (and probably reset the state following a function call if the relevant CSRs aren't saved as part of the ABI, ...).
Well and it also looks like, if one supports both 128-bit vectors and Binary16, they will also need to support 8x Binary16 in a vector (my current SIMD unit being 4x FP16|FP32).
Well, and a whole lot of other stuff...
But, yeah, if I were to come up with a new high-level ISA design, would likely prioritize:
   32/64/96 bit VLE;
   32 or 64 GPRs;
   (9|10)/17/33/64 bit immediate sizes;
   Likely has predicated instructions.
By partial necessity:
   LdOp and OpSt for basic ALU instructions.
Addressing modes:
   (Rb, Disp): High priority
   (Rb, Ri): Fixed-scale, Moderately high priority
   (Rb, Ri*Sc, Disp): Variable Scale, Lower priority.
May drop the WEX idea, in favor of superscalar.
Although superscalar asks more of the CPU, it does have the merit of freeing up some encoding space and allowing for better binary compatibility.
Maybe, possible thought:
   ppZZ-ZZZZ-ZZnn-nnnn ssss-ssZZ-ZZZZ-ZZZZ  //2R (Src, Dst)
   ppZZ-ZZZZ-ZZnn-nnnn ssss-sstt-tttt-ZZZZ  //3R
   ppZZ-ZZZZ-ZZnn-nnnn ssss-ssii-iiii-iiii  //3RI (Imm10)
   ppZZ-ZZZZ-Zinn-nnnn iiii-iiii-iiii-iiii  //2RI (Imm17)
   ppZZ-ZZZZ-Ziii-iiii iiii-iiii-iiii-iiii  //Imm23 (Branch)
64b:
   01ZZ-ZZZZ-Ziii-iiii iiii-iiii-iiii-iiii  //J23
   ppZZ-ZZnn-nnnn-ZZZZ ssss-ssii-iiii-iiii  //3RI (Imm33)
   01ZZ-ZZZZ-ZZZZ-ZZZZ ZZZZ-ZZuu-uuuu-ZZZZ  //J23
   ppZZ-ZZnn-nnnn-ZZZZ ssss-sstt-tttt-ZZZZ  //4R (Special 3R)
   01ZZ-ZZZZ-ZiZZ-ZZZZ iiii-iiii-iiii-iiii
   ppZZ-ZZnn-nnnn-ZZZZ ssss-ssZZ-ZZZZ-ZZZZ  //3RI Imm17s (Special 3R)
   01ZZ-ZZZZ-ZiZZ-ZZZZ iiii-iiii-iiii-iiii
   ppZZ-ZZnn-nnnn-ZZZZ ssss-sstt-tttt-ZZZZ  //4RI Imm17s (Special 3R)
     (Likely Ld/St)
96b:
   01ZZ-Ziii-iiii-iiii iiii-iiii-iiii-iiii  //J27
   01ZZ-Ziii-iiii-iiii iiii-iiii-iiii-iiii  //J27
   ppZZ-ZZnn-nnnn-ZZZZ ssss-ssii-iiii-iiii  //3RI (Imm64)
   ...
pp:
   00=Normal 32-bit op
   01=64/96 bit ops (J-Prefix).
   10=Predicated True
   11=Predicated False
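A sketch of how a decoder might classify words by the 2-bit 'pp' field (assuming, purely for illustration, that 'pp' sits in the two most significant bits of the 32-bit word, matching the left-to-right bit diagrams above):

```python
# Sketch: classifying a 32-bit instruction word by the proposed 2-bit
# 'pp' field, assumed here to occupy the two most significant bits.

PP_CLASSES = {
    0b00: "normal 32-bit op",
    0b01: "64/96-bit op (J-prefix)",
    0b10: "predicated-true",
    0b11: "predicated-false",
}

def classify(word32):
    pp = (word32 >> 30) & 0b11
    return PP_CLASSES[pp]

kind = classify(0x8000_0000)  # pp=0b10
```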
Likely the J-Prefixes would have their own encoding spaces, which mostly define how they combine-with or extend the following instruction. Additionally, they may provide additional opcode space.
It is possible that the high 3 opcode bits encode a Config Block, say:
   00z: 3R ops (10b opcode)
   010: 3RI ops, ALU (5b opcode, Imm10u)
   011: 3RI ops, Ld/St (5b opcode, Imm10s)
   10z: ?
   111: 2RI Imm17 | Imm23 (or J23/J27, 4b opcode)
Likely layout for Imm17 block:
   0000: MOV (Load Imm17s to Reg)
   0001: ADD (Imm17s)
   0010: SHORI & FLDCH (Imm16u)
   0011: -
   ...
   1110: BRA (Imm23)
   1111: BSR (Imm23)
There is no opcode field if interpreted as a J27 prefix.
As a J23 prefix, could give additional opcode or combiner metadata.
   Possibly, most of the rest of the prefix space could be reserved.
Say:
   J23_0000 + 3RI: Expand Imm10 to Imm33
   J23_0000 + 3R: 3RI Imm23s (Imm replaces Rt)
   J23_0001 + 3R: 4RI Imm23s (Imm = Special)
   J23_0010 + 3R: 3RI Imm17s (Imm replaces Rt, + 6 opcode bits)
   J23_0011 + 3R: 4RI Imm17s (+ 6 opcode bits)
   J23_0100 + 3R: 4R
Internally, one can assume that ops map to a 3R1W space:
   Rs, Rt, Ru, Rn
Where, for 3R ops, Ru is aliased to Rn or ZR.
   Load:    Rs, Rt, ZR, Rn
   Store:   Rs, Rt, Rn, ZR
   Generic: Rs, Rt, Rn, Rn
In normal cases, Imm replaces Rt.
   Load:    Rs, Imm, ZR, Rn
   Store:   Rs, Imm, Rn, ZR
   Generic: Rs, Imm, Rn, Rn
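The 3R1W port mapping above can be written out as (illustrative Python, with string register names and "ZR" standing in for the zero register):

```python
# Sketch of the assumed 3R1W operand mapping: every op nominally reads
# three ports (Rs, Rt, Ru) and writes one (Rn), with unused ports
# aliased to the zero register (ZR) per the table above.
ZR = "ZR"

def map_ports(kind, rs, rt, rn):
    """Return the (Rs, Rt, Ru, Rn) port assignment for an op class."""
    if kind == "load":       # Load:    Rs, Rt, ZR, Rn
        return (rs, rt, ZR, rn)
    if kind == "store":      # Store:   Rs, Rt, Rn, ZR (Rn is store data)
        return (rs, rt, rn, ZR)
    return (rs, rt, rn, rn)  # Generic: Ru aliased to Rn
```

For immediate forms, the Rt slot would carry the decoded immediate instead of a register index, as noted above.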
Primary branch-types would be:
   CMPcc Rs,Rt + BT/BF
   Bcc Rs, Disp  (Rs CMP ZR)
With:
   Bcc Rs, Rt, Disp
Likely relegated to a 64-bit encoding.
...

 
