Re: Stealing a Great Idea from the 6600

Subject: Re: Stealing a Great Idea from the 6600
From: cr88192 (at) *nospam* gmail.com (BGB)
Newsgroups: comp.arch
Date: 21 Apr 2024, 09:17:39
Organization: A noiseless patient Spider
Message-ID : <v02eij$6d5b$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9 10 11 12
User-Agent : Mozilla Thunderbird
On 4/20/2024 5:03 PM, MitchAlsup1 wrote:
BGB wrote:
 
On 4/20/2024 12:07 PM, MitchAlsup1 wrote:
John Savard wrote:
>
On Sat, 20 Apr 2024 01:09:53 -0600, John Savard
<quadibloc@servername.invalid> wrote:
>
>
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
>
That also happened to the captain of the _Titanic_.
>
Concer-tina-tanic !?!
>
 
Seems about right.
Seems like a whole lot of flailing with designs that seem needlessly complicated...
  
Meanwhile, I have looked around and noted:
In some ways, RISC-V is sort of like MIPS with the field order reversed,
 They, in effect, Little-Endian-ed the fields.
 
Yeah.

and (ironically) actually smaller immediate fields (MIPS was using a lot of Imm16 fields, whereas RISC-V mostly used Imm12).
 Yes, RISC-V took a step back with the 12-bit immediates. My 66000, on
the other hand, only has 12-bit immediates for shift instructions--
allowing all shifts to reside in one Major OpCode; the rest (inst[31]=1)
have 16-bit immediates (universally sign extended).
 
I had gone further and used mostly 9/10 bit fields (mostly expanded to 10/12 in XG2).
I don't really think this is a bad choice in a statistical sense (as it so happens, most of the immediate values can fit into a 9-bit field, without going too far into "diminishing returns" territory).
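The "most immediates fit in 9 bits" claim is the kind of thing one would check by surveying compiler output. A minimal sketch of such a survey in C (the sample values in the test are made up, purely to illustrate the counting):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Does a value fit an unsigned n-bit immediate field? */
static bool fits_unsigned(int64_t v, int bits) {
    return v >= 0 && v < ((int64_t)1 << bits);
}

/* Count how many immediates in a sample fit a given field width;
   running this over real compiler output is how one would test the
   "mostly fits in 9 bits" claim statistically. */
static size_t count_fitting(const int64_t *vals, size_t n, int bits) {
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (fits_unsigned(vals[i], bits)) k++;
    return k;
}
```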
Ended up with some inconsistency when expanding to 10 bits:
   Displacements went 9u -> 10s
   ADD/SUB: 9u/9n -> 10u/10n
   AND: 9u -> 10s
   OR,XOR: 9u -> 10u
AND was initially 9u->10u (like OR and XOR), but was changed over at the last minute:
   Negative masks were far more common than 10-bit masks;
   At the moment, the change didn't seem to break anything;
   I didn't really have any other encoding space to put this.
     The main "sane" location to put it was already taken by RSUB;
     The Imm9 space is basically already full.
With OR and XOR, negative masks are essentially absent, so switching these to signed would not make sense, even though this breaks the symmetry between AND/OR/XOR.
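The AND-vs-OR/XOR asymmetry is easy to see in C: the common "clear the low bits" AND masks are negative values once sign-extended, while typical OR/XOR masks are small positive constants. A minimal sketch (the fits_imm10* helpers are hypothetical, just modeling the two extension rules):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helpers modeling the two immediate-extension rules:
   can a 64-bit constant be encoded in a 10-bit immediate field? */
static bool fits_imm10u(int64_t v) { return v >= 0 && v <= 1023; }
static bool fits_imm10s(int64_t v) { return v >= -512 && v <= 511; }

/* "Clear the low 8 bits" uses the mask ~0xFF, which as a sign-extended
   64-bit value is -256: encodable as Imm10s, but not as Imm10u. */
static const int64_t clear_low8 = ~(int64_t)0xFF;   /* == -256 */
```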

But, seemed to have more wonk:
A mode with 32x 32-bit GPRs; // unnecessary
A mode with 32x 64-bit GPRs;
Apparently a mode with 32x 32-bit GPRs that can be paired to 16x 64-bits as needed for 64-bit operations?...
 Repeating the mistake I made on Mc 88100....
 
I had seen a video talking about the Nintendo 64, which said that the 2x paired 32-bit register mode was used more often than the native 64-bit mode: the native 64-bit mode was slower, as apparently the CPU couldn't fully pipeline the 64-bit ops, so using it came at a performance hit (vs using the CPU to run glorified 32-bit code).

Integer operations (on 64-bit registers) that give UB or trap if values are outside of signed Int32 range;
 Isn't it just wonderful ??
 
No direct equivalent in my case, nor any desire to add these.
Preferable, I think, if the behavior of instructions is consistent across implementations; though, OTOH, I can't claim a strict 1:1 match between my Verilog implementation and emulator, but at least I try to keep things consistent.
Though, things fall short of strict 100% consistency between the Verilog implementation and emulator (usually in cases where the emulator will trap, but the Verilog implementation will "do whatever").
Though, in part, this is because the emulator serves the secondary purpose of linting the compiler output.
Though, partly it is a case of, not even trapping is entirely free.

Other operations that sign-extend the values but are ironically called "unsigned" (apparently, similar wonk to RISC-V by having sign-extended Unsigned Int);
Branch operations are bit-sliced;
....
 
I had preferred a different strategy in some areas:
   Assume non-trapping operations by default;
 Assume trap/"do the expected thing" under a user-accessible flag.
Most are defined in ways that I feel are sensible.
For ALU this means one of:
   64-bit result;
   Sign-extended from 32-bit result;
   Zero extended from 32-bit result.

   Sign-extend signed values, zero-extend unsigned values.
 Another mistake I made in Mc 88100.
 Do you sign extend the 16-bit displacement on an unsigned LD ??
 
In my case; for the Baseline encoding, Ld/St displacements were unsigned only.
For XG2, they are signed. It was a tight call, but the sign-extended option won out by an admittedly thin margin.
Granted, this means that the Load/Store ops with Disp5u/Disp6s encodings are mostly redundant in XG2, but they are the only way to directly encode negative displacements in Baseline+XGPR (in pure Baseline, negative Ld/St displacements being N/E).
But, as for values in registers, I personally feel that my scheme (as a direct extension of the scheme that C itself seems to use) works better than the one used by MIPS and RISC-V, which seems needlessly wonky with a bunch of edge cases (that end up ultimately requiring the ISA to care more about the size and type of the value rather than less).
Then again, x86-64 and ARM64 went the other direction (always zero extending the 32-bit values).
Though, it seems like a case where spending more in one area can save cost in others.

Though, this is partly the source of some operations in my case assuming 33-bit sign-extended values: these can represent both the signed and unsigned 32-bit ranges.
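The "33 bits covers both ranges" observation is easy to check numerically: a 33-bit sign-extended value spans [-2^32, 2^32-1], which contains both the signed and unsigned 32-bit ranges. A minimal sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* Does a 64-bit value fit in a 33-bit sign-extended field?
   The representable range is [-2^32, 2^32 - 1]. */
static bool fits_s33(int64_t v) {
    return v >= -(1LL << 32) && v <= (1LL << 32) - 1;
}
```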
 These are some of the reasons My 66000 is 64-bit register/calculation only.
 
It is a tradeoff.
Many operations are full 64-bit.
Load/Store and Branch displacements have tended to be 33 bit to save cost over 48 bit displacements (with a 48-bit address space, with 16-bits for optional type-tags or similar).
Though, this does theoretically add a penalty if "size_t" or "long" or similar is used as an array index (rather than "int" or smaller), since in this case the compiler will need to fall back to ALU operations to perform the index operation (similar to what typically happens for array indexing on RISC-V).
Mostly not a huge issue, as pretty much all the code seems to use 'int' for array indices.
Optionally, can enable the use of 48-bit displacements, but not really worth much if they are not being used (similar issue for the 96-bit addressing thing).
Even 48-bits is overkill when one can fit the entirety of both RAM and secondary storage into the address space.
Kind of a very different situation from 16-bit days, where people were engaging in wonk to try to fit in more RAM than they had address space...
Well, nevermind a video where a guy managed to get a 486 PC working with no SIMMs, only some on-board RAM on the MOBO and some ISA RAM-expansion cards (apparently intended for the 80286).
Apparently he was getting Doom framerates (in real-time) almost on-par with what I am seeing in Verilog simulations (roughly 11 seconds per frame at the moment; simulation running approximately 250x slower than real-time).

One could argue that sign-extending both could save 1 bit in some cases. But, this creates wonk in other cases, such as requiring an explicit zero extension for "unsigned int" to "long long" casts; and more cases where separate instructions are needed for Int32 and Int64 cases (say, for example, RISC-V needed around 4x as many Int<->Float conversion operators due to its design choices in this area).
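The cast wonk can be shown directly in C: if a 32-bit "unsigned int" were held sign-extended in a 64-bit register, widening it to "long long" would need an explicit re-zeroing of the high bits, whereas under the sign/zero-by-type scheme the cast is a register-to-register no-op. A sketch of the fix-up such an ISA would have to emit:

```c
#include <stdint.h>

/* 'reg' models a 64-bit register holding a 32-bit unsigned value that
   was kept sign-extended.  Recovering the actual unsigned value needs
   an explicit zero-extension (one extra instruction in such an ISA). */
static uint64_t zext_fixup(int64_t reg) {
    return (uint64_t)(uint32_t)reg;
}
```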
 It also gets difficult when you consider EADD Rd,Rdouble,Rexponent ??
is it a FP calculation or an integer calculation ?? If Rdouble is a
constant is the constant FP or int, if Rexponent is a constant is it
double or int,..... Does it raise FP overflow or integer overflow ??
 
Dunno, neither RISC-V nor BJX2 has this...

Say:
   RV64:
     Int32<->Binary32, UInt32<->Binary32
     Int64<->Binary32, UInt64<->Binary32
     Int32<->Binary64, UInt32<->Binary64
     Int64<->Binary64, UInt64<->Binary64
   BJX2:
     Int64<->Binary64, UInt64<->Binary64
   My 66000:
     int64_t  -> { uint64_t, float,   double }
     uint64_t -> {  int64_t, float,   double }
     float    -> { uint64_t, int64_t, double }
     double   -> { uint64_t, int64_t, float  }
 
I originally just had two instructions (FLDCI and FSTCI), but gave in and added more, because, say:
   MOV       0x8000000000000000, R3
   TEST.Q    R4, R3
   SHLD.Q?F  R4, -1, R4
   FLDCI     R4, R2
   FADD?F    R2, R2, R2
Is more wonk than ideal...
Technically, also the logic (for the unsigned variants) had already been added to the CPU core for sake of RV64G.
Technically, the logic for Int32 and Binary32 cases exists, but I feel less incentive to add them for sake of:
All the scalar math is being done as Binary64;
With the sign/zero extension scheme, separate 32-bit forms don't add much.

With the UInt64 case mostly added because otherwise one needs a wonky edge case to deal with this (but it is rare in practice).
 
The separate 32-bit cases were avoided by tending to normalize everything to Binary64 in registers (with Binary32 only existing in SIMD form or in memory).
 I saved LD and ST instructions by leaving float 32-bits in the registers.
 
I had originally gone with just using a 32-bit load/store, along with a (Binary32<->Binary64) conversion instruction.
Eventually went and added FMOV.S as a combined Load/Store and Convert, mostly because MOV.L+FLDCF was not ideal for performance in programs like Quake (where "float" is not exactly a rarely used type).
On a low-cost core, the most sensible option is to fall back to explicit convert (at least, on cores that can still afford an FPU).
But, as I see it, LD/ST with built in convert is likely a cheaper option than making every other FPU related instruction need to deal with every floating-point format.

Annoyingly, I did end up needing to add logic for all of these cases to deal with RV64G.
 No rest for the wicked.....
 
It is a bit wonky, as I dealt with the scalar Binary32 ops for RV mostly by routing them through the logic for the SIMD ops. At least as far as most code should be concerned, it is basically the same (even if it does technically deviate from the RV64 spec, which defines the high bits of the register as encoding a NaN).
Technically does allow for a very lazy FP-SIMD extension to RV64G (doesn't even require adding any new instructions...).

Currently no plans to implement RISC-V's Privileged ISA stuff, mostly because it would likely be unreasonably expensive.
 The sea of control registers or the sequencing model applied thereon ??
My 66000 allows access to all control registers via memory mapped I/O space.
 
That, and the need for 3+ copies of the register file (for each operating mode), and the need for a hardware page-table walker, ...

It is in theory possible to write an OS to run in RISC-V mode, but it would need to deal with the different OS-level and hardware-level interfaces (in much the same way as I needed to use a custom linker script for GCC, as my stuff uses a different memory map from the one GCC had assumed; namely, RAM starting at the 64K mark rather than at the 16MB mark).
  
In some cases in my case, there are distinctions between 32-bit and 64-bit compare-and-branch ops. I am left thinking this distinction may be unnecessary, and one may only need 64 bit compare and branch.
 No 32-bit stuff, thereby no 32-bit distinctions needed.
 
In the emulator, the current difference ended up being mostly that the 32-bit version checks whether the 32-bit and 64-bit versions would give a different result, faulting if so, since this generally means that there is a bug elsewhere (such as other code producing out-of-range values).
 Saving vast amounts of power {{{not}}}
 
For the Verilog version, the option is more like:
   Just always do the 64-bit version.
Nevermind that it has wasted 1 bit of encoding entropy on being able to specify 32 and 64 bit compare, in cases when it doesn't actually matter...
It wasn't until some time later (after originally defining the encodings), that I started to realize how much it didn't matter.

For a few newer cases (such as the 3R compare ops, which produce a 1-bit output in a register), I had only defined 64-bit versions.
 Oh what a tangled web we.......
 
In "other ISA", these would be given different names:
   SLT, SLTU, SLTI, SLTIU, ...
But, I ended up with:
   CMPQEQ Rm, Ro, Rn
   CMPQNE Rm, Ro, Rn
   CMPQGT Rm, Ro, Rn
   CMPQGE Rm, Ro, Rn
   CMPQEQ Rm, Imm5u/Imm6s, Rn
   CMPQNE Rm, Imm5u/Imm6s, Rn
   CMPQGT Rm, Imm5u/Imm6s, Rn
   CMPQLT Rm, Imm5u/Imm6s, Rn
Though, SLT and CMPQGT are basically the same operation, just conceptually with the inputs flipped.
The Imm5u/Imm6s (5u in Baseline, 6s in XG2) forms differ slightly in that one can't flip the arguments, but the difference between GT and GE is subtracting 1 from the immediate...
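The GT/GE immediate trick is just the usual predicate identity x >= k iff x > k-1 (valid so long as k-1 still fits the immediate field). A minimal sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* CMPQGT with an immediate; GE is synthesized by biasing the
   immediate down by one (fine while k-1 stays in encodable range). */
static bool cmpqgt_imm(int64_t x, int64_t k) { return x > k; }
static bool cmpqge_imm(int64_t x, int64_t k) { return cmpqgt_imm(x, k - 1); }
```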
I am also left deciding if the modified XG2 jumbo prefix rules (that hacked F2 block instructions to Imm64) should be applied to the F0 block to effectively extend Imm29s to Imm33s.

One could just ignore the distinction between 32 and 64 bit compare in hardware, but had still burnt the encoding space on this. In a new ISA design, I would likely drop the existence of 32-bit compare and use exclusively 64-bit compare.
 
In many cases, the distinction between 32-bit and 64-bit operations, or between 2R and 3R cases, had ended up less significant than originally thought (and now have ended up gradually deprecating and disabling some of the 32-bit 2R encodings mostly due to "lack of relevance").
 I deprecated all of them.
 
The vast majority of the 2R ops are things like "Convert A into B" or similar.
But:
   ADD Rm, Rn
Is kinda moot:
   ADD Rn, Rm, Rn
Does the same thing.
Though:
   MOV     Rm, Rn
   EXTS.L  Rm, Rn
   EXTU.L  Rm, Rn
Could almost be turned into:
   ADD     Rm, 0, Rn
   ADDS.L  Rm, 0, Rn
   ADDU.L  Rm, 0, Rn
Nevermind any potential wonk involved to maintain a 1 cycle latency.
And, given the large numbers of these instructions in my compiler's output, MOV or EXTS.L dropping to a 2-cycle latency would have a fairly obvious impact on performance.

Though, admittedly, part of the reason for a lot of separate 2R cases existing was that I had initially had the impression that there may have been a performance cost difference between 2R and 3R instructions. This ended up not really the case, as the various units ended up typically using 3R internally anyways.
 
So, say, one needs an ALU with, say:
   2 inputs, one output;
you forgot carry, and inversion to perform subtraction.
   Ability to bit-invert the second input
     along with inverting carry-in, ...
   Ability to sign or zero extend the output.
 So, My 66000's integer adder has 3 carry inputs, and I discovered a way to
perform these that takes no more gates of delay than the typical 1-carry-in
64-bit integer adder. This gives me a = -b - c; for free.
 
The ALU design in my case does not support inverting arbitrary inputs, only doing ADD/SUB, in various forms.
The Lane 1 ALU also does CMP and a bunch of CONV stuff, whereas the Lane 2/3 ALUs are more minimal.

So, say, operations:
   ADD / SUB (Add, 64-bit)
   ADDSL / SUBSL (Add, 32-bit, sign extend)  // nope
   ADDUL / SUBUL (Add, 32-bit, zero extend)  // nope
   AND
   OR
   XOR
   CMPEQ                                     // 1 ICMP inst
   CMPNE
   CMPGT (CMPLT implicit)
   CMPGE (CMPLE implicit)
   CMPHI (unsigned GT)
   CMPHS (unsigned GE)
....
 
Where, internally, compare works by performing a subtract and then producing a result based on some status bits (Z,C,S,O). As I see it, ideally these bits should not be exposed at the ISA level though (much pain and hair results from the existence of architecturally visible ALU status-flag bits).
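A sketch of that internal scheme: one subtract produces Z/C/S/O, and every compare result is a small function of those bits, with nothing flag-like surviving into the ISA. The bit derivations below are the standard ones for subtraction, not anything BJX2-specific:

```c
#include <stdint.h>
#include <stdbool.h>

/* Status bits of an internal 64-bit subtract a - b. */
typedef struct { bool z, c, s, o; } Flags;

static Flags sub_flags(uint64_t a, uint64_t b) {
    uint64_t r = a - b;
    Flags f;
    f.z = (r == 0);
    f.c = (a < b);                /* borrow out */
    f.s = ((int64_t)r < 0);       /* sign of result */
    /* signed overflow: operand signs differ AND result sign
       differs from the first operand's sign */
    f.o = (((a ^ b) & (a ^ r)) >> 63) & 1;
    return f;
}

static bool cmp_eq(uint64_t a, uint64_t b) { return sub_flags(a, b).z; }
static bool cmp_hs(uint64_t a, uint64_t b) { return !sub_flags(a, b).c; } /* unsigned >= */
static bool cmp_ge(uint64_t a, uint64_t b) {                              /* signed >= */
    Flags f = sub_flags(a, b);
    return f.s == f.o;
}
```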
 I agree that these flags should not be exposed through ISA; and I did not.
On the other hand multi-precision arithmetic demands at least carry {or
some other means which is even more powerful--such as CARRY.....}
 
Yeah...
I just sorta carried over the same old ADDC/SUBC instructions from SH.
They never got upgraded to 3R forms either.

Some other features could still be debated though, along with how much simplification could be possible.
 
If I did a new design, would probably still keep predication and jumbo prefixes.
 I kept predication but not the way most predication works.
My work on Mc 88120 and K9 taught me the futility of things in the
instruction stream that provide artificial boundaries. I have a suspicion
that if you have the FPGA capable of allowing you to build a 8-wide machine, you would do the jumbo stuff differently, too.
 
Probably.
When I considered a WEX-6W design, a lot of stuff for how things work in 2W or 3W configurations did not scale. I had considered a different way for how bundling would work and how prefixes would work, and effectively broke the symmetry between scalar execution and bundled execution.
The idea for WEX-6W was quickly abandoned when it started to become obvious that a single 6-wide core would be more expensive than two 3-wide cores (at least, absent more drastic measures like partitioning the register space and/or eliminating the use of register forwarding).
In effect, it likely would have either ended up looking like a more conventional "true" VLIW; or so expensive that it blows out the LUT budget on the XC7A100T.

Explicit bundling vs superscalar could be argued either way, as superscalar isn't as expensive as initially thought, but in a simpler form is comparably weak (the compiler has an advantage that it can invest more expensive analysis into this, reorder instructions, etc; but this only goes so far as the compiler understands the CPU's pipeline,
 Compilers are notoriously unable to outguess a good branch predictor.
 
Errm, assuming the compiler is capable of things like general-case inlining and loop-unrolling.
I was thinking of simpler things, like shuffling operators between independent (sub)expressions to limit the number of register-register dependencies.
Like, in-order superscalar isn't going to do crap if nearly every instruction depends on every preceding instruction. Even pipelining can't help much with this.
The compiler can shuffle the instructions into an order to limit the number of register dependencies and better fit the pipeline. But, then, most of the "hard parts" are already done (so it doesn't take much more for the compiler to flag which instructions can run in parallel).
Meanwhile, a naive superscalar may miss cases that could be run in parallel, if it is evaluating the rules "coarsely" (say, evaluating what is safe or not safe to run things in parallel based on general groupings of opcodes rather than the rules of specific opcodes; or, say, false-positive register alias if, say, part of the Imm field of a 3RI instruction is interpreted as a register ID, ...).
Granted, seemingly even a naive approach is able to get around 20% ILP out of "GCC -O3" output for RV64G...
But, the GCC output doesn't seem to be quite as weak as some people are claiming either.
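The coarse-vs-exact point can be made concrete: a naive in-order pairing rule only has to check that the second instruction neither reads nor clobbers the first one's destination; being conservative about what counts as a register field is exactly where the false positives come from. A toy sketch (the Inst struct and rule set are hypothetical simplifications):

```c
#include <stdbool.h>

/* Hypothetical decoded 3R instruction: rd <- f(rs1, rs2). */
typedef struct { int rd, rs1, rs2; } Inst;

/* A pair can dual-issue only with no RAW hazard (b reads a's result)
   and no WAW hazard (both write the same register). */
static bool can_pair(Inst a, Inst b) {
    if (b.rs1 == a.rd || b.rs2 == a.rd) return false;  /* RAW */
    if (b.rd == a.rd) return false;                    /* WAW */
    return true;
}
```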

ties the code to a specific pipeline structure, and becomes effectively moot with OoO CPU designs).
 OoO exists, in a practical sense, to abstract the pipeline out of the compiler; or conversely, to allow multiple implementations to run the
same compiled code optimally on each implementation.
 
Granted, but OoO isn't cheap.

So, a case could be made that a "general use" ISA be designed without the use of explicit bundling. In my case, using the bundle flags also requires the code to use an instruction to signal to the CPU what configuration of pipeline it expects to run on, with the CPU able to fall back to scalar (or superscalar) execution if it does not match.
 Sounds like a bridge too far for your 8-wide GBOoO machine.
 
For sake of possible fancier OoO stuff, I upheld a basic requirement for the instruction stream:
The semantics of the instructions as executed in bundled order needs to be equivalent to that of the instructions as executed in sequential order.
In this case, the OoO CPU can entirely ignore the bundle hints, and treat "WEXMD" as effectively a NOP.
This would have broken down for WEX-5W and WEX-6W (where enforcing a parallel==sequential constraint effectively becomes unworkable, and/or renders the wider pipeline effectively moot), but these designs are likely dead anyways.
And, with 3-wide, the parallel==sequential order constraint remains in effect.

For the most part, thus far nearly everything has ended up as "Mode 2", namely:
   3 lanes;
     Lane 1 does everything;
     Lane 2 does Basic ALU ops, Shift, Convert (CONV), ...
     Lane 3 only does Basic ALU ops and a few CONV ops and similar.
       Lane 3 originally also did Shift, dropped to reduce cost.
     Mem ops may eat Lane 3, ...
 Try 6-lanes:
    1,2,3 Memory ops + integer ADD and Shifts
    4     FADD   ops + integer ADD and FMisc
    5     FMAC   ops + integer ADD
    6     CMP-BR ops + integer ADD
 
As can be noted, my thing is more a "LIW" rather than a "true VLIW".
So, MEM/BRA/CMP/... all end up in Lane 1.
Lanes 2/3 effectively end up used to fold over most of the ALU ops, turning Lane 1 mostly into a wall of Load and Store instructions.

Where, say:
   Mode 0 (Default):
     Only scalar code is allowed, CPU may use superscalar (if available).
   Mode 1:
     2 lanes:
       Lane 1 does everything;
       Lane 2 does ALU, Shift, and CONV.
     Mem ops take up both lanes.
       Effectively scalar for Load/Store.
       Later defined that 128-bit MOV.X is allowed in a Mode 1 core.
Modeless.
 

Had defined wider modes, and ones that allow dual-lane IO and FPU instructions, but these haven't seen use (too expensive to support in hardware).
 
Had ended up with the ambiguous "extension" to the Mode 2 rules of allowing an FPU instruction to be executed from Lane 2 if there was not an FPU instruction in Lane 1, or allowing co-issuing certain FPU instructions if they effectively combine into a corresponding SIMD op.
 
In my current configurations, there is only a single memory access port.
 This should imply that your 3-wide pipeline is running at 90%-95% memory/cache saturation.
 
If you mean that execution is mostly running end-to-end memory operations, yeah, this is basically true.
Comparably, RV code seems to end up running a lot of non-memory ops in Lane 1, whereas BJX2 is mostly running lots of memory ops, with Lane 2 handling most of the ALU ops and similar (and Lane 3, occasionally).

A second memory access port would help with performance, but is comparably a rather expensive feature (and doesn't help enough to justify its fairly steep cost).
 
For lower-end cores, a case could be made for assuming a 1-wide CPU with a 2R1W register file, but designing the whole ISA around this limitation and not allowing for anything more is limiting (and mildly detrimental to performance). If we can assume cores with an FPU, we can probably also assume cores with more than two register read ports available.
 If you design around the notion of a 3R1W register file, FMAC and INSERT
fall out of the encoding easily. Done right, one can switch it into a 4R
or 4W register file for ENTER and EXIT--lessening the overhead of call/ret.
 
Possibly.
It looks like some savings could be possible in terms of prologs and epilogs.
As-is, these are generally like:
   MOV    LR, R18
   MOV    GBR, R19
   ADD    -192, SP
   MOV.X  R18, (SP, 176)  //save GBR and LR
   MOV.X  ...  //save registers
   WEXMD  2  //specify that we want 3-wide execution here
   //Reload GBR, *1
   MOV.Q  (GBR, 0), R18
   MOV    0, R0  //special reloc here
   MOV.Q  (GBR, R0), R18
   MOV    R18, GBR
   //Generate Stack Canary, *2
   MOV    0x5149, R18  //magic number (randomly generated)
   VSKG   R18, R18  //Magic (combines input with SP and magic numbers)
   MOV.Q  R18, (SP, 144)
   ...
   function-specific stuff
   ...
   MOV    0x5149, R18
   MOV.Q  (SP, 144), R19
   VSKC   R18, R19  //Validate canary
   ...
*1: This part ties into the ABI, and mostly exists so that each PE image can get GBR reloaded back to its own ".data"/".bss" sections (with multiple program instances in a single address space). But, does mean that pretty much every non-leaf function ends up needing to go through this ritual.
*2: Pretty much any function that has local arrays or similar, serves to protect register save area. If the magic number can't regenerate a matching canary at the end of the function, then a fault is generated.
The cost of some of this starts to add up.
In isolation, not much, but if all this happens, say, 500 or 1000 times or more in a program, this can add up.
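A model of the canary ritual (*2): the VSKG/VSKC semantics below are an illustrative stand-in, not the actual BJX2 mixing function; the point is just that the check is recomputable from the magic number and SP alone, so nothing secret needs to live in the stack frame besides the canary itself.

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for VSKG: mix the magic number with the stack pointer.
   (The real mixing function is not specified here; this is a sketch.
   Multiply-by-odd-constant and xorshift are both invertible, so
   distinct SP values always yield distinct canaries.) */
static uint64_t vskg(uint64_t magic, uint64_t sp) {
    uint64_t x = magic ^ (sp * 0x9E3779B97F4A7C15ull);
    return x ^ (x >> 29);
}

/* Stand-in for VSKC: fault if the recomputed canary does not match. */
static void vskc(uint64_t magic, uint64_t sp, uint64_t saved) {
    if (vskg(magic, sp) != saved)
        abort();   /* models the fault on canary mismatch */
}
```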

....
