Subject: Re: Concertina II: Full Circle
From: bohannonindustriesllc (at) *nospam* gmail.com (BGB-Alt)
Newsgroups: comp.arch
Date: 18 Jun 2024, 22:39:23
Organisation : A noiseless patient Spider
Message-ID : <v4suqd$1h9f5$1@dont-email.me>
References : 1 2 3 4 5 6
User-Agent : Mozilla Thunderbird
On 6/18/2024 11:52 AM, MitchAlsup1 wrote:
> John Savard wrote:
>> On Mon, 17 Jun 2024 23:17:27 +0000, mitchalsup@aol.com (MitchAlsup1)
>> wrote:
>>> No, it is not a memref--it is a return ! using the register from the
>>> VEC instruction.
>> As should not surprise you, I was referring to the end-of-loop
>> instruction in my current Concertina II, not the one in your MY 66000.
> It may surprise you to know that I knew and know that you are talking
> about Concer-tina-tanic.
> I was merely trying to show you another way to get back to the top of a
> loop--one that takes way fewer bits to encode.
>> I try to avoid stacks, and reserving extra registers, as much as I
>> can.
> My LOOP has no stack.
My random thought for a loop instruction:
  LOOP  Rn, Disp
Which behaves like, say:
  if ((Rn--) >= 0)
    goto Label;  // Label = branch target given by Disp
Granted, it wouldn't work for most typical "for()" loops (which would need a count-up value).
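Taking the pseudocode above literally, a quick software model (hypothetical, just a check of the stated semantics) shows that a starting counter of N runs the body N+2 times, so the counter would be initialized to the trip count minus 2:

```c
/* Hypothetical software model of the bottom-of-loop test:
 *   top:  (body)
 *         if ((Rn--) >= 0) goto top;
 * Returns how many times the body executes for a given starting counter. */
static int loop_trips(int rn)
{
    int trips = 0;
    do {
        trips++;            /* loop body would go here */
    } while (rn-- >= 0);    /* models: if ((Rn--) >= 0) goto top */
    return trips;
}
```

Note that even a negative starting counter still runs the body once, since the test sits at the bottom.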
Otherwise, had recently been experimenting with a type of neural net that seems promising:
Inputs vectors have 1 or 2 bit elements;
Weights are 3 bits;
The accumulator and bias are 8 bits.
Implemented as a 3R1W (three read, one write) instruction, where the instruction shifts the destination left by 1 or 2 bits and adds the output into the LSBs. The instruction effectively handles 16 inputs at a time (as 16 or 32 bits), with a 64-bit weights-and-bias vector.
Or, say (for 1-bit outputs):
Rn=(Rn<<1) | ((w0*v0+w1*v1+...+w15*v15+bias)>=0);
The 2*3 bit multipliers are relatively cheap to implement in an FPGA.
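As a sketch of what the operation computes, here is a hypothetical plain-C model of the 2-bit-input case. The register packing (16x 2-bit inputs in a 32-bit value, 16x signed 3-bit weights in the low 48 bits, signed 8-bit bias in bits 48..55) is an assumption, and the 8-bit accumulator's overflow behavior is not modeled:

```c
#include <stdint.h>

/* Hypothetical model of the proposed 3R1W op (2-bit-input, 1-bit-output
 * case). Packing is assumed: 16x 2-bit inputs in 'vin', 16x signed
 * 3-bit weights in wb[47:0], signed 8-bit bias in wb[55:48]. */
static uint64_t nn_step(uint64_t rd, uint32_t vin, uint64_t wb)
{
    int acc = (int8_t)((wb >> 48) & 0xFF);  /* signed 8-bit bias */
    for (int i = 0; i < 16; i++) {
        int v = (vin >> (2 * i)) & 3;       /* 2-bit input element */
        int w = (int)((wb >> (3 * i)) & 7); /* 3-bit weight */
        if (w & 4)
            w -= 8;                         /* sign-extend to -4..3 */
        acc += w * v;
    }
    return (rd << 1) | (acc >= 0);          /* shift in the 1-bit output */
}
```

This directly mirrors the Rn=(Rn<<1)|((w0*v0+...+bias)>=0) form above.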
In some small-scale glyph-classifier tests (as a mock-up of the logic of the instruction), it worked fairly well, easily achieving 100% accuracy within the scope of the test, and it needs significantly fewer instructions to evaluate than a neural net implemented via Binary16 SIMD.
Next step might be to try to scale it up to deal with the "handwritten digit" tests.
It also plays nicer with normal binary values; the main awkward step is mostly shuffling the bits around on the input side, in this case mostly pulled off by getting creative with the use of Morton shuffles.
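For reference, one way such a Morton shuffle can be done in plain software is the classic shift-and-mask bit interleave (a standard sketch, not necessarily the exact shuffle used here), e.g. packing two 1-bit feature planes into 2-bit input elements:

```c
#include <stdint.h>

/* Classic Morton spread: move the 16 bits of x to the even bit positions. */
static uint32_t morton_spread16(uint32_t x)
{
    x &= 0xFFFF;
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

/* Interleave two 16-bit values: result bit 2i = a bit i, bit 2i+1 = b bit i.
 * E.g., packs two 1-bit feature planes into 16x 2-bit input elements. */
static uint32_t morton_interleave(uint16_t a, uint16_t b)
{
    return morton_spread16(a) | (morton_spread16(b) << 1);
}
```

(On x86 with BMI2, PDEP can do the same spread in one instruction.)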
It also allows for a comparably fast plain-software implementation.
Currently, the training algo is based around a genetic algorithm (can't backprop this thing).
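A minimal mutate-and-select loop in that spirit might look like the following (purely illustrative: the toy fitness here just matches a fixed bit pattern, whereas the real fitness would be classifier accuracy over the training set):

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy fitness: count weight/bias bits matching a fixed target pattern
 * (48 weight bits + 8 bias bits = 56 bits). Purely illustrative. */
#define TARGET 0x00A5A5A5A5A5A5ULL

static int score(uint64_t wb)
{
    uint64_t same = ~(wb ^ TARGET);
    int n = 0;
    for (int i = 0; i < 56; i++)
        n += (same >> i) & 1;
    return n;
}

/* Mutate-and-select: flip one random bit of the packed weight/bias word,
 * keep the candidate if it scores no worse. The score never decreases. */
static uint64_t evolve(uint64_t wb, int generations)
{
    int best = score(wb);
    for (int g = 0; g < generations; g++) {
        uint64_t cand = wb ^ (1ULL << (rand() % 56));
        int s = score(cand);
        if (s >= best) { best = s; wb = cand; }
    }
    return wb;
}
```

A real run would use a population rather than a single candidate, but the accept-if-no-worse step is the core idea when no gradient exists.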
...
John Savard