Subject: Re: Differentiable Forth
From: minforth (at) *nospam* gmx.net (minforth)
Groups: comp.lang.forth
Date: 22 Jul 2024, 09:00:53
Organization: novaBBS
Message-ID: <01d971a7fa48b2a6bfc0202d9dd96c2b@www.novabbs.com>
User-Agent: Rocksolid Light
On Sat, 20 Jul 2024 20:55:47 +0000, mhx wrote:
> On Sat, 20 Jul 2024 18:12:31 +0000, Paul Rubin wrote:
>> mhx@iae.nl (mhx) writes:
>>> Really? A three-stack Forth with CSP hardware on each GPU core would
>>> be quite a good fit.
>>
>> That's sort of what the GA144 is...
>
> The transputers were a better fit. I predict their time will come.
I predict that the time of systolic arrays will come. The nodes will not
be transputers, but they will share similar characteristics.
Today's AI is based on neural networks, but it is described
mathematically in terms of linear algebra. Matrix formulations are also
used for deep learning and back-propagation. So there is a split between
the network topology and the matrix topology (e.g. to keep Bayesian
correlations between parameters). This split has a negative impact on
memory footprint and power consumption.
By far most correlations occur between neighbouring parameters (which is
natural in artificial as well as biological networks). This is why most
sparse AI matrices show significant correlations only near their
diagonals. In other words, the bulk of such a matrix is ballast: it does
not contribute to the solution.
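
To put a number on that: a band matrix with half-bandwidth K stores only
N*(2K+1) values instead of N*N, and a matrix-vector product touches only
the band. A minimal sketch of the idea (in Python rather than Forth,
purely for brevity; N, K and the fill pattern are made up for
illustration):

N = 8   # matrix dimension (illustrative)
K = 1   # half-bandwidth: K diagonals on each side of the main one

# band[i][d] holds A[i][i + d - K]; entries falling outside the
# matrix stay 0.0. Fill with a correlation that decays away from
# the diagonal, mimicking the sparsity pattern described above.
band = [[0.0] * (2 * K + 1) for _ in range(N)]
for i in range(N):
    for d in range(2 * K + 1):
        j = i + d - K
        if 0 <= j < N:
            band[i][d] = 1.0 / (1 + abs(i - j))

def band_matvec(band, x):
    # y = A @ x, touching only the stored band
    y = [0.0] * N
    for i in range(N):
        for d in range(2 * K + 1):
            j = i + d - K
            if 0 <= j < N:
                y[i] += band[i][d] * x[j]
    return y

print(band_matvec(band, [1.0] * N))
print("stored", N * (2 * K + 1), "values instead of", N * N)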
Systolic arrays describe such networks much better. In addition, they
provide an efficient way to store data directly in the nodes,
eliminating the need to shuffle data back and forth to external storage.
I guess the math is just not there yet. Time will tell.
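
To make "storing data directly in the nodes" concrete, here is a minimal
sketch (again Python, again just an illustration) of a one-dimensional
weight-stationary systolic array: each processing element (PE)
permanently holds one weight, and inputs plus partial sums are clocked
through, so the weights never travel to or from external memory.

def systolic_dots(weights, vectors):
    # PE i permanently holds weights[i]. Element i of vector v is fed
    # to PE i at clock tick v + i, meeting the partial sum for vector
    # v as it marches down the array.
    n, m = len(weights), len(vectors)
    reg = [0.0] * (n + 1)          # pipeline registers between PEs
    results = []
    for t in range(m + n - 1):     # clock ticks until the pipe drains
        new_reg = [0.0] * (n + 1)
        for i in range(n):
            v = t - i              # which vector this PE sees now
            if 0 <= v < m:
                new_reg[i + 1] = reg[i] + weights[i] * vectors[v][i]
        if 0 <= t - (n - 1) < m:   # a finished sum exits the last PE
            results.append(new_reg[n])
        reg = new_reg
    return results

print(systolic_dots([0.5, -1.0, 2.0],
                    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))   # [4.5, 9.0]

Once the pipeline is full, one result emerges per clock tick, and the
only memory traffic is the input stream itself.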