Subject: Re: Windows-on-ARM Laptop Is A “Frequently-Returned Item” On Amazon
From: bowman (at) *nospam* montana.com (rbowman)
Groups: alt.comp.os.windows-11 comp.os.linux.advocacy
Date: 26 Mar 2025, 04:46:00
Message-ID: <m4hbjnF5fa3U3@mid.individual.net>
User-Agent: Pan/0.160 (Toresk; )
On Tue, 25 Mar 2025 08:26:15 -0000 (UTC), Chris wrote:
> Yes, CUDA is the dominant interface, but not the only game in town.
> There are other NPUs that can give nVidia a run for its money.
I've seen talk of opening up the CUDA API but I expect to be snowshoeing
in hell first. OpenCL isn't ready for prime time yet.
> Sure, but there's a whole spectrum of needs for deep learning methods
> that are far more modest and still very useful.
That's where my interests lie: edge ML applications, not the whole hyped-up
LLM deal.
> Machine learning has been around since the 1960s and has had real world
> uses for a lot of that time.
That's a rather fluid term, and if you count Hebb, it goes back to the '40s.
I found the concepts interesting in the '60s in the context of
neurophysiology and revisited them in the '80s when Rumelhart and
McClelland's book came out and back propagation was introduced. The concepts
were there but the computing power wasn't.
Neural networks were over-promised and became a career killer, and expert
systems became the stars. That didn't work out as planned either, so the
field moved on to fuzzy logic and so forth. Then neural networks were
reborn, but people didn't want to call them that.
> I created my first model in 2006/7 with no need for a GPU.
So did I, with very small datasets like MNIST. No need for a GPU, unless you
wanted to time the epochs with something other than a wall clock.
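
For anyone curious, here's a rough sketch of that kind of CPU-only training,
using scikit-learn's MLPClassifier on the OpenML copy of MNIST. The library
choice, network size, and iteration count are my own illustration, not a
record of what either of us actually ran back then:

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Pull MNIST (70,000 28x28 digit images) from OpenML and scale to [0, 1].
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0)

# One hidden layer of 64 units, plain backprop, CPU only.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=20, verbose=True)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

A few minutes of wall-clock time on an ordinary laptop CPU and it lands well
up in the 90s on accuracy, which is the point: for toy datasets the GPU is
optional.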