On 12/20/24 12:19 PM, rbowman wrote:

> On Fri, 20 Dec 2024 01:31:30 -0500, 186282@ud0s4.net wrote:
>> It's the 'hand-wave' thing that sunk the first AI paradigm.
>> Marv Minsky (who posted on usenet for awhile) and friends saw how
>> easily 'decisions' could be done with a transistor or two and assumed
>> it would thus be easy to build an AI. AC Clarke used the Minsky
>> optimism when fashioning the idea of "HAL".
>
> I've also thought about this approach. We have OCR, we have voice
> recognition, we have chess, we have diagnosing cancers, we have
> parrots (LLMs), etc.
>
> Minsky threw a wrench in the works with his 1969 'Perceptrons'. He had
> tried to implement B. F. Skinner's operant conditioning with an analog
> lashup that sort of worked if the vacuum tubes didn't burn out.
> Rosenblatt had built a 'Perceptron' and Minsky pointed out that the
> original design couldn't handle an XOR. That sent research down
> another rabbit hole.
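Minsky's XOR objection is easy to reproduce: a single threshold unit can only draw one straight line through the input plane, so it learns AND but can never learn XOR. A minimal sketch of Rosenblatt's learning rule (the function name, learning rate, and epoch count are my own illustrative choices, not from the thread):

```python
# Rosenblatt-style perceptron: one linear threshold unit, updated only
# on mistakes. Converges on linearly separable data (AND) but cycles
# forever on XOR, which no single line can separate.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Train a single threshold unit; return (weights, bias, last-epoch error count)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            delta = target - out
            if delta:
                errors += 1
                w[0] += lr * delta * x1
                w[1] += lr * delta * x2
                b += lr * delta
        if errors == 0:          # converged: a separating line exists
            break
    return w, b, errors

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

_, _, and_errors = train_perceptron(AND)
_, _, xor_errors = train_perceptron(XOR)
print("AND residual errors:", and_errors)   # 0 -- linearly separable
print("XOR residual errors:", xor_errors)   # stays > 0; no separating line exists
```

The perceptron convergence theorem guarantees the AND run halts; for XOR a zero-error epoch is impossible, which is exactly the gap Minsky and Papert documented.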
> By the '80s the original perceptron had evolved into multilayer
> networks trained by back-propagation. When I played around with it,
> 'Parallel Distributed Processing' by Rumelhart and McClelland was THE
> book.
>
> https://direct.mit.edu/books/monograph/4424/Parallel-Distributed-Processing-Volume
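The PDP-era recipe fits in a few lines today. This toy network (layer sizes, seed, learning rate, and epoch count are illustrative assumptions, not taken from the book) uses back-propagation to learn the XOR that stumped the single-layer perceptron:

```python
# A 2-4-1 sigmoid network trained by full-batch gradient descent with
# hand-written back-propagation -- the multilayer fix for the XOR problem.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid units, one sigmoid output unit.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)              # hidden activations
    out = sigmoid(h @ W2 + b2)            # network output
    err = out - y                         # dLoss/dout for 0.5 * squared error

    # Backward pass (chain rule, layer by layer)
    d_out = err * out * (1 - out)         # through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)    # through the hidden sigmoids

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # trained outputs for the four XOR inputs
```

The same forward/backward structure, scaled up by many orders of magnitude, is what TensorFlow and friends run on those GPUs.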
> The ideas were fascinating but the computing power wasn't there. Most
> of what I learned then is still relevant to TensorFlow and the other
> neural network approaches, except now there are $30,000 Nvidia GPUs to
> do the heavy lifting.
>
> The '80s neural networks weren't practical, so the focus shifted to
> expert systems until they petered out. The boom and bust cycles led to
> the term 'AI Winter'.
>
> https://www.techtarget.com/searchenterpriseai/definition/AI-winter
>
> I think something worthwhile will come from this cycle, but ultimately
> it won't be the LLMs that are getting all the hype.
With Minsky and friends it was just naive enthusiasm ... it was
SO EASY to do logic, and thus it seemed SO EASY to wire bits of
it together and get an 'intelligence'.

The same gen also promised us those flying cars and luxury Mars
living by 1999 .......
IMHO, if we're gonna get anything largely indistinguishable from
'sentience' these days it'll be the next few gens of LLMs. You
can argue it'd be "fake" - but if you fake something WELL ENOUGH
it's not fake anymore. LLMs and near derivs are where the HUGE
money is these days.
I did exchange a few posts with Minsky as his vision was falling
apart. He did admit that he'd totally underestimated the problem.
A few transistors did NOT replace 600 million years of
evolutionary experiments - 'intelligence'/'self' was really
deep/complex, with endless fuzzy processing and pattern-matching
steps between 'I' and 'O'.
However I still keep a copy of his "Society Of Mind" as a
reminder of yesterday's optimism. He THOUGHT about it, TRIED ...
and thus the eventual failure was not really a failure - it just
inspired new directions. There had to be a foundation to build on.
There was a short-lived UK series about androids that eventually
came to self-awareness (and the hate/fear directed towards them).
The idea there was that 'self' was a sort of fractal,
self-reflective kind of paradigm. I suspect they had something
there. Chat/LLMs maybe can't achieve that on their own, but who
says you can't splice on a few more methods? Organic brains seem
to have LOTS of layers, lots of 'little people' inside that merge
into 'Me'.