Subject : Re: The LLMentalist Effect (AI & psychic's con)
From : anton.txt (at) *nospam* g{oogle}mail.com (Anton Shepelev)
Groups : comp.misc
Date : 12. Apr 2024, 11:49:52
Organisation : A noiseless patient Spider
Message-ID : <20240412134952.391fb054793f0d1946a29ce6@g{oogle}mail.com>
References : 1
User-Agent : Sylpheed 3.7.0 (GTK+ 2.24.30; i686-pc-mingw32)
Ben Collver quoted:
<https://softwarecrisis.dev/letters/llmentalist/>
> [...]
> LLMs are not brains and do not meaningfully share any of
> the mechanisms that animals or people use to reason or
> think.
>
> LLMs are a mathematical model of language tokens. You give
> a LLM text, and it will give you a mathematically
> plausible response to that text.
>
> There is no reason to believe that it thinks or
> reasons--indeed, every AI researcher and vendor to date
> has repeatedly emphasised that these models don't think.
What say ye to:
1. LLMs can play chess, that is, they understand the rules
   of the game, because the whole training set is not
   nearly sufficient if it were used merely statistically
   (a toy sketch follows this list):
   <https://parrotchess.com/>
2. Emergent World Models and Latent Variable Estimation in
   Chess-Playing Language Models (a probe sketch likewise
   follows):
   <https://arxiv.org/abs/2403.15498>
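On point 1, a toy sketch of the parrotchess-style setup as
I understand it (my reconstruction, not their code): feed a
completion model a PGN game prefix and read off the move it
predicts next. The model name and prompt are assumptions on
my part.

  # Python; needs the `openai` package and an API key.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # A game prefix in PGN movetext; the model continues it.
  pgn_prefix = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5."

  resp = client.completions.create(
      model="gpt-3.5-turbo-instruct",  # assumed completion-style model
      prompt=pgn_prefix,
      max_tokens=6,
      temperature=0,  # deterministic: take the likeliest continuation
  )

  # The first whitespace-delimited token of the completion is the move.
  print(resp.choices[0].text.strip().split()[0])  # e.g. "O-O"

The point of the demo is that such completions stay legal
deep into games that cannot all be present in the training
data.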
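On point 2, the paper's core method, as I read it, is a
linear probe: a simple classifier trained on the network's
hidden activations to recover the board state. Below is a
toy reconstruction with stand-in random data (the real
inputs would be activations captured while the model reads
PGN text); all sizes and names are hypothetical.

  # Python; needs numpy and scikit-learn.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  n_positions, d_model = 5000, 512  # hypothetical sizes
  acts = np.random.randn(n_positions, d_model)  # stand-in activations
  # Stand-in labels: which piece (0 = empty .. 12) occupies one
  # fixed square, say e4, in each position.
  piece_on_e4 = np.random.randint(0, 13, size=n_positions)

  # One linear probe per square; a single square shown here.
  x_tr, x_te, y_tr, y_te = train_test_split(
      acts, piece_on_e4, test_size=0.2, random_state=0)
  probe = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
  print("held-out probe accuracy:", probe.score(x_te, y_te))

On these random stand-ins the held-out accuracy sits at
chance (about 1/13); the paper's finding is that on real
activations it does not, i.e. the board state is linearly
decodable from the model's internals.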
I fear they cannot be explained by the Forer effect.
--
()  ascii ribbon campaign -- against html e-mail
/\  www.asciiribbon.org   -- against proprietary attachments