Subject : Re: A conversation with ChatGPT's brain.
From : dnomhcir (at) *nospam* gmx.com (Richmond)
Newsgroups : comp.ai.philosophy
Date : 25 Apr 2025, 21:01:15
Other headers
Organization : Frantic
Message-ID : <86ldroypro.fsf@example.com>
References : 1 2 3 4 5 6
User-Agent : Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Doc O'Leary, <droleary.usenet@2023.impossiblystupid.com> writes:
> Again, *all* the output is hallucinations, whether you realize/notice
> it or not. There is no mechanism for “thought” that allows it to
> distinguish truth from fiction.
Ah, so you have redefined hallucination to mean all output from an
LLM. It's rather meaningless to use the word then.