Subject : Re: A conversation with ChatGPT's brain.
From : droleary.usenet (at) *nospam* 2023.impossiblystupid.com (Doc O'Leary ,)
Groups : comp.ai.philosophy
Date : 28. Apr 2025, 17:55:56
Organisation : Subsume Technologies, Inc.
Message-ID : <vuobus$3o914$1@dont-email.me>
References : 1 2 3 4 5
User-Agent : com.subsume.NNTP/1.0.0
For your reference, records indicate that Richmond <dnomhcir@gmx.com> wrote:
> Doc O'Leary , <droleary.usenet@2023.impossiblystupid.com> writes:
>> Again, *all* the output is hallucinations, whether you realize/notice it
>> or not. There is no mechanism for “thought” that allows it to distinguish
>> truth from fiction.
> Ah, so you have redefined hallucination to mean all output from an
> LLM. It's rather meaningless to use the word then.
Ha! Blame the AI hype machine for making hallucination a “meaningless”
word. Call it whatever you like, but the fact remains that these programs
give *incorrect answers* as part of their regular operation. It’s not a
“bug” that occurs in certain conditions; it really *is* “all output” that
can be right or wrong, given with equal confidence.
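
To make that concrete, here is a minimal sketch of the decoding loop these
programs run (Python, with an invented toy vocabulary and invented logit
numbers; this is not any real model's code, just the shape of the mechanism):

    import math
    import random

    # Toy "model": a real LLM computes logits from learned weights, but
    # the sampling loop below is the same idea. Note that nothing in the
    # numbers encodes whether a continuation is *true*, only how likely
    # the training data made it look.
    VOCAB = ["Paris", "Lyon", "Berlin", "."]

    def toy_logits(context):
        if context.endswith("The capital of France is"):
            # Wrong answers still get probability mass.
            return [4.0, 2.5, 1.0, -2.0]
        return [0.0, 0.0, 0.0, 3.0]

    def sample_next(context):
        logits = toy_logits(context)
        # Softmax: convert logits to a probability distribution.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Sample a token. No step checks factuality: "Paris" and
        # "Berlin" differ only in probability, not in kind.
        return random.choices(VOCAB, weights=probs, k=1)[0]

    context = "The capital of France is"
    print(context, sample_next(context))

Run that enough times and it will occasionally print “Berlin” or “Lyon”
through exactly the same mechanism, with exactly the same apparent
confidence, as “Paris”. That is the sense in which every answer, right or
wrong, comes from one and the same process.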
Don’t fool yourself into thinking chatbots are thinking. If it isn’t
obvious that the people you talk to are thinking more than machines, start
hanging around smarter people. They may challenge you to do more
thinking, too. Win-win in my book.
-- 
“Also . . . I can kill you with my brain.” - River Tam, Trash, Firefly