Re: A conversation with ChatGPT's brain.

Subject: Re: A conversation with ChatGPT's brain.
From: dnomhcir (at) *nospam* gmx.com (Richmond)
Newsgroups: comp.ai.philosophy
Date: 28 Apr 2025, 19:11:16
Organization: Frantic
Message-ID: <86sels40mz.fsf@example.com>
References: 1 2 3 4 5 6 7 8
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Doc O'Leary , <droleary.usenet@2023.impossiblystupid.com> writes:

> Ha!  Blame the AI hype machine for making hallucination a
> “meaningless” word.  Call it whatever you like, but the fact remains
> that these programs give *incorrect answers* as part of their regular
> operation.  It’s not a “bug” that occurs in certain conditions; it
> really *is* “all output” that can be right or wrong, given with equal
> confidence.

They use the term 'hallucination' for a particular circumstance, not
just any case where it gives a wrong answer. And anyway, human beings
give incorrect answers as part of their normal operation too. The part
I disagree with is 'equal confidence'. Searching the internet can give
you wrong answers as well, and takes much longer to do so, especially
if you end up on Quora.

> Don’t fool yourself into thinking chatbots are thinking.  If it isn’t
> obvious that the people you talk to are thinking more than machines,
> start hanging around smarter people.  They may challenge you to do
> more thinking, too.  Win-win in my book.

I am not fooling myself into thinking it is thinking. And anyway, it
says it is not thinking. It describes how it operates: it looks up in
its database how an LLM works, and spews it out. It has no understanding of
what it is saying. It is spewing out something it read somewhere. But
what's the difference? Do you know where your thoughts come from? Do you
ever have intuition and wonder how you knew?

I've watched this video by Andrej Karpathy:

https://www.youtube.com/watch?v=7xTGNNLPyMI
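
What that video boils down to, as far as I can tell, is that the model
just keeps picking a next token from a probability distribution it has
learned. Something like this toy sketch (entirely made up: a
hypothetical vocabulary and probabilities, nothing like ChatGPT's
actual code):

import random

# A made-up "model": for each word, probabilities for the next word.
toy_model = {
    "the":  {"moon": 0.6, "cat": 0.4},
    "moon": {"is": 1.0},
    "cat":  {"is": 1.0},
    "is":   {"made": 1.0},
    "made": {"of": 1.0},
    "of":   {"cheese": 0.5, "rock": 0.5},  # true or false, same machinery
}

def generate(word, steps=6):
    output = [word]
    for _ in range(steps):
        options = toy_model.get(word)
        if not options:
            break
        words, probs = zip(*options.items())
        word = random.choices(words, weights=probs)[0]  # sample next token
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the moon is made of cheese"

Run it a few times and you get different continuations, each produced
by exactly the same machinery whether or not the sentence happens to
be true.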

But the end result is still amazing. I've used it to solve DIY problems
and to write bits of code.

Try asking ChatGPT: "How do I tell the difference between consciousness
and simulated consciousness?", then ask a human being, who will probably
say "Huh?"
