Re: A conversation with ChatGPT's brain.

Subject: Re: A conversation with ChatGPT's brain.
From: droleary.usenet (at) *nospam* 2023.impossiblystupid.com (Doc O'Leary ,)
Newsgroups: comp.ai.philosophy
Date: 30 Apr 2025, 23:11:27
Organization: Subsume Technologies, Inc.
Message-ID: <vuu76e$16s40$1@dont-email.me>
References: 1 2 3 4 5 6
User-Agent: com.subsume.NNTP/1.0.0
For your reference, records indicate that
Richmond <dnomhcir@gmx.com> wrote:

> They use the term 'hallucination' for a particular circumstance.

Then you’re going to have to share what that “particular circumstance” is,
because I’m not seeing it.  You input text, it outputs text.  That’s it.
As part of generating the response, it will just make things up (toxic
pizza toppings, fake legal cases, non-existent software libraries, etc.),
leaving you to sort out the mess.

> And anyway, human beings give incorrect answers as part of their normal
> operation too.

So what?  Just because humans can be wrong doesn’t mean LLMs get a pass for
the mistakes they make.  More importantly, the *types* of errors made are
very different.  It was something that was obviously a problem as far back
as when Watson was on Jeopardy.

> The part that I disagree with is 'equal confidence'.

And yet you offer up no evidence to the contrary.  You’re welcome to point
me to your favorite chatbot and it’ll probably take me all of 5 minutes to
get it to try to pass off an *obvious* lie as the truth.

> Searching the internet can give you wrong answers, and
> takes much longer to do it, especially if you end up on Quora.

That’s incoherent.  Are you just using a chatbot to try to refute my
points?  Regular searching *makes no claims of intelligence*, but what it
*does* do is accurately give you what it finds, possibly including
nothing.  It’s plenty fast, too.  Again, stop trying to push this into a
tangent about search; it’s about chatbots still not actually being good AI.

> I am not fooling myself into thinking it is thinking.

You’re the one who started this thread by claiming that a chatbot “brain”
was outperforming humans.  You still don’t seem willing to acknowledge the
*massive* shortcomings such tools have.

> It is spewing out something it read somewhere. But
> what's the difference? Do you know where your thoughts come from? Do you
> ever have intuition and wonder how you knew?

The question isn’t how I know what I know.  It’s what real value there is
in a chatbot that *cannot* know what it knows.  Just spewing out shit is
not a welcome interaction in my book, done by man *or* machine.

> Try asking ChatGPT: "How do I tell the difference between consciousness
> and simulated consciousness?", then ask a human being, who will probably
> say "Huh?"

Again, find better humans to engage with if that’s your experience.

--
"Also . . . I can kill you with my brain."
River Tam, Trash, Firefly



Date       Subject                                      #   Author
21 Apr 25  * A conversation with ChatGPT's brain.       11  Richmond
21 Apr 25  `* Re: A conversation with ChatGPT's brain.  10  Doc O'Leary ,
21 Apr 25   `* Re: A conversation with ChatGPT's brain.  9  Richmond
23 Apr 25    `* Re: A conversation with ChatGPT's brain.  8  Doc O'Leary ,
23 Apr 25     `* Re: A conversation with ChatGPT's brain.  7  Richmond
25 Apr 25      `* Re: A conversation with ChatGPT's brain.  6  Doc O'Leary ,
25 Apr 25       `* Re: A conversation with ChatGPT's brain.  5  Richmond
28 Apr 25        `* Re: A conversation with ChatGPT's brain.  4  Doc O'Leary ,
28 Apr 25         `* Re: A conversation with ChatGPT's brain.  3  Richmond
30 Apr 25          `* Re: A conversation with ChatGPT's brain.  2  Doc O'Leary ,
1 May 25            `- Re: A conversation with ChatGPT's brain.  1  Richmond
