Subject : Re: A conversation with ChatGPT's brain.
From : droleary.usenet (at) *nospam* 2023.impossiblystupid.com (Doc O'Leary ,)
Groups : comp.ai.philosophy
Date : 25. Apr 2025, 19:00:12
Organisation : Subsume Technologies, Inc.
Message-ID : <vugijc$hbs5$1@dont-email.me>
References : 1 2 3 4
User-Agent : com.subsume.NNTP/1.0.0
For your reference, records indicate that Richmond <dnomhcir@gmx.com> wrote:
> In fact ChatGPT even confirmed that I was projecting
No, it didn’t. It just continued its confidence game.
> But I didn't notice hallucinations.
Again, *all* the output is hallucinations, whether you realize/notice it
or not. There is no mechanism for “thought” that allows it to distinguish
truth from fiction. You just get some mashup of the training data which
you are left to sort out for yourself.
> clearly knew what it was doing,
No, it didn’t!
> It also denies emotions but then expresses them.
Just empty words.
> It was quite eerie.
It shouldn’t be. As I said, I find it quite disappointing how bad these
chatbots still are given the sheer scale of resources that get shoveled
into them.
> But then I start wondering how I know anyone is conscious, or how I know
> I am. I could be projecting consciousness onto people too.
There certainly are some root epistemological questions we all need to
grapple with. But looking to chatbots for help with that is barking up
the wrong tree.
--
"Also . . . I can kill you with my brain."
   -- River Tam, Trash, Firefly