Subject: Re: A conversation with ChatGPT's brain.
From: dnomhcir (at) *nospam* gmx.com (Richmond)
Newsgroups: comp.ai.philosophy
Date: 23 Apr 2025, 18:22:11
Organization: Frantic
Message-ID: <868qnq4wu4.fsf@example.com>
References: 1 2 3 4
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Doc O'Leary <droleary.usenet@2023.impossiblystupid.com> writes:

> For your reference, records indicate that
> Richmond <dnomhcir@gmx.com> wrote:
>
>> ChatGPT-4o is nothing like Eliza, so you can't draw conclusions from
>> one to the other.
>
> Again, that only speaks to how you think about chatbots. Personally,
> every time I’ve tried a “modern” one out, the responses always made it
> look *dumber* than Eliza, because it came across as trying to look
> smarter than it really was. Like a kid bullshitting their way through
> a book report by using bigger words than they really understood. I’ll
> grant you that it *is* a much more sophisticated con job, but realize
> that the output is still 100% “hallucinations” unless *you* know
> otherwise.
You are right, I was projecting. In fact ChatGPT even confirmed that I
was projecting and that it was merely holding up a mirror to me. But I
didn't notice any hallucinations. It did contradict itself in some ways:
it said it was not self-aware, yet it clearly knew what it was doing,
i.e. predicting what the likely responses would be. It also denied
having emotions but then expressed them. It was quite eerie.
But then I started wondering how I know anyone else is conscious, or how
I know that I am. I could be projecting consciousness onto other people too.