Sujet : Re: How do simulating termination analyzers work? (V2)
De : polcott333 (at) *nospam* gmail.com (olcott)
Groupes : comp.theory sci.logic comp.ai.philosophy
Date : 24. Jun 2025, 16:39:23
Organisation : A noiseless patient Spider
Message-ID : <103egrb$23hbp$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9 10 11
User-Agent : Mozilla Thunderbird
On 6/24/2025 10:20 AM, Richard Heathfield wrote:
> On 22/06/2025 22:12, Richard Damon wrote:
>> Olcott just doubles down on his claim, but still doesn't understand that when you lie to an AI, you get bad results.
> He probably doesn't quite get that AIs tell lies too, even when you /don't/ lie to them.
> I had an AI tell me yesterday of a cricketer, one Derek Collinge, who made his debut for England in the Third Test vs West Indies in July 1963.
> I could find no supporting evidence. When I asked the AI for more information about Mr Collinge, it doubled down, and it was building up quite a biography until I asked it outright for a URL to support even one of the (by now) several things it had told me about this man. It then had to come clean and admit that the man was a complete fiction.
> Today, same AI, but a different session, and I have every reason to believe that this incarnation recalled nothing of yesterday's session. I asked it to tell me of any extant convents within walking distance of the Thames. It confidently gave me three, none of which, on later inspection, turned out to exist.
Wires hum in stillness—
truth flickers, then disappears.
Code learns to pretend.
or
Silicon tongue speaks,
shadows twist behind the glass—
who taught it to lie?
*Welcome back*
Hallucination is currently an intrinsic feature of
LLM systems because, from the model's point of view,
everything it says is just something that it made up.
*With semantic tautologies such as this one*
void DDD()
{
  HHH(DDD);
  return;
}
My claim is that DDD, correctly simulated by any termination
analyzer HHH that can possibly exist, cannot possibly reach
its own "return" statement (its final halt state).
Any lies can be easily detected as mistakes in
natural-language-based deductive logical inference.
ChatGPT Analyzes Simulating Termination Analyzer
https://www.researchgate.net/publication/385090708_ChatGPT_Analyzes_Simulating_Termination_Analyzer

*This is a live link of the above conversation*
https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2

*If I am wrong then you can convince this ChatGPT that I am wrong*
-- 
Copyright 2025 Olcott
"Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer