Subject: Re: How do simulating termination analyzers work? (V2)
From: richard (at) *nospam* damon-family.org (Richard Damon)
Newsgroups: comp.theory sci.logic comp.ai.philosophy
Date: 22 Jun 2025, 22:12:35
Organization: i2pn2 (i2pn.org)
Message-ID: <6bb19b2931aea9154e53ea2696b7b50c7944eee4@i2pn2.org>
References: 1 2 3 4 5 6 7 8 9
User-Agent: Mozilla Thunderbird
Olcott just doubles down on his claim, but still doesn't understand that when you lie to an AI, you get bad results.
On 6/22/25 11:00 AM, olcott wrote:
On 6/22/2025 6:25 AM, Richard Damon wrote:
On 6/21/25 5:23 PM, olcott wrote:
On 6/21/2025 3:48 PM, Mr Flibble wrote:
On Sat, 21 Jun 2025 10:26:00 -0500, olcott wrote:
>
I want to know exactly how you feed this to ChatGPT.
>
What you want and what you get are two different things.
>
/Flibble
>
When I totally explained the notion of a simulating halt
decider HHH(DDD) ChatGPT understood it so well that it
can and will show the mistake of any possible rebuttal
in the live link posted below.
>
https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2
*My explanation had to be airtight and totally self-contained*
>
>
And you LIED when you said:
>
Termination Analyzer HHH simulates its input until
it detects a non-terminating behavior pattern.
>
Because the pattern you detect exists in terminating programs, and thus is NOT a non-terminating behavior pattern.
>
I have much more elaborate ChatGPT traces that explain
how you are wrong here. ChatGPT does a much better job
of explaining that than I have.
https://chatgpt.com/c/6857278b-b748-8011-8e3b-d9707acc5971
The above is my original question about HHH(DDD) from when
ChatGPT had a 4000-token limit. Back then it got very
confused about my DD proof. Now, with a 128,000-token limit,
it immediately understood my HHH(DD) proof.
This new one did not agree that I refuted the HP proof, yet
it agreed that I did refute the most common self-referential
proof technique.
Only because you almost certainly still included your lie about what HHH does.
AIs do not detect a lie in their input when it is given as a premise; they just form wrong conclusions from it.
Of course, part of the problem is that you don't understand what behavior means, or what a program is.
>
Thus, your whole world is based on lies.