On 7/21/2025 3:31 AM, Fred. Zwarts wrote:
As usual, repeated claims without any new evidence, even though many errors in them have been pointed out earlier.
On 20.jul.2025 at 17:13 olcott wrote:
On 7/20/2025 2:47 AM, Fred. Zwarts wrote:
On 19.jul.2025 at 17:50 olcott wrote:
On 7/19/2025 2:50 AM, Fred. Zwarts wrote:
No, the error in your definition has been pointed out to you many times.
When the aborting HHH is simulated correctly, without disturbance, it reaches the final halt state.
I could equally "point out" that all cats are dogs.
Counter-factual statements carry no weight.
Irrelevant.
You cannot prove that cats are dogs, but the simulation by world class simulators proves that exactly the same input specifies a halting program.
*Best selling author of theory of computation textbooks*
This trivial C function is the essence of my proof
(Entire input to the four chat bots)
<input>
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
}
Termination Analyzer HHH simulates its input until
it detects a non-terminating behavior pattern. When
HHH detects such a pattern it aborts its simulation
and returns 0.
</input>
No rebuttal, but repeated counter-factual claims.
All of the chat bots figure out on their own that the input
to HHH(DDD) is correctly rejected as non-halting.
No, we see that the detection of non-termination is the input to the chat bot, not its conclusion.
https://chatgpt.com/c/687aa48e-6144-8011-a2be-c2840f15f285
*Below is quoted from the above link*
>
This creates a recursive simulation chain:
HHH(DDD)
-> simulates DDD()
-> calls HHH(DDD)
-> simulates DDD()
-> calls HHH(DDD)
-> ...
Which is counter-factual, because we know that HHH aborts before this happens.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
If simulating halt decider H correctly simulates its
input D until H correctly determines that its simulated D
would never stop running unless aborted then
H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
Irrelevant empty claim. No H can correctly simulate itself up to the end. Since D calls H and we know that H halts, we know that a correct simulation would show that H returns to D, after which D halts.
So, D halts.
The prerequisites 'correctly simulates' and 'correctly determines' cannot both be true, therefore the conclusion is irrelevant. This means that Sipser agreed only to a vacuous statement.
The correct measure of the behavior of the input to HHH(DDD) is DDD simulated by HHH according to the semantics of the C programming language.
The HHH with bugs is not a correct measure of the behavior specified by its input.
The behavior of the directly executed DDD() is not a correct
measure of the behavior of the input to HHH(DDD) because the
directly executed DDD() is not in the domain of HHH.
Both ChatGPT and Claude.ai demonstrate the equivalent of complete understanding of this on the basis of their correct paraphrase of my reasoning.
It only proves that chat bots generate nonsense when fed with nonsense.
Although LLM systems are famous for hallucinations, we can see that this is not the case with their evaluation of my work because their reasoning is sound.
It is a fact that Turing machine deciders cannot take
directly executed Turing machines as inputs.
It is a fact that the Halting Problem proofs require
a Turing machine decider to report on the behavior
of the direct execution of another Turing machine.
*That right there proves an error in the proof*