Subject : Re: HHH maps its input to the behavior specified by it --- never reaches its halt state
From : richard (at) *nospam* damon-family.org (Richard Damon)
Groups : comp.theory
Date : 09. Aug 2024, 03:52:27
Organisation : i2pn2 (i2pn.org)
Message-ID : <f37108f5c9868fc309f42ef78982e2c865ad544c@i2pn2.org>
References : 1 2 3 4 5 6 7 8 9 10 11 12 13
User-Agent : Mozilla Thunderbird
On 8/8/24 9:15 AM, olcott wrote:
On 8/8/2024 3:24 AM, Fred. Zwarts wrote:
On 07.aug.2024 at 15:01, olcott wrote:
On 8/7/2024 3:16 AM, Fred. Zwarts wrote:
On 04.aug.2024 at 15:11, olcott wrote:
On 8/4/2024 1:26 AM, Fred. Zwarts wrote:
On 03.aug.2024 at 17:20, olcott wrote:
When you try to show how DDD simulated by HHH does
reach its "return" instruction you must necessarily
fail unless you cheat by disagreeing with the
semantics of C. That you fail to have a sufficient
understanding of the semantics of C is less than no
rebuttal what-so-ever.
>
Fortunately that is not what I try, because I understand that HHH cannot possibly simulate itself correctly.
>
>
void DDD()
{
HHH(DDD);
return;
}
>
In other words when HHH simulates itself simulating DDD it
is supposed to do something other than simulating itself
simulating DDD ??? Do you expect it to make a cup of coffee?
>
>
Is English too difficult for you? I said HHH cannot do it correctly.
>
*According to an incorrect criterion of correctness*
You keep trying to get away with disagreeing with
the semantics of the x86 language. *That is not allowed*
>
Again accusations without evidence.
We proved that HHH deviated from the semantics of the x86 language by skipping the last few instructions of a halting program.
void DDD()
{
HHH(DDD);
return;
}
Every HHH that can possibly exist definitely
*emulates zero to infinitely many instructions correctly* In
none of these cases does the emulated DDD ever reach
its "return" instruction halt state.
*There are no double-talk weasel words around this*
There is no need to show any execution trace at the x86 level;
every expert in the C language sees that the emulated DDD
cannot possibly reach its "return" instruction halt state.
Every rebuttal that anyone can possibly make is necessarily
erroneous because the first paragraph is a tautology.
Nope, it is a lie based on confusing the behavior of the partial emulation with the behavior of DDD, which is what "Halting" is about.
Remember, the definition of "Halting" is THE PROGRAM reaching a final state. I will repeat that: it is THE PROGRAM reaching a final state.
A program is the COMPLETE collection of ALL the instructions possibly used in its execution, and thus the PROGRAM DDD includes the instructions of HHH as part of it. So when you pair different HHHs with the C function DDD to get programs, each pairing is a DIFFERENT input.
Also, to be a valid input to decide on, it must contain all the information needed; thus your version with just the bytes of the C function is NOT a valid input to decide on, and any claim based on it would just be a lie.
Now, when we look at your claim about DDD correctly emulated for only a finite number of steps, and remember that Halting is based on the behavior of the FULL program, the partial emulation does NOT define the behavior of that DDD. We CAN look at a complete and correct emulation, which would be giving the exact same input to the version of HHH that never aborts. Since the pairing of DDD to an HHH creates a different DDD for each input, this non-aborting HHH does not look at the DDD that calls the non-aborting HHH, but at the DDD that calls the HHH that did abort after the finite number of steps. (And if you can't build that test, you are just proving that your system is not Turing complete, and thus not suitable for use on the halting problem.)
This emulation will, BY DEFINITION, see exactly what the direct execution of DDD does (or else your non-aborting HHH doesn't correctly emulate its input): it will see DDD call HHH; then, by your definition, that HHH does some emulation, and after that finite number of emulated steps it aborts its emulation and returns to DDD, and that DDD reaches its final state and is halting.
Thus, we have just proved that for every HHH that correctly emulates from 0 to an arbitrarily large finite number of steps of DDD and then returns 0, while its own emulation doesn't reach the final return instruction, the COMPLETE CORRECT emulation of the same input does reach it, as does the direct execution of the machine that the input represents.
Thus, your claimed "tautology" is an incorrect statement for all but one case: that of an HHH that correctly emulates an INFINITE number of steps, but that HHH can never answer about the behavior of its input, so it is not a correct halt decider either.