Subject: Re: key error in all the proofs --- Mike's correction of Joes
From: polcott333 (at) *nospam* gmail.com (olcott)
Newsgroups: comp.theory
Date: 15 Aug 2024, 14:18:06
Organization: A noiseless patient Spider
Message-ID: <v9kv6e$v95g$2@dont-email.me>
User-Agent: Mozilla Thunderbird
On 8/15/2024 2:01 AM, joes wrote:
On Wed, 14 Aug 2024 16:08:34 -0500, olcott wrote:
On 8/14/2024 3:56 PM, Mike Terry wrote:
On 14/08/2024 18:45, olcott wrote:
On 8/14/2024 11:31 AM, joes wrote:
On Wed, 14 Aug 2024 08:42:33 -0500, olcott wrote:
On 8/14/2024 2:30 AM, Mikko wrote:
On 2024-08-13 13:30:08 +0000, olcott said:
On 8/13/2024 6:23 AM, Richard Damon wrote:
On 8/12/24 11:45 PM, olcott wrote:
>
*DDD correctly emulated by HHH cannot possibly reach its own "return" instruction final halt state, thus never halts*
>
Which is only correct if HHH actually does a complete and correct emulation; otherwise the behavior of DDD (but not the emulation of DDD by HHH) will reach that return.
>
A complete emulation of a non-terminating input has always been a
contradiction in terms.
HHH correctly predicts that a correct and unlimited emulation of
DDD by HHH cannot possibly reach its own "return" instruction
final halt state.
>
That is not a meaningful prediction because a complete and
unlimited emulation of DDD by HHH never happens.
>
A complete emulation is not required to correctly predict that a
complete emulation would never halt.
What do we care about a complete simulation? HHH isn't doing one.
>
Please go read how Mike corrected you.
>
Lol, dude... I mentioned nothing about complete/incomplete
simulations.
*You corrected Joes's most persistent error*
She made sure to ignore this correction.
Would you please point it out again?
I did in the other post.
But while we're here - a complete simulation of input D() would clearly
halt.
A complete simulation *by HHH* remains stuck in infinite recursion until
aborted.
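A minimal sketch of that structure, assuming "simulation" is modelled as direct execution; HHH_no_abort is a hypothetical non-aborting simulator, not the actual HHH:

void DDD(void);

/* hypothetical non-aborting simulator: "simulates" p by calling it directly */
int HHH_no_abort(void (*p)(void))
{
  p();           /* for p == DDD this call never returns */
  return 1;      /* never reached when p == DDD          */
}

void DDD(void)
{
  HHH_no_abort(DDD);
  return;        /* never reached                        */
}

int main(void)
{
  HHH_no_abort(DDD);  /* never returns; in practice the nested */
  return 0;           /* calls overflow the stack              */
}

Each nested "simulation" of DDD just starts another one, so no level ever reaches DDD's "return".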
Yes, HHH can't simulate itself completely. I guess no simulator can.
A simulating termination analyzer can correctly simulate
itself simulating an input that halts.
void DDD()
{
  HHH(DDD);
  return;
}
HHH correctly predicts that an unlimited emulation of
DDD by HHH would never reach the "return" instruction of DDD.
Termination analyzers / halt deciders are only required to correctly
predict the behavior of their inputs, thus the behavior of non-inputs is
outside of their domain.
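For concreteness, here is a toy sketch of that prediction, under two simplifying assumptions: "simulation" is modelled as direct execution, and "detecting the non-halting pattern" is reduced to noticing that the simulated function calls HHH on its own address again. It is not the actual HHH that runs under the x86utm emulator; the names inside_simulation_of and abort_point are illustrative only:

#include <stdio.h>
#include <setjmp.h>

static void (*inside_simulation_of)(void) = 0;
static jmp_buf abort_point;

void DDD(void);

/* returns 1: "the input halts", 0: "the input would not halt" */
int HHH(void (*p)(void))
{
  if (inside_simulation_of == p)  /* p called HHH on itself again:     */
    longjmp(abort_point, 1);      /* the repeating pattern, so abort   */

  inside_simulation_of = p;
  if (setjmp(abort_point) != 0) { /* control lands here on an abort    */
    inside_simulation_of = 0;
    return 0;                     /* predict: p never reaches "return" */
  }
  p();                            /* "simulate" p (modelled as a call) */
  inside_simulation_of = 0;
  return 1;                       /* p reached its return, so it halts */
}

void DDD(void)
{
  HHH(DDD);
  return;
}

int main(void)
{
  printf("HHH(DDD) = %d\n", HHH(DDD));  /* prints HHH(DDD) = 0 */
  return 0;
}

With this toy rule HHH(DDD) reports 0 without ever completing the nested simulations, while a direct call to DDD() still returns, because the HHH inside it aborts and hands 0 back.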
The input is just the description of D, which halts if H aborts.
DDD emulated by HHH according to the semantics of the x86
language never reaches its own "return" instruction
whether or not HHH aborts this emulation at some point;
thus this DDD never halts.
The non-input would be if D called a non-aborting simulator,
because it is not being simulated by one that doesn't abort.
We only care about the recursive construction, not your implementation
of D that does NOT call its own simulator.
*This makes the words you say below moot*
You have seen that yourself, e.g. with main() calling DDD(), or
UTM(DDD), or HHH1(DDD). [All of those simulate DDD to completion and
see DDD return. What I said earlier was that HHH(DDD) does not
simulate DDD to completion, which I think everyone recognises - it
aborts before DDD() halts.]
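A separate self-contained sketch of that distinction, assuming only what is stated above: HHH(DDD) aborts its simulation and reports non-halting, so a directly executed DDD() gets that verdict back and does reach its own "return". HHH_stub is a hypothetical stand-in for such an already-aborting HHH, not anyone's actual code:

#include <stdio.h>

static int HHH_stub(void (*p)(void))
{
  (void)p;       /* stand-in for HHH aborting its simulation of DDD */
  return 0;      /* and reporting "would not halt"                  */
}

static void DDD(void)
{
  HHH_stub(DDD);
  return;        /* reached: the directly executed DDD halts        */
}

int main(void)
{
  DDD();
  puts("DDD() returned");   /* prints: DDD halts when run directly  */
  return 0;
}

The directly executed DDD() halts precisely because the HHH inside it gives up on its simulation, which is the distinction drawn here between HHH(DDD) and the other callers.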
--
Copyright 2024 Olcott
"Talent hits a target no one else can hit; Genius hits a target no one else can see." Arthur Schopenhauer