Subject: Re: Proof that DDD specifies non-halting behavior --- point by point
From: news.dead.person.stones (at) *nospam* darjeeling.plus.com (Mike Terry)
Newsgroups: comp.theory
Date: 14. Aug 2024, 16:07:43
Message-ID: <XYucnXqdgeWiVSH7nZ2dnZfqn_adnZ2d@brightview.co.uk>
References: 1 2 3 4
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.17
On 14/08/2024 08:43, joes wrote:
> On Tue, 13 Aug 2024 21:38:07 -0500, olcott wrote:
>> On 8/13/2024 9:29 PM, Richard Damon wrote:
>>> On 8/13/24 8:52 PM, olcott wrote:
>>>> A simulation of N instructions of DDD by HHH according to the
>>>> semantics of the x86 language is necessarily correct.
>>> Nope, it is just the correct PARTIAL emulation of the first N
>>> instructions of DDD, and not of all of DDD,
>> That is what I said, dufuss.
> You were trying to label an incomplete/partial/aborted simulation
> as correct.
>>>> A correct simulation of N instructions of DDD by HHH is sufficient to
>>>> correctly predict the behavior of an unlimited simulation.
>>> Nope, if a HHH returns to its caller,
>> *Try to show exactly how DDD emulated by HHH returns to its caller*
> How *HHH* returns
> (the first one doesn't even have a caller).
>> Use the above machine language instructions to show this.
> HHH simulates DDD        enter the matrix
> DDD calls HHH(DDD)       Fred: could be eliminated
> HHH simulates DDD        second level
> DDD calls HHH(DDD)       recursion detected
> HHH aborts, returns      outside interference
> DDD halts                voila
> HHH halts
You're misunderstanding the scenario. If your simulated HHH aborts its simulation [the "HHH aborts, returns" line above], then the outer-level HHH would have aborted its identical simulation earlier. You know that, right? [It's what people have been discussing here endlessly for the last few months! :) ]

So your trace is impossible...
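The objection can be sketched as a toy counting model (illustrative Python only; the function name and the abort rule "stop on the second observed call to HHH(DDD)" are assumptions for the sketch, not anything from the thread). Every simulation level runs the same abort test over its own trace, and the outer level has been watching the trace longer than any inner level, so the outer level's count reaches the threshold first:

```python
# Toy model: each "DDD calls HHH(DDD)" event is observed by every
# simulation level that already exists, and then spawns one deeper
# level. Every level applies the SAME (assumed) abort rule: abort on
# the second call it observes. Which level aborts first?

def first_level_to_abort(max_events=10):
    counts = {}                      # level -> calls observed so far
    for event in range(max_events):  # successive "DDD calls HHH(DDD)" events
        for level in counts:         # every existing level observes it
            counts[level] += 1
            if counts[level] == 2:   # abort rule triggers
                return level
        counts[event] = 0            # the call also spawns a deeper level
    return None

print(first_level_to_abort())  # prints 0: the OUTER level aborts first
```

Because every level applies an identical rule to a trace it started observing earlier than any level beneath it, an inner level can never be the first to abort; a trace in which the inner HHH does the aborting cannot occur.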
Mike.