Sujet : Re: DDD correctly emulated by HHH is correctly rejected as non-halting.
From : noreply (at) *nospam* example.org (joes)
Newsgroups : comp.theory
Date : 11 Jul 2024, 15:25:45
Organisation : i2pn2 (i2pn.org)
Message-ID : <ea8aa365d662f11cf1ae48d59cf9b7dd95d8edc8@i2pn2.org>
References : 1 2 3 4 5
User-Agent : Pan/0.145 (Duplicitous mercenary valetism; d7e168a git.gnome.org/pan2)
On Thu, 11 Jul 2024 09:10:24 -0500, olcott wrote:
On 7/11/2024 1:25 AM, Mikko wrote:
On 2024-07-10 17:53:38 +0000, olcott said:
On 7/10/2024 12:45 PM, Fred. Zwarts wrote:
On 10 Jul 2024 at 17:03, olcott wrote:
void DDD()
{
HHH(DDD);
}
int main()
{
HHH(DDD);
}
Unneeded complexity. It is equivalent to:
int main()
{
return HHH(main);
}
Every time any HHH correctly emulates DDD it calls the x86utm
operating system to create a separate process context with its own
memory, virtual registers, and stack; thus each recursively emulated DDD
is a different instance.
However, each of those instances has the same sequence of instructions,
to which the x86 language assigns the same operational meaning.
*That is counter-factual*
Contradicting yourself? "Counterfactual" usually means "if it were
different".
When DDD is correctly emulated by HHH according to the semantics of the
x86 programming language, HHH must abort its emulation of DDD, or both
HHH and DDD never halt.
If the recursive call to HHH from DDD halts, the outer HHH doesn't need
to abort. DDD depends totally on HHH; it halts exactly when HHH does.
Which it does, because it aborts.
When DDD is correctly emulated by HHH1 according to the semantics of the
x86 programming language, HHH1 need not abort its emulation of DDD
because HHH has already done this.
Where does HHH figure into this? It is not the simulator here.
The behavior of DDD emulated by HHH1 is identical to the behavior of the
directly executed DDD().
At last!
--
On Fri, 28 Jun 2024 16:52:17 -0500, olcott wrote: Objectively I am a genius.