Subject: Re: How could HHH report on the behavior of its caller?
From: richard (at) *nospam* damon-family.org (Richard Damon)
Newsgroups: comp.theory
Date: 15 May 2025, 11:58:53
Organization: i2pn2 (i2pn.org)
Message-ID: <5cd51b943069c689ce125219be58d12908bcf936@i2pn2.org>
References: 1 2 3 4 5
User-Agent: Mozilla Thunderbird
On 5/14/25 11:42 PM, olcott wrote:
On 5/14/2025 10:14 PM, Richard Damon wrote:
On 5/14/25 11:02 PM, olcott wrote:
On 5/14/2025 9:54 PM, Richard Damon wrote:
On 5/14/25 9:30 PM, olcott wrote:
void DDD()
{
  HHH(DDD);
  return;
}
>
int main()
{
  DDD();
}
>
If HHH cannot report on the behavior of its caller because that is a ridiculous requirement, then how can HHH report on the direct execution of DDD() (AKA its caller)?
>
>
Because it is given the code of DDD, and thus doesn't need to know about "its caller"
>
>
Unless it does know about its caller (the only directly executed DDD() that actually exists), no HHH can possibly know about any directly executed DDD().
>
>
>
So, how does it know that?
>
How does that knowledge affect the answer?
>
The HHH relative to any directly executed DDD is only the HHH called by this DDD. It has always been stupid to require HHH to report on its caller. The requirement for a halt decider to report on the direct execution of its input is proven to be stupidly wrong whenever this input calls this decider, because no C function can report on its caller.
All you are doing is proving you don't understand what you are talking about.
The "Directly Execution of DDD" is in one respect just a theoretical concept, we don't need to have actually executed it in the past, or even in the future. It is what that sequence of algorithmic steps, when/if we perform them, would do. Because those algorithmic steps are precisely and definitively defined, that behavior is fully determined, and that result is independent of if we actually run it or not.
It seems this is just too abstract for your tiny mind: that something can have a definite truth value even if we don't know it.
Thus, HHH is not being asked about its caller; it is being asked about its input, which is a valid question. That input, if properly formed from the program in question, will fully define the algorithm of the program it represents, and thus fully specifies what that program will do when it is run.
This is one reason DDD can't just be a bare non-leaf function: such a thing isn't a program with all of its algorithm specified, and thus doesn't have halting behavior, and we can't form an input that describes the algorithm if the algorithm isn't there in the program.
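To make that concrete, here is a minimal hedged sketch (my stand-in names and verdict, not olcott's actual code): DDD by itself won't even link, because HHH is undefined; only once some concrete HHH is supplied does "the behavior of DDD" become a defined thing at all.

#include <stdio.h>

typedef void (*func_ptr)(void);

/* Stand-in decider, NOT olcott's HHH: it just reports "halts"
   without simulating anything.  The point is only that DDD has
   no defined behavior until some concrete HHH is linked in. */
static int HHH(func_ptr p)
{
  (void)p;
  return 1;              /* stand-in verdict: "halts" */
}

void DDD(void)
{
  HHH(DDD);              /* DDD's behavior is whatever this HHH does */
  return;
}

int main(void)
{
  DDD();                 /* with this stand-in HHH, DDD plainly halts */
  printf("DDD() returned\n");
  return 0;
}

Delete the stand-in HHH and the translation unit no longer links; swap in a different HHH and you have a different program.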
Yes, in the somewhat ill-defined system you are using, we could execute DDD, and it would start to try to use instructions that were not part of it, at which point we need to have defined what that means.
Either we say that DDD is strictly the code we gave, and nothing else, at which point the "execution" of DDD just fails to be defined, as we have violated our definition of what DDD was.
Or we can say we allow it to access that memory, at which point, to build up the representation of the algorithm of DDD, we need to include the contents of that memory, or to have stipulated the exact behavior that code will have (in effect, including that code in the axioms of your system). Both of these mean that you can't change that code, as it has been fixed by your input or by the axioms of the system.
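A hedged illustration of that last point (again with made-up stand-in names, and with DDD taking its decider as a parameter only so both variants fit in one demo): the same DDD text built against two different HHH bodies is two different programs with two different behaviors, which is exactly why the code of HHH has to be fixed by the input or by the axioms before "the behavior of DDD" means anything.

#include <stdio.h>

typedef void (*thunk)(void);
typedef int  (*decider)(thunk);

/* Two different stand-in HHH bodies (neither is olcott's HHH). */
static int HHH_returns(thunk p) { (void)p; return 1; }
static int HHH_loops(thunk p)   { (void)p; for (;;) { } }

/* HHH(NULL) stands in for the HHH(DDD) call of the original. */
static void DDD_built_with(decider HHH)
{
  HHH(NULL);
}

int main(void)
{
  DDD_built_with(HHH_returns);            /* this variant halts */
  printf("DDD built with HHH_returns halted\n");

  /* DDD_built_with(HHH_loops); */        /* this variant never would */
  return 0;
}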
Thus, your HHH doesn't actually do a correct simulation of its input (you have shown that, in the world where HHH was defined to be a non-aborting simulator, it never answers, so it can't be the same HHH that you now define as answering), and so "HHH answering about the correct simulation of its input by HHH" is nonsense, referring to a condition that doesn't happen. It would need some memory locations to hold two different contents at once to make the one program HHH have two different behaviors.
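The non-aborting case can be sketched too (hedged stand-in code that models "simulation" as direct re-execution, with a depth cutoff only so the demo terminates): a pure simulator handed DDD has to simulate the HHH(DDD) call inside DDD, which means simulating DDD again, and so on, so it never gets around to returning a verdict.

#include <stdio.h>

typedef void (*func_ptr)(void);

static int depth = 0;

static void DDD(void);

/* Stand-in for a pure, never-aborting simulating HHH.  "Simulation"
   is modelled here as direct re-execution of the input; the depth
   cutoff exists only so this demonstration terminates. */
static int pure_simulating_HHH(func_ptr p)
{
  printf("nesting level %d: simulating DDD...\n", ++depth);
  if (depth >= 5) {
    printf("(a real never-aborting simulator would continue forever)\n");
    return 0;            /* never reached without the artificial cutoff */
  }
  p();                   /* "simulate" the input by running it */
  return 1;
}

static void DDD(void)
{
  pure_simulating_HHH(DDD);
}

int main(void)
{
  DDD();
  return 0;
}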
Sorry, you are just showing how broken your logic actually is.