Subject : Re: H(D,D) cannot even be asked about the behavior of D(D)
From : richard (at) *nospam* damon-family.org (Richard Damon)
Newsgroups : comp.theory, sci.logic
Date : 15 Jun 2024, 01:27:40
Organization : i2pn2 (i2pn.org)
Message-ID : <v4ijlc$kqh$1@i2pn2.org>
References : 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
User-Agent : Mozilla Thunderbird
On 6/14/24 9:15 AM, olcott wrote:
On 6/14/2024 6:39 AM, Richard Damon wrote:
On 6/14/24 12:13 AM, olcott wrote:
On 6/13/2024 10:44 PM, Richard Damon wrote:
On 6/13/24 11:14 PM, olcott wrote:
On 6/13/2024 10:04 PM, Richard Damon wrote:
On 6/13/24 9:39 PM, olcott wrote:
On 6/13/2024 8:24 PM, Richard Damon wrote:
On 6/13/24 11:32 AM, olcott wrote:
>
It is incumbent upon you to show the exact steps of how H computes
the mapping from the x86 machine language finite string input to
H(D,D) using the finite string transformation rules specified by
the semantics of the x86 programming language that reaches the
behavior of the directly executed D(D).
>
>
Why? I don't claim it can.
>
>
That means that H cannot even be asked the question:
"Does D halt on its input?"
>
Why not? After all, H does what it does; the PERSON we ask is the programmer.
>
>
*When H and D have a pathological relationship to each other*
There is no way to encode any H such that it can be asked:
Does D(D) halt?
>
Which just proves that Halting is non-computable.
>
>
No, it is more than that.
H cannot even be asked the question:
Does D(D) halt?
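For readers following along, the "pathological relationship" between H and D that this thread keeps returning to can be sketched as follows. This is an illustrative Python sketch, not olcott's actual x86/C code; it assumes the thread's convention that a candidate halt decider H(p, x) returns 1 for "halts" and 0 for "does not halt".

```python
# Illustrative sketch of the pathological H/D relationship under
# discussion (assumed convention: 1 = "halts", 0 = "does not halt").
# The names mirror the thread; the real H and D are x86/C programs.
def make_D(H):
    def D(x):
        if H(D, D) == 1:   # H predicts that D(D) halts ...
            while True:    # ... so D loops forever instead
                pass
        return 0           # H predicts D(D) loops, so D halts
    return D
```

Calling H(D, D) is how the question "does D(D) halt?" is posed to H; D is built so that whichever fixed verdict H returns, D's actual behavior contradicts it.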
>
No, you just don't understand the proper meaning of "ask" when applied to a deterministic entity.
>
When H and D have a pathological relationship to each
other then H(D,D) is not being asked about the behavior
of D(D). H1(D,D) has no such pathological relationship
thus D correctly simulated by H1 is the behavior of D(D).
Of course it is. The nature of the input doesn't affect the form of the question that H is supposed to answer.
If I ask you: What time is it?
and my actual unstated question is:
What is the outside temperature where you are?
Which just makes you the LIAR that you already showed you are.
H has ONE and ONLY one question it is supposed to answer, if it is a Halt decider, and that is "Does the Machine represented by your input halt when run?"
Anything else is just a lie,
Can a correct answer to the stated question be
a correct answer to the unstated question?
But asking the question you don't mean, OR answering the question you weren't asked, are just forms of lies.
Of course, a liar like you should understand that, or is the pathology making it so you can't understand that nature?
H(D,D) is not even being asked about the behavior of D(D)
I guess you are just admitting that you have been lying about what H is supposed to be for all these years.
IF it WAS a halt decider, that is EXACTLY what it is being asked about.
>
You already admitted the basis for this.
>
No, that is something different.
>
>
You keep on doing that: making claims that show the truth of the statement you are trying to disprove.
The fact you don't understand that just shows how little you understand what you are saying.
>
>
You must see this from the POV of H or you won't get it.
H cannot read your theory of computation textbooks, it
only knows what it directly sees, its actual input.
>
But H doesn't HAVE a "point of view".
>
>
When H is a simulating halt decider you can't even ask it
about the behavior of D(D). You already said that it cannot
map its input to the behavior of D(D). That means that you
cannot ask H(D,D) about the behavior of D(D).
>
Of course you can, because, BY DEFINITION, that is the ONLY thing it does with its inputs.
>
That definition might be in textbooks,
yet H does not and cannot read textbooks.
But its programmer is supposed to.
I guess you are admitting to being a failure as a programmer.
The only definition that H sees is the combination of
its algorithm with the finite string of machine language
of its input.
Which means the programmer didn't do his job.
It is impossible to encode any algorithm such that H and D
have a pathological relationship and have H even see the
behavior of D(D).
Which is what makes it impossible to build a decider that answers the question.
Which is perfectly fine, as the big question was whether it was possible to do so.
You already admitted that there is no mapping from the finite
string of machine code of the input to H(D,D) to the behavior
of D(D).
No, you are just lying again. There *IS* a mapping from the finite string input to H to the answer; it is based on the behavior of the UTM's processing of the input. If it halts, then the mapping of that string is to Yes; if it doesn't, then the mapping of that string is to No.
>
What seems to me to be the world's leading termination
analyzer symbolically executes its transformed input.
https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
>
It takes C programs and translates them into something like
generic assembly language and then symbolically executes them
to form a directed graph of their behavior. x86utm and HH do
something similar in a much more limited fashion.
>
And note, it only gives definitive answers for SOME inputs.
>
It is my understanding that it does this much better than
anyone else does. AProVE "symbolically executes the LLVM program".
The LLVM program is essentially the C program translated into
a generic assembly language.
So?
Admittedly, not perfect isn't perfect, but I guess it is better than the lying you do, claiming your incorrect answer is correct.
At least it seems it never gives a wrong answer, just sometimes says it can't answer.
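The behavior attributed to AProVE here, sound but incomplete (definitive answers for some inputs, "don't know" for the rest), can be sketched with a toy step-budgeted analyzer. This is only a placeholder illustration; AProVE's real technique (symbolically executing the LLVM program into a termination graph) is far more powerful, and the `budget` parameter is an assumption of this sketch with no counterpart in the paper.

```python
# Toy sketch of a sound-but-incomplete termination check: it may answer
# "halts" or give up with "unknown", but it never gives a wrong answer.
# Programs are modeled as generator functions so steps can be counted.
def analyze(prog, budget=1000):
    g = prog()
    for _ in range(budget):
        try:
            next(g)            # advance the program one step
        except StopIteration:
            return "halts"     # finished within the budget: definitive
    return "unknown"           # budget exhausted: refuse to guess

def finite():                  # terminates after five steps
    for i in range(5):
        yield

def forever():                 # never terminates
    while True:
        yield
```

Here `analyze(finite)` answers "halts", while `analyze(forever)` answers "unknown" rather than risk a wrong verdict.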
>
H is just a "mechanical" computation. It is a rote algorithm that does what it has been told to do.
>
>
H cannot be asked the question: Does D(D) halt?
There is no way to encode that. You already admitted
this when you said the finite string input to H(D,D)
cannot be mapped to the behavior of D(D).
>
It is asked every time it is given an input, at least if H is a halt decider.
>
If you cannot even ask H the question that you want answered then
this is not an actual case of undecidability. H does correctly
answer the actual question that it was actually asked.
But of course we can ask H the question. If it is a Halt Decider, then just giving it the input asks it the question.
I don't think you understand how programs work.
That is what halt deciders (if they exist) do.
>
When H and D are defined to have a pathological relationship
then H cannot even be asked about the behavior of D(D).
Sure it can. Just call H(D,D). That asks it the question, if H is a Halt decider.
>
It really seems like you just don't understand that deterministic automatons and willful beings are different concepts.
>
Which just shows how ignorant you are about what you talk about.
>
>
The issue is that you don't understand truthmaker theory.
You cannot simply wave your hands to get H to know
what question is being asked.
>
No, YOU don't understand Truth.
>
You understand truthmaker theory better than most experts in the field.
The best expert in the field is only pretty sure that the Liar Paradox
is not true.
>
>
If there is no possible way for H to transform its input
into the behavior of D(D) then H cannot be asked about
the behavior of D(D).
>
>
No, it says it can't do it, not that it can't be asked to do it.
>
>
It can't even be asked. You said that yourself.
The input to H(D,D) cannot be transformed into
the behavior of D(D).
>
>
No, we can't make an arbitrary problem solver, since we can show there are unsolvable problems.
>
That is a whole other different issue.
The key subset of this is that the notion of
undecidability is a ruse.
Nope. But for a LIAR like you, it may not be understandable.
Nothing says we can't encode the Halting Question into an input.
If there is no mapping from the input to H(D,D) to the behavior
of D(D) then H cannot possibly be asked about behavior that it
cannot possibly see.
But there is.
If H(D,D) returns 0, then the mapping of (D,D) is to yes.
If H(D,D) returns 1, then the mapping of (D,D) is to no.
H, being a fixed deterministic program, will do one of the two, or just fail to be the decider it needs to be.
You don't seem to understand that fact: H doesn't get to "choose" its answer; its answer to EVERY input you can give it was fixed when H was programmed.
What can't be done is create a program that gives the right answer for all such inputs.
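The two-case argument above can be made concrete with a sketch under the same assumed conventions (return 1 = "halts", 0 = "does not halt"). The helper `opposite` is hypothetical shorthand for the D-construction in this thread, not code from either poster.

```python
# Sketch of the case analysis: H's answer for (D, D) was fixed when H
# was programmed, and D is built to contradict whichever answer that is.
# (Assumed convention: return 1 = "halts", 0 = "does not halt".)
def opposite(H):
    def D(x):
        if H(D, D) == 1:   # programmed answer "halts" ...
            while True:    # ... makes D(D) loop: the answer was wrong
                pass
        return 0           # programmed answer "loops": D(D) halts instead
    return D

# Case 1: a candidate H whose fixed answer for (D, D) is 0 ("loops").
D = opposite(lambda p, x: 0)
assert D(D) == 0   # D(D) actually halts, so that fixed answer was wrong
# Case 2: a candidate answering 1 ("halts") would make D(D) run forever,
# again contradicting its own fixed answer (not executed here).
```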
>
Expecting a correct answer to the wrong question is only foolishness.
But the question is the correct question; you just don't seem to understand how programs work.
You, like normal, don't understand your requirements and capabilities.
>