On 6/13/2024 10:44 PM, Richard Damon wrote:
No, you just don't understand the proper meaning of "ask" when applied to a deterministic entity.

On 6/13/24 11:14 PM, olcott wrote:
No, it is more than that.

On 6/13/2024 10:04 PM, Richard Damon wrote:
> On 6/13/24 9:39 PM, olcott wrote:
>> On 6/13/2024 8:24 PM, Richard Damon wrote:
>>> On 6/13/24 11:32 AM, olcott wrote:
It is contingent upon you to show the exact steps of how H computes
the mapping from the x86 machine-language finite-string input to
H(D,D), using the finite-string transformation rules specified by
the semantics of the x86 programming language, such that it reaches
the behavior of the directly executed D(D).
>
Why? I don't claim it can.
>
That means that H cannot even be asked the question:
"Does D halt on its input?"
Why not? After all, H does what it does; the PERSON we ask is the programmer.
>
*When H and D have a pathological relationship to each other*
There is no way to encode any H such that it can be asked:
Does D(D) halt?
Which just proves that Halting is non-computable.
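The "pathological relationship" under discussion is the classic diagonal construction. A minimal sketch in Python, assuming a deliberately trivial decider H (this toy H is purely illustrative; the point is that any fixed H can be forced wrong this way):

```python
# Toy sketch of the diagonal construction. The decider H below is
# a deliberate stand-in: it always answers False ("does not halt").
def H(program, data):
    # Hypothetical halt decider; whatever fixed algorithm goes here
    # can be contradicted by D below.
    return False

def D(program):
    # D does the opposite of whatever H predicts about D(program).
    if H(program, program):
        while True:      # H said "halts", so loop forever
            pass
    return "halted"      # H said "does not halt", so halt

# H(D, D) predicts that D(D) does not halt, yet D(D) halts:
print(H(D, D))  # → False
print(D(D))     # → halted
```

Swapping in an H that answers True just flips the failure: D(D) then loops forever while H claims it halts.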
>
H cannot even be asked the question:
Does D(D) halt?
You already admitted the basis for this.

No, that is something different.
Of course you can, because, BY DEFINITION, that is the ONLY thing it does with its inputs.

You keep on doing that: making claims that show the truth of the statement you are trying to disprove.

The fact that you don't understand that just shows how little you understand what you are saying.
>
When H is a simulating halt decider you can't even ask it
about the behavior of D(D). You already said that it cannot
map its input to the behavior of D(D). That means that you
cannot ask H(D,D) about the behavior of D(D).

You must see this from the POV of H or you won't get it.
H cannot read your theory of computation textbooks; it
only knows what it directly sees, its actual input.

But H doesn't HAVE a "point of view".
What seems to me to be the world's leading termination
analyzer symbolically executes its transformed input.

And note, it only gives definitive answers for SOME inputs.
https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
It takes C programs and translates them into something like
generic assembly language and then symbolically executes them
to form a directed graph of their behavior. x86utm and HH do
something similar in a much more limited fashion.
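The pipeline described above (translate, then symbolically execute into a directed graph of behavior) can be illustrated with a toy interpreter. The two-instruction "assembly" and the cycle check below are assumptions for illustration, not code from the cited tool or from x86utm:

```python
# Toy behavior-graph builder: "execute" a tiny assembly-like program
# and record each control-flow step as an edge in a directed graph.
# Revisiting a state (here just the program counter) means a cycle,
# i.e. an infinite loop in this simplified setting.
def behavior_graph(program, max_steps=100):
    edges, seen, pc = [], set(), 0
    for _ in range(max_steps):
        if pc >= len(program):
            return edges, "halts"
        if pc in seen:
            return edges, "cycle detected"
        seen.add(pc)
        op, arg = program[pc]
        nxt = arg if op == "jmp" else pc + 1  # "jmp" is the only branch
        edges.append((pc, nxt))
        pc = nxt
    return edges, "unknown"

looping  = [("nop", None), ("jmp", 0)]   # 0 -> 1 -> 0 -> ...
straight = [("nop", None), ("nop", None)]
print(behavior_graph(looping)[1])   # → cycle detected
print(behavior_graph(straight)[1])  # → halts
```

Real programs have data state as well as a program counter, so revisiting a pc alone does not prove a loop in general; that simplification is what keeps this sketch trivially decidable.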
It is every time it is given an input, at least if H is a halt decider.

H is just a "mechanical" computation. It is a rote algorithm that does what it has been told to do.

H cannot be asked the question: Does D(D) halt?
>
There is no way to encode that. You already admitted
this when you said the finite string input to H(D,D)
cannot be mapped to the behavior of D(D).
No, YOU don't understand Truth.

It really seems like you just don't understand that deterministic automatons and willful beings are different.

The issue is that you don't understand truthmaker theory.
>
Which just shows how ignorant you are about what you talk about.
>
You cannot simply wave your hands to get H to know
what question is being asked.
No, we can't make an arbitrary problem solver, since we can show there are unsolvable problems.

It can't even be asked. You said that yourself.
If there is no possible way for H to transform its input
into the behavior of D(D) then H cannot be asked about
the behavior of D(D).
>
No, it says it can't do it, not that it can't be asked to do it.
>
The input to H(D,D) cannot be transformed into
the behavior of D(D).
>