Re: H(D,D) cannot even be asked about the behavior of D(D)

Subject : Re: H(D,D) cannot even be asked about the behavior of D(D)
From : noreply (at) *nospam* example.com (joes)
Newsgroups : comp.theory
Date : 15. Jun 2024, 13:34:39
Organization : i2pn2 (i2pn.org)
Message-ID : <v4ju8f$222a$1@i2pn2.org>
References : 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
User-Agent : Pan/0.145 (Duplicitous mercenary valetism; d7e168a git.gnome.org/pan2)
On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
On 6/14/2024 10:54 AM, joes wrote:
On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
On 6/14/2024 6:39 AM, Richard Damon wrote:
On 6/14/24 12:13 AM, olcott wrote:
On 6/13/2024 10:44 PM, Richard Damon wrote:
On 6/13/24 11:14 PM, olcott wrote:
On 6/13/2024 10:04 PM, Richard Damon wrote:
On 6/13/24 9:39 PM, olcott wrote:
On 6/13/2024 8:24 PM, Richard Damon wrote:
On 6/13/24 11:32 AM, olcott wrote:

When H and D have a pathological relationship to each other, then
H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
such pathological relationship, thus D correctly simulated by H1
exhibits the behavior of D(D).
What is H1 asked?
H is asked whether its input halts, and by definition should give the
(right) answer for every input.
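In textbook form: a (total) halt decider H computes H(⟨P⟩,⟨I⟩) = 1 if
P(I) halts and H(⟨P⟩,⟨I⟩) = 0 if it does not, for every program P and
every input I.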
If we used that definition of decider, then no human ever decided
anything, because every human has made at least one mistake.
Yes. Humans are not machines.
I use the term "termination analyzer" as a close fit. The term
"partial halt decider" is more accurate yet confuses most people.
I have not seen you use that term before. You have not called it partial.
That was confusing.

D by construction is pathological to the supposed decider it is
constructed from. H1 cannot decide its own D1. For every "decider" we
can construct a pathological program whose halting it cannot decide.
No decider decides every input.
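A minimal sketch of that construction in C (the stub H here is a
stand-in that gives one fixed answer; the argument works against any
candidate decider you substitute for it):

    typedef int (*ptr)();   /* function-pointer type for self-application */

    /* Stand-in candidate decider: always answers 1, "the input halts". */
    int H(ptr P, ptr I) { return 1; }

    int D(ptr p)
    {
        if (H(p, p))        /* if H claims D(D) halts...       */
            while (1) ;     /* ...then loop forever instead,   */
        return 0;           /* ...otherwise halt immediately.  */
    }

    int main(void)
    {
        return D((ptr)D);   /* with this H, D(D) never returns */
    }

Change the stub to return 0 and D(D) halts instead; either way D does
the opposite of what H reported.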
Parroting what you memorized by rote is not very deep understanding.
This was my own phrasing. Can you explain the halting problem proof?
Understanding that the halting problem counter-example input, which
does the opposite of whatever value the halt decider returns, is
merely the Liar Paradox in disguise is a much deeper understanding.
I know that.

H(D,D) is not even being asked about the behavior of D(D)
It can't be asked any other way.
It can't be asked in any way whatsoever because it is already being
asked a different question.
Is that question "Do you answer yes?"?

When H is a simulating halt decider you can't even ask it about the
behavior of D(D). You already said that it cannot map its input to
the behavior of D(D). That means that you cannot ask H(D,D) about
the behavior of D(D).
Of course you can, because, BY DEFINITION, that is the ONLY thing it
does with its inputs.
That definition might be in textbooks,
yet H does not and cannot read textbooks.
That is very confusing. H still has to adhere to the textbook
definition.
No, the textbooks have it wrong.


The only definition that H sees is the combination of its algorithm
with the finite string of machine language of its input.
H does not see its own algorithm; it only follows its internal
programming. A machine and input completely determine the behaviour,
whether that is D(D) or H(D, D).
No H (with a pathological relationship to D) can possibly see the
behavior of D(D).
That is not a problem with D, but with H not being total.

It is impossible to encode any algorithm such that H and D have a
pathological relationship and have H even see the behavior of D(D).
H literally gets it as input.
The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP.
It does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.
There is no difference. If an H exists, it gives one answer. D then does
the opposite. H cannot change its answer. Other analysers can see that
H gives the wrong answer.

You already admitted that there is no mapping from the finite string
of machine code of the input to H(D,D) to the behavior of D(D).
Which means that H can't simulate D(D). Other machines can do so.
H cannot simulate D(D) for the same reason that, given
int sum(int x, int y) { return x + y; }, the call sum(3,4) cannot
return the sum of 5 + 6.
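Pinned down as a complete (trivial) C program, just to make the
analogy concrete:

    #include <stdio.h>

    int sum(int x, int y) { return x + y; }

    int main(void)
    {
        /* The result is determined by the actual arguments: 7.   */
        /* No call with arguments 3 and 4 can yield 5 + 6 == 11.  */
        printf("%d\n", sum(3, 4));
        return 0;
    }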

And note, it only gives definitive answers for SOME input.
It is my understanding that it does this much better than anyone
else does. AProVE "symbolically executes the LLVM program".
Better doesn't cut it. H should work for ALL programs, especially for
D.
You don't even have a slight clue about termination analyzers.
Why do you say that? A (partial) termination analyser does not
refute the halting problem proof.

H cannot be asked the question "Does D(D) halt?"
There is no way to encode that. You already admitted this when you
said the finite string input to H(D,D)
cannot be mapped to the behavior of D(D).
H answers that question for every other input.
The question "What is your answer/Is your answer right?" is pointless
and not even computed by H.
It is ridiculously stupid to think that the pathological relationship
between H and D cannot possibly change the behavior of D, especially
when it has been conclusively proven that it DOES CHANGE THE BEHAVIOR
OF D.
D as a machine is completely specified and a valid Turing machine:
it asks a supposed decider if it halts, and then does the opposite,
making the decider wrong.
Other deciders than the one it calls can simulate or decide it.
D has exactly one fixed behaviour, like all TMs.
The behaviour of H should change because of the recursion, but it has to
make up its mind. D goes "I'm gonna do the opposite of what you said".

If you cannot even ask H the question that you want answered, then
this is not an actual case of undecidability. H does correctly answer
the actual question that it was actually asked.
That would be the wrong question.
D(D) is a valid input. H should be universal.
Likewise the Liar Paradox *should* be true or false,
except for the fact that it isn't.

When H and D are defined to have a pathological relationship then H
cannot even be asked about the behavior of D(D).
H cannot give a correct ANSWER about D(D).
H cannot be asked the right question.
Then H would be faulty.

You cannot simply wave your hands to get H to know what
question is being asked.
H doesn't need to know. It is programmed to answer a fixed question,
and the input completely determines the answer.
The fixed question that H is asked is:
Can your input terminate normally?
Does the input terminate, rather.
The answer to that question is: NO.
If that were so, this answer would be given to D, since D asks H
about itself; in that case D would actually terminate. If H said yes,
D would go into an infinite loop.
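Spelled out as the two cases (assuming the usual convention that 0
means "does not halt" and 1 means "halts"; the thread does not fix a
convention, so that part is an assumption):

    H(D,D) == 0 ("D(D) does not halt") -> D skips its loop and halts -> H wrong
    H(D,D) == 1 ("D(D) halts")         -> D loops forever            -> H wrong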

It can't even be asked. You said that yourself.
The input to H(D,D) cannot be transformed into the behavior of D(D).
It can, just not by H.
How crazy is it to expect a correct answer to a different question than
the one you asked?

No, we can't make an arbitrary problem solver, since we can show
there are unsolvable problems.
That is a whole different issue.
The key subset of this is that the notion of undecidability is a ruse.
A ruse for what?
There are undecidable problems. Like halting.

Nothing says we can't encode the Halting Question into an input.
If there is no mapping from the input to H(D,D) to the behavior of
D(D), then H cannot possibly be asked about behavior that it cannot
possibly see.
It can be asked and be wrong.

What can't be done is create a program that gives the right answer
for all such inputs.
Expecting a correct answer to the wrong question is only foolishness.
The question is just whether D(D) halts.
Where do you disagree with the halting problem proof?
There are several different issues; the key one of these issues [...]
is that there is something wrong with it, along the lines of it being
isomorphic to the Liar Paradox.
"Something along the lines"? Can you point to the step where you disagree?

Thanks for your extended reply.

--
joes
