Re: H(D,D) cannot even be asked about the behavior of D(D)

Subject : Re: H(D,D) cannot even be asked about the behavior of D(D)
From : richard (at) *nospam* damon-family.org (Richard Damon)
Newsgroups : comp.theory
Date : 15 Jun 2024, 01:27:42
Organization : i2pn2 (i2pn.org)
Message-ID : <v4ijle$kqh$2@i2pn2.org>
User-Agent : Mozilla Thunderbird
On 6/14/24 1:39 PM, olcott wrote:
On 6/14/2024 10:54 AM, joes wrote:
Am Fri, 14 Jun 2024 08:15:52 -0500 schrieb olcott:
On 6/14/2024 6:39 AM, Richard Damon wrote:
On 6/14/24 12:13 AM, olcott wrote:
On 6/13/2024 10:44 PM, Richard Damon wrote:
On 6/13/24 11:14 PM, olcott wrote:
On 6/13/2024 10:04 PM, Richard Damon wrote:
On 6/13/24 9:39 PM, olcott wrote:
On 6/13/2024 8:24 PM, Richard Damon wrote:
On 6/13/24 11:32 AM, olcott wrote:
H cannot even be asked the question: Does D(D) halt?
No, you just don't understand the proper meaning of "ask" when applied
to a deterministic entity.
When H and D have a pathological relationship to each other, then H(D,D)
is not being asked about the behavior of D(D). H1(D,D) has no such
pathological relationship, thus D correctly simulated by H1 is the
behavior of D(D).
H is asked whether its input halts, and by definition should give the
(right) answer for every input.
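That is, the required mapping is H(⟨M⟩, w) = 1 if M(w) halts and
H(⟨M⟩, w) = 0 if it does not, for every machine M and every input string w.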
 If we used that definition of decider then no human ever decided
anything because every human has made at least one mistake.
But Humans are NOT deciders in the Computation Theory sense, because we don't run deterministic algorithms.
This seems to be part of your fundamental problem: you just don't know what you are talking about, and don't understand the difference between willful beings and deterministic algorithms.

I use the term "termination analyzer" as a close fit. The term
partial halt decider is more accurate yet confuses most people.
A partial halt decider is a halt decider with a limited domain.
 
D by construction is pathological to the supposed decider it is
constructed on. H1 cannot decide D1. For every "decider" we can construct
an undecidable pathological program. No decider decides every input.
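Concretely, that construction can be sketched in C (a minimal sketch: the
stub body of H is only a placeholder assumption, since the diagonal
argument defeats any fixed algorithm substituted there):

#include <stdio.h>

typedef void (*prog)(void *);

/* Stand-in for some claimed halt decider: returns 1 if it predicts
   that p halts when run on input i, 0 if it predicts looping.
   The body here is a placeholder; the construction below defeats
   any fixed algorithm that could be written in its place. */
int H(prog p, void *i) { (void)p; (void)i; return 1; }

/* The diagonal program: does the opposite of whatever H predicts
   about D applied to its own description. */
void D(void *p)
{
    if (H((prog)p, p))   /* H says "halts" -> D loops forever */
        for (;;) ;
}                        /* H says "loops" -> D returns       */

int main(void)
{
    /* Whatever value H returns here, D(D) does the opposite,
       so no fixed H answers this particular input correctly. */
    printf("H(D,D) = %d\n", H(D, (void *)D));
    return 0;
}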
 Parroting what you memorized by rote is not very deep understanding.
 Understanding that the halting problem counter-example input that
does the opposite of whatever value the halt decider returns is
merely the Liar Paradox in disguise is a much deeper understanding.
 
Can a correct answer to the stated question be a correct answer to the
unstated question?
H(D,D) is not even being asked about the behavior of D(D).
It can't be asked any other way.

It can't be asked in any way whatsoever because it is
already being asked a different question.
 
When H is a simulating halt decider you can't even ask it about the
behavior of D(D). You already said that it cannot map its input to the
behavior of D(D). That means that you cannot ask H(D,D) about the
behavior of D(D).
Of course you can, because, BY DEFINITION, that is the ONLY thing it
does with its inputs.
That definition might be in textbooks,
yet H does not and cannot read textbooks.
That is very confusing. H still adheres to textbooks.

No, the textbooks have it wrong.
 
The only definition that H sees is the combination of its algorithm with
the finite string of machine language of its input.
 
H does not see its own algorithm; it only follows its internal
programming. A machine and input completely determine the behaviour,
whether that is D(D) or H(D, D).
 No H (with a pathological relationship to D) can possibly see the behavior of D(D).
 
It is impossible to encode any algorithm such that H and D have a
pathological relationship and have H even see the behavior of D(D).
H literally gets it as input.
 The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP;
it does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.
 
You already admitted that there is no mapping from the finite string of
machine code of the input to H(D,D) to the behavior of D(D).
Which means that H can't simulate D(D). Other machines can do so.
H cannot simulate D(D) for the same reason that

    int sum(int x, int y) { return x + y; }

sum(3,4) cannot return the sum of 5 + 6.
 
And note, it only gives definitive answers for SOME inputs.
It is my understanding that it does this much better than anyone else
does. AProVE "symbolically executes the LLVM program".
Better doesn't cut it. H should work for ALL programs, especially for D.
 You don't even have a slight clue about termination analyzers.
 
H is just a "mechanical" computation. It is a rote algorithm that
does what it has been told to do.
H cannot be asked the question: Does D(D) halt?
There is no way to encode that. You already admitted this when you
said the finite string input to H(D,D)
cannot be mapped to the behavior of D(D).
H answers that question for every other input.
The question "What is your answer/Is your answer right?" is pointless
and not even computed by H.
It is ridiculously stupid to think that the pathological
relationship between H and D cannot possibly change the
behavior of D, especially when it has been conclusively
proven that it DOES CHANGE THE BEHAVIOR OF D.
 
It is every time it is given an input, at least if H is a halt decider.
If you cannot even ask H the question that you want answered then this
is not an actual case of undecidability. H does correctly answer the
actual question that it was actually asked.
D(D) is a valid input. H should be universal.
 Likewise the Liar Paradox *should* be true or false,
except for the fact that it isn't.
 
That is what halt deciders (if they exist) do.
When H and D are defined to have a pathological relationship then H
cannot even be asked about the behavior of D(D).
H cannot give a correct ANSWER about D(D).
 H cannot be asked the right question.
 
It really seems like you just don't understand the difference between
deterministic automata and willful beings.
You cannot simply wave your hands to get H to know what
question is being asked.
H doesn't need to know. It is programmed to answer a fixed question,
and the input completely determines the answer.
 The fixed question that H is asked is:
Can your input terminate normally?
The answer to that question is: NO.
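As a toy sketch of the simulate-and-abort scheme under discussion (a
direct call stands in for instruction-level simulation, and the
repeated-call test shown is an illustrative assumption, not anyone's
actual code):

#include <stdio.h>

typedef void (*prog)(void *);

static prog  watched_p;   /* the arguments H is currently deciding */
static void *watched_i;
static int   aborted;

/* Toy "simulating halt decider": it runs its input (a direct call
   standing in for simulation) and treats a repeated call of H on the
   same arguments as "the input cannot terminate normally".
   That detection criterion is an illustrative assumption. */
int H(prog p, void *i)
{
    if (p == watched_p && i == watched_i) {
        aborted = 1;          /* recursive pattern detected: give up */
        return 0;
    }
    watched_p = p;
    watched_i = i;
    aborted = 0;
    p(i);                     /* "simulate" p on i */
    return aborted ? 0 : 1;   /* 0 = judged non-halting */
}

void D(void *p)
{
    if (H((prog)p, p))        /* do the opposite of H's verdict */
        for (;;) ;
}

int main(void)
{
    printf("H(D,D) = %d\n", H(D, (void *)D));   /* prints 0      */
    watched_p = 0; watched_i = 0;               /* reset the toy */
    D((void *)D);             /* returns, i.e. D(D) halts anyway */
    printf("D(D) halted\n");
    return 0;
}

Note that when D(D) is then run directly it halts, even though H(D,D)
reported 0; that divergence is exactly what this thread is arguing about.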
 
It can't even be asked. You said that yourself.
The input to H(D,D) cannot be transformed into the behavior of D(D).
It can, just not by H.
 How crazy is it to expect a correct answer to a
different question than the one you asked?
 
No, we can't make an arbitrary problem solver, since we can show there
are unsolvable problems.
That is a whole different issue.
The key subset of this is that the notion of undecidability is a ruse.
A ruse for what?
Nothing says we can't encode the Halting Question into an input.
If there is no mapping from the input to H(D,D) to the behavior of D(D)
then H cannot possibly be asked about behavior that it cannot possibly
see.
It can be asked and be wrong.
What can't be done is create a program that gives the right answer for
all such inputs.
Expecting a correct answer to the wrong question is only foolishness.
The question is just whether D(D) halts.
Where do you disagree with the halting problem proof?
There are several different issues. The key one, which two PhD
computer science professors agree with me on, is that there is
something wrong with the proof, along the lines of it being
isomorphic to the Liar Paradox.
 
