Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

Subject : Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
From : richard (at) *nospam* damon-family.org (Richard Damon)
Newsgroups : comp.theory sci.logic
Date : 08. Mar 2024, 05:46:28
Organization : i2pn2 (i2pn.org)
Message-ID : <use1mk$15q44$5@i2pn2.org>
User-Agent : Mozilla Thunderbird
On 3/7/24 7:18 PM, olcott wrote:
On 3/7/2024 8:10 PM, Richard Damon wrote:
On 3/7/24 5:52 PM, olcott wrote:
On 3/7/2024 7:34 PM, immibis wrote:
On 7/03/24 18:36, olcott wrote:
On 3/7/2024 9:50 AM, immibis wrote:
On 7/03/24 16:38, olcott wrote:
On 3/7/2024 5:44 AM, Mikko wrote:
On 2024-03-06 17:08:25 +0000, olcott said:
On 3/6/2024 3:06 AM, Mikko wrote:
On 2024-03-06 07:11:34 +0000, olcott said:
Chat GPT CAN'T understand the words; it has no programming about MEANING.

You can't find any mistakes in any of its reasoning.

*This paragraph precisely follows from its preceding dialogue*

When an input, such as the halting problem's pathological input D, is
designed to contradict every value that the halting decider H returns,
it creates a self-referential paradox that prevents H from providing a
consistent and correct response. In this context, D can be seen as
posing an incorrect question to H, as its contradictory nature
undermines the possibility of a meaningful and accurate answer.
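[For concreteness, a minimal C sketch of the construction that paragraph
describes. The signature of H is an assumption (declaration only; a
concrete H would be needed to link), not code taken from this thread;
1 means "halts" and 0 means "does not halt".]

typedef int (*ptr)();   // function-pointer type, matching this thread's C idiom

int H(ptr x, ptr y);    // assumed halt decider: does x applied to y halt?

int D(ptr x)
{
    int Halt_Status = H(x, x);  // ask H: does x applied to x halt?
    if (Halt_Status)            // H answered "halts" ...
        for (;;) ;              // ... so D loops forever
    return Halt_Status;         // H answered "does not halt" ... so D halts
}

// Whatever value H(D, D) returns, D(D) does the opposite, which is what
// "designed to contradict every value that H returns" means.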
That is essentially an agreement with the Linz proof.

*It is not an agreement with the conclusion of this proof*

Not explicitly, but it comes close enough that the final step is
trivial.

It is an agreement with why Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ gets the wrong answer.

That, too.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

The Linz proof correctly proves that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can't possibly get
the right answer, and falsely concludes that this means that H ⟨Ĥ⟩ ⟨Ĥ⟩
cannot get the correct answer.

*My H(D,D) and H1(D,D) prove otherwise*
An embedded copy of a machine is stipulated to always get the same
result as the original machine.

*Until one carefully examines the proof that this is false*

The details of the proof are specified for Turing machines. To make it
work for Olcott machines, we have to change the details. But it still
works.
No matter how much Ĥ ⟨Ĥ⟩ can screw itself up, it still must either
halt or fail to halt, and H ⟨Ĥ⟩ ⟨Ĥ⟩ can see this.

Nope. Because if H tries to keep on simulating to find the answer, it
might simulate forever and never get to give the answer.

If it stops before H^.H makes its decision (as it must), then it
doesn't know what H^ will do.

The problem is that since H^ is using the algorithm in H, it can know
the answer that H will give and do the opposite (if H does stop to
give an answer; if it doesn't, it has already failed).
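[A small, self-contained C sketch of that case analysis; "verdict"
stands in for whatever value a copy of H's algorithm would return, an
assumption used only to enumerate the possibilities mechanically.]

#include <stdio.h>

int main(void)
{
    // Enumerate both verdicts a halting copy of H could return.
    for (int verdict = 0; verdict <= 1; verdict++) {
        // H^'s back end does the opposite of its embedded copy of H.
        int h_hat_halts = !verdict;
        printf("If H returns %d (\"%s\"), H^ %s, so that verdict is wrong.\n",
               verdict,
               verdict ? "halts" : "does not halt",
               h_hat_halts ? "halts" : "loops forever");
    }
    // Third case: H never returns a verdict at all; then H fails as a
    // decider by non-termination, before H^'s back end even runs.
    return 0;
}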
An Olcott machine H (exact same TMD as the Turing machine H)
H ⟨Ĥ⟩ ⟨Ĥ⟩ <H> can trivially determine that *IT IS NOT* calling
itself in recursive simulation.

An Olcott machine Ĥ (exact same TMD as the Turing machine Ĥ)
Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ <Ĥ> can trivially determine that *IT IS* calling
itself in recursive simulation.
First, the H^ for Olcott machines will be DIFFERENT than the H^ for
Turing Machines, due to your semantic differences in the machines.

That is allowed, because the purpose of H^ is to show that an input
can be created that confounds the decider, and that confounding is
based on a simple semantic principle which will be implemented
differently for different models of computation.

That principle is: using an exact copy of the decider we are to
confound, called with exactly the inputs the decider will be used
with, find out what the decider will decide for us and do the
opposite.

This means the front end will change depending on the rules of how
machines are used and get their inputs.

And the back end will change depending on how machines give their
answer.

So the copy of H at H^.H can be given the exact same input as the
top-level H gets, since with Computations the algorithm can only
depend on the actual inputs.

Remember, the rest of H^ is antagonistic to H, and is working to
prove it wrong, so you can't try to say that it COULD detect the
condition, because it isn't designed to do that.

H^ isn't designed to "get the right answer"; it is designed to make H
get the WRONG answer.
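[Putting that decomposition together as a hedged C sketch: the name
H_hat and the decider's signature are assumptions for illustration;
only the front-end / copy / back-end structure is the point.]

typedef int (*ptr)();

int H(ptr x, ptr y);      // the decider being confounded (declared, not defined)

int H_hat(ptr x)          // plays the role of H^
{
    // Front end: construct exactly the inputs the decider will be
    // used with (here, the C calling convention: duplicate the input).
    ptr first = x, second = x;

    // Middle: an exact copy of the decider's algorithm, so it must
    // compute the same verdict the top-level H computes.
    int verdict = H(first, second);

    // Back end: do the opposite of whatever that verdict was.
    if (verdict)
        for (;;) ;        // "halts" -> never halt
    return 0;             // "does not halt" -> halt
}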

Turing machines cannot possibly do that because they inherently
have no self-awareness.

Right, ALL computation engines are "mechanical" in operation, having a
fixed set of instructions that do exactly as they are programmed to
do. And that includes your H, even as an Olcott machine.
