Re: The philosophy of computation reformulates existing ideas on a new basis ---

Subject: Re: The philosophy of computation reformulates existing ideas on a new basis ---
From: richard (at) *nospam* damon-family.org (Richard Damon)
Newsgroups: comp.theory
Date: 01 Nov 2024, 00:08:49
Organization: i2pn2 (i2pn.org)
Message-ID: <0cbdead00fd4ebddccb71e228e47f1fed1696ba4@i2pn2.org>
References: 1 2 3 4 5 6 7 8 9 10 11 12 13
User-Agent: Mozilla Thunderbird
On 10/31/24 8:50 AM, olcott wrote:
On 10/31/2024 5:49 AM, Mikko wrote:
On 2024-10-29 14:35:34 +0000, olcott said:
>
On 10/29/2024 2:57 AM, Mikko wrote:
On 2024-10-29 00:57:30 +0000, olcott said:
>
On 10/28/2024 6:56 PM, Richard Damon wrote:
On 10/28/24 11:04 AM, olcott wrote:
On 10/28/2024 6:16 AM, Richard Damon wrote:
The machine being used to compute the Halting Function has taken a finite string description, the Halting Function itself always took a Turing Machine,
>
>
That is incorrect. It has always been the case that the finite
string Turing machine description of a Turing machine is the input
to the halt decider. There has always been a distinction between
the abstraction and the encoding.
>
Nope, read the problem you have quoted in the past.
>
>
Ultimately I trust Linz the most on this:
>
the problem is: given the description of a Turing machine
M and an input w, does M, when started in the initial
configuration q0w, perform a computation that eventually halts?
https://www.liarparadox.org/Peter_Linz_HP_317-320.pdf
>
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
Linz also makes sure to ignore that the behavior of ⟨Ĥ⟩ ⟨Ĥ⟩
correctly simulated by embedded_H cannot possibly reach
either ⟨Ĥ.qy⟩ or ⟨Ĥ.qn⟩ because, like everyone else, he rejects
simulation out of hand:
>
We cannot find the answer by simulating the action of M on w,
say by performing it on a universal Turing machine, because
there is no limit on the length of the computation.
>
That statement does not fully reject simulation, but it is correct in
the observation that non-halting cannot be determined in finite time
by a complete simulation, so something else is needed instead of, or
in addition to, a partial simulation. Linz does include simulating
Turing machines in his proof that no Turing machine is a halt decider.
>
>
*That people fail to agree with this and also fail to*
*correctly point out any error seems to indicate dishonesty*
*or a lack of technical competence*
>
DDD emulated by HHH according to the semantics of the x86
language cannot possibly reach its own "return" instruction
whether or not any HHH ever aborts its emulation of DDD.
>
- irrelevant
 100% perfectly relevant within the philosophy of computation
 *THE TITLE OF THIS THREAD*
[The philosophy of computation reformulates existing ideas on a new basis ---]
 
- counterfactual
>
You can baselessly claim that verified facts are counterfactual,
but you cannot show this.
 _DDD()
[00002172] 55         push ebp      ; housekeeping
[00002173] 8bec       mov ebp,esp   ; housekeeping
[00002175] 6872210000 push 00002172 ; push DDD
[0000217a] e853f4ffff call 000015d2 ; call HHH(DDD)
[0000217f] 83c404     add esp,+04
[00002182] 5d         pop ebp
[00002183] c3         ret
Size in bytes:(0018) [00002183]
 If you don't even understand the x86 language yet claim
that I am wrong, that would make you a liar.
And that system has UNDEFINED behavior per the semantics of the x86 language, as the code at 000015d2 has not been defined per the x86 language.
If you want that code to be part of the input to HHH, then it needs to be accepted as part of the input (even if not listed, it must be accepted that the code that is there in this instance is part of the input, and thus cannot change).

 https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2
ChatGPT explains all of the details of how and why I am correct
and will vigorously argue against anyone that says otherwise.
Nope, it admits that you are wrong when you tell it to forget your lies.

 The key to getting correct reasoning from ChatGPT is to exhaustively
explain all of the details of an algorithm and its input such that
your explanation and its analysis fit within 4000 words. When you go
over that limit, it simply forgets key details and makes big mistakes.
But you lied to it, and so your argument is based on lies.
You tell it that HHH actually simulates until it proves something, but then it doesn't actually prove that fact; it only shows that the claim would seem to be true if it is assumed to be true.
Even YOU have admitted that logic like that isn't allowed.

 ChatGPT totally understands simulating termination analyzer HHH
applied to input DDD (as proven by the above link):
 void DDD()
{
   HHH(DDD);
   return;
}
 ChatGPT gets overwhelmed by this same HHH applied to DD
int DD()
{
   int Halt_Status = HHH(DD);
   if (Halt_Status)
     HERE: goto HERE;
   return Halt_Status;
}
  
