On 7/20/2025 6:13 AM, Richard Damon wrote:
WRONG. (Full details in another post)
On 7/19/25 11:20 PM, olcott wrote:
On 7/19/2025 9:12 PM, Richard Damon wrote:
> On 7/19/25 5:18 PM, olcott wrote:
> On 7/19/2025 4:00 PM, Alan Mackenzie wrote:
> Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
>
[ .... ]
> ps. learn to post more respectfully.
You've hit the nail on the head, there. Peter Olcott doesn't show
respect here for anybody. Because of this he isn't shown any respect
back - he hasn't earned any. I don't think he understands the concept
of respect any more than he understands the concept of truth.
>
If he were to show respect, he'd respect knowledge, truth, and learning,
and strive to acquire these qualities. Instead he displays contempt for
them. This is a large part of what makes him a crank. It is
a large part of what makes it such a waste of time trying to correct
him, something that you've sensibly given up.
>
Now that chat bots have proven that they understand
what I am saying I can rephrase my words to be more
clear.
>
They have done no such thing, because they can't
>
Since you feed them lies, all you have done is shown that you think lies are valid logic.
I have been rude because I cannot interpret the
rebuttal to this statement as anything besides
a despicable lie for the sole purpose of sadistic
pleasure of gaslighting:
Because you are just too stupid.
>
How is the "pattern" that HHH detects a non-halting pattern, when non-halting is DEFINED by the behavior of the directly executed machine, and the pattern you are thinking of exists in the execution of the DDD that halts because it was built on the same HHH you claim is correct to return 0?
>
Thus, your claim *IS* just a lie, and you show your ignorance by saying you can't understand how it is one.
>>>
<input to chat bots>
typedef void (*ptr)();
int HHH(ptr P);

void DDD()
{
  HHH(DDD);
  return;
}

int main()
{
  HHH(DDD);
  DDD();
}

Termination Analyzer HHH simulates its input until
it detects a non-terminating behavior pattern. When
HHH detects such a pattern it aborts its simulation
and returns 0.
</input to chat bots>
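For concreteness, here is a minimal C sketch of the behavior that input
describes. It is purely hypothetical illustration: it models "simulation"
as instrumented direct execution with a nesting check, not the
instruction-level simulation in Halt7.obj, and the names are mine.

#include <stdio.h>

typedef void (*ptr)(void);

static ptr simulating = NULL;  /* input currently being "simulated"          */
static int pattern    = 0;     /* set once the recursive pattern is observed */

int HHH(ptr P)
{
    if (simulating == P) {     /* P reached HHH(P) from inside its own
                                  "simulation": the pattern described above  */
        pattern = 1;
        return 0;              /* abort this nested simulation               */
    }
    simulating = P;
    pattern    = 0;
    P();                       /* stand-in for step-by-step simulation       */
    simulating = NULL;
    return pattern ? 0 : 1;    /* 0 = pattern seen, 1 = input ran to return  */
}

void DDD(void)
{
    HHH(DDD);
    return;
}

int main(void)
{
    printf("HHH(DDD) = %d\n", HHH(DDD));  /* prints 0 under this model      */
    DDD();                                /* the direct call still returns  */
    printf("DDD() halted\n");
    return 0;
}

Compiled and run, this model prints HHH(DDD) = 0 while the direct call
to DDD() in main still returns, which is exactly the divergence the rest
of this thread argues about.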
>
Every chatbot figures out on its own that HHH
correctly rejects DDD as non-terminating because
the input to HHH(DDD) specifies recursive simulation.
>
BECAUSE YOU LIE TO THEM, and a prime training parameter is to give an answer the user is apt to like, and thus they will tend to just accept the lies and errors provided.
>
I only defined the hypothetical possibility of a simulating
termination analyzer. This cannot possibly be a lie. They
figured out all the rest on their own.
No, you stated that it DOES something that it doesn't.
>
Unlike a halt decider that must be correct on every input,
a simulating termination analyzer only needs to be correct
on at least one input.
So?
void Infinite_Recursion()
{
  Infinite_Recursion();
}

void Infinite_Loop()
{
  HERE: goto HERE;
  return;
}

void Infinite_Loop2()
{
  L1: goto L3;
  L2: goto L1;
  L3: goto L2;
}

HHH correctly determines the halt status of
the above three functions.
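As an illustrative aside (this is not olcott's detector and not the code
in Halt7.obj), the two goto loops above exhibit the simplest
non-terminating pattern a simulator can notice: a simulated state that
repeats exactly. A toy sketch over a made-up control-flow skeleton:

#include <stdio.h>

/* Toy model: a function is reduced to a skeleton where entry i holds
   the index of the next entry to execute, and -1 means "return".
   If the same entry is ever visited twice, the walk can never reach -1. */
int never_returns(const int *next, int len)
{
    char seen[64] = {0};
    int pc = 0;

    if (len <= 0 || len > 64)
        return 0;                 /* out of scope for this toy model      */

    while (pc != -1) {
        if (pc < 0 || pc >= len)
            return 0;             /* malformed skeleton: treat as unknown */
        if (seen[pc])
            return 1;             /* state repeated: unconditional cycle  */
        seen[pc] = 1;
        pc = next[pc];
    }
    return 0;                     /* reached "return"                     */
}

int main(void)
{
    int infinite_loop[]  = { 0 };        /* HERE: goto HERE                */
    int infinite_loop2[] = { 2, 0, 1 };  /* L1->L3, L2->L1, L3->L2         */
    int straight_line[]  = { 1, -1 };    /* straight-line code that returns */

    printf("Infinite_Loop  : %d\n", never_returns(infinite_loop, 1));   /* 1 */
    printf("Infinite_Loop2 : %d\n", never_returns(infinite_loop2, 3));  /* 1 */
    printf("straight line  : %d\n", never_returns(straight_line, 2));   /* 0 */
    return 0;
}

Infinite_Recursion would need call-state tracking on top of this, since
there the repetition shows up as an ever-growing stack of identical calls
rather than a repeated program counter.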
And thus you are admitting that all of that is "part of the input", since in computability theory programs can only access things that are part of their input.
Also, you imply that your "input" isn't the input that actually needs to be given, as without the code of the specific HHH that this DDD calls, no Simulating Halt Decider could do the simulation that you talk about.
Your brain damage causes you to keep forgetting
that DDD has access to all of the machine code
in Halt7.obj. I told you this dozens of times
and you already forget by the time you reply.
Look at the training regimen and trainer instructions.
It should be noted that it is a well-known property of Artificial Intelligence, and in particular of Large Language Models, that they are built not to give a "correct" answer but an answer the user will like. And thus they will pick up on the subtle clues of how things are worded to give the response that seems to be desired, even if it is just wrong.
That is an incorrect assessment of how LLM systems work
and you can't show otherwise because you are wrong.
But that is a different input.
When you add to the input the actual definition of "Non-Halting", as being that the execution of the program or its complete simulation will NEVER halt, even if carried out to an unbounded number of steps, they will give a different answer.
This is a whole other issue that I have addressed.
>
They figured out on their own that if DDD were correctly
simulated by HHH for an infinite number of steps, DDD
would never stop running.
So? You lie to them.
If you disagree with that definition, then you are admitting that you don't know the meaning of the terms-of-art of the system, but are just admitting to being the lying bastard that you are.
Two different LLM systems both agree that the halting
problem definition is wrong.
But of the SPECIFIC input they are given, which must be a PROGRAM, and thus for DDD contains the exact code of the HHH that you claim gives the correct answer, and is NOT changed to this hypothetical decider.
<ChatGPT>
The standard proof assumes a decider
H(M,x) that determines whether machine
M halts on input x.
But this formulation is flawed, because:
Turing machines can only process finite
encodings (e.g. ⟨M⟩), not executable entities
like M.
So the valid formulation must be
H(⟨M⟩,x), where ⟨M⟩ is a string.
</ChatGPT>
You cannot point out any error with that because
it is correct.
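For reference, and as standard textbook material rather than anything
established in this thread, the usual statement of the problem already
works over string encodings (using the conventional names HALT, H and D):

  HALT = { ⟨M,x⟩ : M is a Turing machine and M halts on input x }

  Theorem: no Turing machine H decides HALT.
  Proof sketch: from such an H build D which, on input ⟨M⟩, runs
  H(⟨M,⟨M⟩⟩), loops forever if H says "halts", and halts if H says
  "does not halt". Then H(⟨D,⟨D⟩⟩) is wrong either way.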
All you are doing is showing you don't understand how Artificial Intelligence actually works, showing your Natural Stupidity.
That they provided all of the reasoning why DDD correctly
simulated by HHH does not halt proves that they do have
the functional equivalent of human understanding.
But the problem is that your HHH that answers doesn't do a correct simulation.
>
All simulating termination analyzers only predict
what would happen if they did a complete simulation
on non-terminating inputs.
It has always been completely nuts to require a non-terminating
input to be simulated until its non-existent completion.
And that never was the requirement, only that it decides on the basis of what a complete and correct simulation of its exact input would do, and that will still be the DDD that calls the original HHH, not the UTM.
Right, OF THE PROGRAM whose description it was given.
Yes, if *THE* HHH is one that correctly simulates the input (that has been fixed to include the code of HHH) then that simulation will not halt and be non-halting, but that HHH never answers.
Correctly *predict* the behavior of unlimited simulation,
not actually do an infinite simulation.
Right, and the unlimited simulation of this exact program given as the input, which for D/DD/DDD calls the HHH that you claim is getting the right answer (and not changed to call the unlimited simulator), and thus that simulation does halt.
Since that input included the code for the HHH that doesn't abort, it isn't the input that any of your HHHs that do abort has been given.
Correctly *predict* the behavior of unlimited simulation,
not actually do an infinite simulation.
Thus, the reason you need to LIE about what the input is.
>>>
That everyone here denies what every first year CS student
would understand seems to prove that they know that they
are liars.
>
The problem is that a first year CS Student would see your mistake. (or would be destined to fail out of the program).
>
Your use of arguments like that is what shows that you don't understand