On 10/23/24 9:51 PM, olcott wrote:
It is a brilliant genius that seems to infallibly deduce all
On 10/23/2024 6:16 PM, Richard Damon wrote:
But, it is just a stupid idiot that has been taught to repeat what
it has been told.
On 10/23/24 8:33 AM, olcott wrote:
On 10/23/2024 6:12 AM, Richard Damon wrote:
On 10/23/24 12:04 AM, olcott wrote:
On 10/22/2024 10:47 PM, Richard Damon wrote:
On 10/22/24 11:25 PM, olcott wrote:
On 10/22/2024 10:02 PM, Richard Damon wrote:
On 10/22/24 11:57 AM, olcott wrote:
On 10/22/2024 10:18 AM, joes wrote:
On Tue, 22 Oct 2024 08:47:39 -0500, olcott wrote:
On 10/22/2024 4:50 AM, joes wrote:
On Mon, 21 Oct 2024 22:04:49 -0500, olcott wrote:
On 10/21/2024 9:42 PM, Richard Damon wrote:
On 10/21/24 7:08 PM, olcott wrote:
On 10/21/2024 6:05 PM, Richard Damon wrote:
On 10/21/24 6:48 PM, olcott wrote:
On 10/21/2024 5:34 PM, Richard Damon wrote:
On 10/21/24 12:29 PM, olcott wrote:
On 10/21/2024 10:17 AM, joes wrote:
On Mon, 21 Oct 2024 08:41:11 -0500, olcott wrote:
On 10/21/2024 3:39 AM, joes wrote:

>It's not like it will deterministically regenerate the same output.

Did ChatGPT generate that? If it did then I need *ALL the input
that caused it to generate that*

"naw, I wasn't lied to, they said they were saying the truth" sure
buddy.

I asked it if what it was told was a lie and it explained how what
it was told is correct.

No, it said that given what you told it (which was a lie)

No, someone using some REAL INTELLIGENCE, as opposed to a program
using "artificial intelligence" that had been loaded with false
premises and other lies.

I specifically asked it to verify that its key assumption is
correct and it did.
>HAHAHAHAHA there isn't anything about truth in there, prove me wrong

Because Chat GPT doesn't care about lying. Of course an AI that has
been programmed with lies might repeat the lies.

When it is told the actual definition, after being told your lies,
and asked if your conclusion could be right, it said No. Thus, it
seems by your logic, you have to admit defeat, as the AI, after being
told your lies, still was able to come up with the correct answer:
that DDD will halt, and that HHH is just incorrect to say it doesn't.

ChatGPT computes the truth and you can't actually show otherwise.

Just no. Do you believe that I didn't write this myself after all?

I believe that the "output" Joes provided was fake on the basis that
she did not provide the input to derive that output and did not use
the required basis that was on the link.

I definitely typed something out in the style of an LLM instead of my
own words /s

That seems to indicate that you are admitting that you cheated when
you discussed this with ChatGPT.

Because what you are asking for is nonsense.

>Accepting your premises makes the problem uninteresting.

You cannot show that my premises are actually false. To show that
they are false would at least require showing that they contradict
each other.

If you want me to pay more attention to what you say, you first need
to return the favor, and at least TRY to find an error in what I say,
based on more than just your feeling that it can't be right. But you
can't do that, as you don't actually know any facts about the field
for which you can point to qualified references.

You gave it a faulty basis and then argued against that.
>They are also conventional within the context of software
>engineering. That software engineering conventions seem incompatible
>with computer science conventions may refute the latter.

lol

>That a halt decider must report on the behavior that it is itself
>contained within seems to be an incorrect convention.

Just because you don't like the undecidability of the halting problem?
>u32 HHH1(ptr P) // line 721
>u32 HHH(ptr P) // line 801
>
>The above two functions have identical C code except for their names.
>
>The input to HHH1(DDD) halts. The input to HHH(DDD) does not halt.
>This conclusively proves that the pathological relationship between
>DDD and HHH makes a difference in the behavior of DDD.

That makes no sense. DDD halts or doesn't either way. HHH and HHH1 may
give different answers, but then exactly one of them must be wrong.
Do they both call HHH? How does their execution differ?
>
void DDD()
{
HHH(DDD);
return;
}
>
*It is a verified fact that*
>
(a) Both HHH1 and HHH emulate DDD according to the
semantics of the x86 language.
But HHH only does so INCOMPLETELY.
>>>
(b) HHH and HHH1 have verbatim identical C source
code, except for their differing names.
So? The fact that they give different results just proves that they must have a "hidden input" that gives them that different behavior, so they can't be actual deciders.
>
Either HHH1 references itself with the name HHH1 instead of the name HHH, so it has DIFFERENT source code, or your code uses assembly to extract the address that it is running at, making that address a "hidden input" to the code.
>
So, you just proved that you never meet your basic requirements, and everything is just a lie.
>>>
(c) DDD emulated by HHH has different behavior than
DDD emulated by HHH1.
No, just less of it because HHH aborts its emulation.
>
Aborted emulation doesn't provide final behavior.
>>>
(d) Each DDD *correctly_emulated_by* any HHH that
this DDD calls cannot possibly return no matter
what this HHH does.
>
No, it cannot be emulated by that HHH to that point, but that doesn't mean that the behavior of the program DDD doesn't get there.
>
Halt Deciding / Termination Analysis is about the behavior of the program described, and thus all you are showing is that you aren't working on either of those problems, but have just been lying.
>
>
Note, your argument is using an equivocation on the term "correctly emulated": you are trying to claim a correct emulation from just a partial emulation, but also trying to claim a result that only comes from COMPLETE emulation, that of determining final behavior.
>
This, again, just proves that your whole proof is based on lies.
I hardly glanced at any of that.
*This verified fact is a key element of my point*
>
When HHH1(DDD) emulates DDD this DDD reaches its final state.
When HHH(DDD) emulates DDD this DDD cannot possibly reach its
final state.
>
But HHH aborts its emulation, and up to that point saw EXACTLY the same sequence of steps that HHH1 saw (or you have lied about them being identical and pure functions).
>
*That double talk dodges the point that I made*
>
What "Double talk"?
>
Your whole logic is just double talk.
>
You confuse your made-up fantasy for reality and lock yourself into your insanity.
>DDD emulated by HHH cannot possibly reach
>its final state no matter WTF that HHH does.
There is your Equivocation again!
>
"Reaching Final State" is a property of the execution of complete emulation of a program.
>
So, since when we look at that for a DDD that calls an HHH that returns an answer, we find it reaches such a final state, your claim is just a blatant lie. Not just an honest mistake, as you have been told the answer repeatedly, but in your total stupidity you reject the truth to keep your lies.
>
DDD emulated by HHH according to the semantics of the
x86 language cannot possibly reach its own "return"
instruction no matter WTF that HHH does.
Then your logic is just inconsistent, as HHH cannot be following the semantics of the x86 language and then do "WTF".
>
We have already been through this too many times.
I just found out that ChatGPT also has ADD. When
you hit 4000 words of input and output it starts
forgetting things. Maybe you are the same way?
>
It is freaking amazing that when you stay within
this 4000 word limit its reasoning is superb.
>
You seem to be having a hard time understanding the
above 24 words.
>
You can't seem to understand that a correct emulation of
zero to infinity steps by each element of an infinite
set of HHH emulators results in zero instances of DDD
reaching its own "return" instruction.
>
ChatGPT does completely understand this.
>
It seems you are nothing but a stupid idiot that believes what you
have told yourself.

If this was true then someone would have been able to find
All you are doing with all this talk about Chat GPT agreeing with you
is proving that you know your argument is so bad that the only thing
with any form of intelligence that will believe you is a program with
only artificial intelligence.

ChatGPT does seem to infallibly understand every nuance of the
consequences that follow from my premises. No one can show otherwise.
Sorry, you are just proving how stupid your ideas are.