On Thu, 03 Oct 2024 16:01:51 -0500, olcott wrote:
> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>
> On 10/3/2024 7:15 AM, Mikko wrote:
>> On 2024-10-03 02:09:39 +0000, olcott said:
>>> On 10/2/2024 5:48 PM, Richard Damon wrote:
>>>> On 10/2/24 10:39 AM, olcott wrote:
>>>>> On 10/2/2024 6:08 AM, Richard Damon wrote:
>>>>>> On 10/1/24 7:26 PM, olcott wrote:
>>>>>>> On 10/1/2024 12:58 PM, joes wrote:
>>>>>>>> On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
>>>>>>>>> On 10/1/2024 8:09 AM, joes wrote:
>>>>>>>>>> On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
>>>>>>>>>>> On 10/1/2024 7:19 AM, olcott wrote:
>>>>>>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>>>>>> It sounds like it's trained on your spam. LLMs don't know
>>>>>>>>>> anything anyway.
>>>>>>>>> *You can continue this conversation with ChatGPT*
>>>>>>>> I'll leave that to you. Naturally, you need an outside proof
>>>>>>>> for that.
>>>>>>> In the case of their evaluation of my work they are correct.
>>>>>> No, it will tell you something that matches the words you told it.
>>>>>> You feed it your objections.
>>>>> Click on the link and see how it answers this question:
>>>>> Is H a Halt Decider for D?
>>>> You should feed it our objections.
>>> It will tell you how and why you are wrong.
>> If you believe in it only when you prompt it, it is not suited as
>> an authority (fallacious anyway). You don't seem to understand what
>> Large Language Models are.
You seem to forget that LLMs know nothing of the "truth", only what
matches their training data. They are known to be liars, just like you.
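For readers outside the thread, the question "Is H a Halt Decider for D?" refers to the classical diagonal construction from the halting problem proof: D is built from a claimed decider H so that D does the opposite of whatever H predicts about it. A minimal sketch, assuming the usual construction (the names H, D come from the thread; using the string "loops" as a stand-in for actually looping forever is my simplification so the sketch can run):

```python
def make_D(H):
    """Build the diagonal program D from a claimed halt decider H.

    H(prog, inp) is assumed to return True iff prog(inp) would halt.
    """
    def D(inp):
        if H(D, inp):
            return "loops"   # H said "halts", so D loops (stand-in for a real infinite loop)
        return "halts"       # H said "loops", so D halts immediately
    return D

# Whichever fixed Boolean verdict a claimed H gives on (D, D),
# D's actual behavior contradicts it:
for verdict in (True, False):
    H = lambda prog, inp, v=verdict: v   # hypothetical decider pinned to one answer
    D = make_D(H)
    actually_halts = (D(D) == "halts")
    assert actually_halts != verdict     # H's verdict is wrong in both cases
```

This is why the thread's critics keep asking for an outside proof: no matter what answer H returns for D, the construction guarantees that answer is wrong, so a chatbot's endorsement of H cannot settle the question.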