On 3/19/25 10:42 PM, olcott wrote:
> Because my system begins with basic facts, and actual facts can't
> contradict each other, and no contradiction can be formed by
> applying only truth-preserving operations to these basic facts,
> there are no contradictions in the system.
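What olcott describes is, in miniature, closure under a truth-preserving rule. A minimal sketch, assuming "basic facts" are atomic propositions and the only operation is modus ponens over Horn rules (every name here is illustrative, not from any actual system):

    # Illustrative sketch: close a set of atomic facts under Horn
    # rules via modus ponens, a truth-preserving operation.
    basic_facts = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),  # (premises, conclusion)
    ]

    def closure(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(closure(basic_facts, rules))
    # -> {'socrates_is_human', 'socrates_is_mortal'}

In this toy fragment no contradiction can arise because negation is not even expressible; the dispute that follows is over whether that property survives in a language rich enough to state the liar sentence.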
On 3/20/2025 6:00 AM, Richard Damon wrote:
> The liar sentence is contradictory.

On 2025-03-20 14:57:16 +0000, olcott said:
> It is self-evident that for every element of the set of human
> knowledge that can be expressed using language, undecidability
> cannot possibly exist.

On 3/21/2025 3:41 AM, Mikko wrote:
> That is not self-evident; Gödel disproved it.
olcott wrote:
> When the body of human general knowledge has all of its semantics
> encoded syntactically (AKA Montague Grammar of Semantics) then a
> proof means validation of truth.

Richard Damon wrote:
> Yes, proof is a validation of truth, but truth does not need to be
> able to be validated.

olcott wrote:
> True(X) ONLY validates that X is true and does nothing else.

Richard Damon wrote:
> Not if X is unknown (but still true).

olcott wrote:
> You must pay complete attention to ALL of my words or you get the
> meaning that I specify incorrectly.

joes wrote:
> Try explaining differently, then. What does your supposed truth
> predicate say about unknown truths?
olcott wrote:
> The body of human general knowledge that can be expressed
> using language contains zero unknown truths.

joes wrote:
> But from them, we can express unknown truths.
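The disputed question can be made concrete: a True(X) predicate evaluated against a knowledge base can answer with three values instead of two, so that an unknown-but-true X is reported as unknown rather than forced to false. A minimal sketch, with every name hypothetical:

    from enum import Enum

    class Verdict(Enum):
        TRUE = 1      # X is derivable from the knowledge base
        FALSE = 2     # the negation of X is derivable
        UNKNOWN = 3   # neither is derivable; X may still be true

    def true_predicate(x, derivable):
        if x in derivable:
            return Verdict.TRUE
        if "~" + x in derivable:
            return Verdict.FALSE
        # Forcing TRUE or FALSE here is exactly the move in dispute.
        return Verdict.UNKNOWN

    kb = {"water_is_wet", "~pigs_fly"}
    print(true_predicate("goldbach_conjecture", kb))  # Verdict.UNKNOWN

On this reading, olcott's "zero unknown truths" restricts True(X) to the derivable set, while joes and Richard Damon are asking what it says outside that set.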
olcott wrote:
> When we can express all known truths then we can
> give LLM systems the basis to get on social media
> and make all those asserting dangerous lies look
> ridiculously foolish, even to themselves.

Richard Damon wrote:
> LLMs are not "Truth Preserving" operations. PERIOD.
On 3/22/25 3:03 PM, olcott wrote:
> You are certainly correct as they currently stand.
>
> Getting from Generative AI to Trustworthy AI:
> What LLMs might learn from Cyc
> Doug Lenat, Gary Marcus
> https://arxiv.org/abs/2308.04445
>
> To the best of my recollection I derived the same idea at about
> the same time that Doug Lenat did.

On 3/22/2025 9:53 PM, Richard Damon wrote:
> And the result talked about is NOT something that qualifies for
> the term Large Language Model.

Richard Damon wrote:
> It CHANGES as we learn new things; thus a logic that calls True(x)
> false if x isn't known (even if actually true) is inconsistent.

olcott wrote:
> Actual knowledge itself has no inconsistencies by definition.
>
> We also have real-time fact checking for politicians.
> Not only will these systems be able to reject false
> statements, they will be able to instantly prove how
> they know they are false.
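"Instantly prove how they know they are false" can be read as: return not just a verdict but the refuting derivation itself. A hypothetical sketch only; no real fact-checking system or API is implied:

    # Hypothetical: each refuted claim maps to the premises that
    # contradict it, each premise tagged with its source.
    refutations = {
        "the_bridge_is_closed": [
            ("traffic_cam_shows_bridge_open", "city traffic feed"),
            ("no_closure_order_filed", "public works records"),
        ],
    }

    def fact_check(claim):
        if claim in refutations:
            return "false", refutations[claim]  # verdict plus its proof
        return "unverified", []

    verdict, proof = fact_check("the_bridge_is_closed")
    print(verdict)
    for premise, source in proof:
        print(f"  refuted by {premise} (source: {source})")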
Richard Damon wrote:
> Nope. As pointed out, the sum of all "Human Knowledge" is not a
> truth-based logic system, but is full of inconsistencies.
>
> And Tarski shows that there exist statements that it cannot
> validate. Your first problem would be getting the people you are
> trying to "fact check" to admit that your initial knowledge base
> was correct, as most of it was actually based on opinions. Yes, it
> is the generally accepted beliefs, but the people you are trying
> to persuade don't accept those beliefs, so they won't believe your
> results.

olcott wrote:
> We begin with the hypothetical body of all general knowledge
> that is expressed using language and try to find any element
> that could not be validated with a True(X).
Richard Damon wrote:
> Tarski's x, which he proved to be a valid statement, and for which
> neither result from True(x) is consistent. Your problem is that you
> just don't understand the nature of the problem, because your
> thinking is just too stupid and immature.

olcott wrote:
> If that were true you could find a counter-example. Because you
> know that it is not true, ad hominem is all that you have.
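For reference, the "Tarski's x" invoked above is the diagonal sentence of the undefinability theorem. A standard sketch of the argument (a paraphrase, not a quotation from Tarski):

    Assume a total predicate True obeying the T-schema:
        \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
    for every sentence \varphi. The diagonal lemma yields a sentence x with
        x \leftrightarrow \lnot \mathrm{True}(\ulcorner x \urcorner).
    Instantiating the T-schema at x gives
        \mathrm{True}(\ulcorner x \urcorner) \leftrightarrow \lnot \mathrm{True}(\ulcorner x \urcorner),
    a contradiction; hence no such predicate is definable within the theory itself.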