On 3/26/2025 6:04 AM, Richard Damon wrote:
On 3/25/25 10:55 PM, olcott wrote:
On 3/25/2025 8:47 PM, Richard Damon wrote:
On 3/25/25 9:28 PM, olcott wrote:
On 3/25/2025 8:00 PM, Richard Damon wrote:
On 3/25/25 10:32 AM, olcott wrote:
On 3/25/2025 5:03 AM, Mikko wrote:
On 2025-03-22 17:49:01 +0000, olcott said:
On 3/22/2025 11:38 AM, Mikko wrote:
On 2025-03-22 03:03:39 +0000, olcott said:
On 3/21/2025 9:31 PM, Richard Damon wrote:
On 3/21/25 9:24 PM, olcott wrote:
On 3/21/2025 7:50 PM, Richard Damon wrote:
On 3/21/25 8:40 PM, olcott wrote:
On 3/21/2025 6:49 PM, Richard Damon wrote:
On 3/21/25 8:43 AM, olcott wrote:
On 3/21/2025 3:41 AM, Mikko wrote:
On 2025-03-20 14:57:16 +0000, olcott said:
On 3/20/2025 6:00 AM, Richard Damon wrote:
On 3/19/25 10:42 PM, olcott wrote:

It is stipulated that analytic knowledge is limited to the
set of knowledge that can be expressed using language or
derived by applying truth preserving operations to elements
of this set.
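A minimal sketch of what "truth preserving operations" means, using modus ponens as the example (illustrative only, not any poster's actual system): whenever both premises are true, the conclusion is forced to be true, which can be checked exhaustively for propositional inputs.

# Illustrative sketch: modus ponens as a truth preserving operation.
# Exhaustively verify that true premises never yield a false conclusion.
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p, q in product([False, True], repeat=2):
    if p and implies(p, q):   # both premises true...
        assert q              # ...forces the conclusion true
print("modus ponens is truth preserving")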
That just means that you have stipulated yourself out of all classical logic, since Truth is different from Knowledge. In a good logic system, Knowledge will be a subset of Truth, but you have defined that in your system, Truth is a subset of Knowledge, so you have it backwards.
>
True(X) always returns TRUE for every element in the set
of general knowledge that can be expressed using language.
It never gets confused by paradoxes.
Not useful unless it returns TRUE for no X that contradicts anything
that can be inferred from the set of general knowledge.
>
I can't parse that.
> (a) Not useful unless
> (b) it returns TRUE for
> (c) no X that contradicts anything
> (d) that can be inferred from the set of general knowledge.
>
Because my system begins with basic facts, because actual
facts can't contradict each other, and because no
contradiction can be formed by applying only truth
preserving operations to these basic facts, there are no
contradictions in the system.
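A toy sketch of that consistency claim, assuming hypothetical placeholder facts and rules: close the basic facts under a single inference rule and mechanically verify that no statement and its negation both appear.

# Toy sketch (hypothetical facts): close a consistent base set under
# modus ponens and check that the closure contains no contradiction.
facts = {"it_is_raining"}
rules = [("it_is_raining", "the_ground_is_wet")]  # (premise, conclusion)

closure = set(facts)
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in closure and conclusion not in closure:
            closure.add(conclusion)
            changed = True

# A contradiction would be some P whose negation "not_P" is also derived.
assert not any("not_" + p in closure for p in closure)
print(sorted(closure))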
>
>
No, your system doesn't, because you don't actually understand what you are trying to define.
>
"Human Knowledge" is full of contradictions and incorrect statements.
>
Admittedly, most of them can be resolved by properly putting the statements into context, but the problem is that for some statements the context isn't precisely known, or the statement is known to be an approximation of unknown accuracy, so it doesn't actually specify a "fact".
It is self-evident that for every element of the set of human
knowledge that can be expressed using language, undecidability
cannot possibly exist.
>
>
So, you admit you don't know what it means to prove something.
>
When the proof is only syntactic then it isn't directly
connected to any meaning.
But Formal Logic proofs ARE just "syntactic".
>
When the body of human general knowledge has all of its
semantics encoded syntactically, AKA Montague Grammar
semantics, then a proof means validation of truth.
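A toy illustration of semantics encoded compositionally in the Montague style (a simplified sketch, far smaller than an actual Montague grammar): each word denotes a function, and the meaning of a sentence is computed by function application along the parse.

# Toy Montague-style fragment (illustrative only): word meanings are
# functions; sentence meaning is function application over the syntax.
john = "john"                                   # entity
sleeps = lambda x: f"sleep({x})"                # intransitive verb: e -> t
loves = lambda y: lambda x: f"love({x},{y})"    # transitive verb: e -> e -> t

print(sleeps(john))          # "John sleeps"     -> sleep(john)
print(loves("mary")(john))   # "John loves Mary" -> love(john,mary)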
Yes, proof is a validation of truth, but truth does not need to be validatable.
>
True(X) ONLY validates that X is true and does nothing else.
We can believe the "nothing else" part. The rest would require a proof.
>
True(X) is a predicate implementing a membership algorithm
for the body of general knowledge that can be expressed
using language.
>
Infinite proofs cannot be provided. Find a counter-example
where an element of the set of general knowledge that can
be expressed using language (GKEUL) would fool a True(X)
predicate into providing the wrong answer.
>
"This sentence is not true" cannot be derived by applying
truth preserving operations to basic facts thus is rejected
as not a member of (GKEUL).
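A minimal sketch of True(X) as such a membership algorithm, assuming toy placeholder facts and rules rather than the real body of general knowledge: compute the deductive closure of the basic facts, and report anything outside it, including the Liar sentence, as simply not a member.

# Toy sketch of True(X) as membership in the deductive closure of
# basic facts. All facts and rules are hypothetical placeholders.
BASIC_FACTS = {"all_men_are_mortal", "socrates_is_a_man"}
RULES = [({"all_men_are_mortal", "socrates_is_a_man"}, "socrates_is_mortal")]

def closure(facts, rules):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

KNOWN = closure(BASIC_FACTS, RULES)

def true(x: str) -> bool:
    # Membership test: never "confused" by a sentence, it just
    # reports whether x is derivable from the basic facts.
    return x in KNOWN

print(true("socrates_is_mortal"))          # True: derivable
print(true("this_sentence_is_not_true"))   # False: never derivable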
What does your True(X) say when X means that there is no
method to determine whether a sentence of first-order group
theory can be proven?
>
That is either in the body of knowledge or not.
When something like deep learning eventually
causes it to have a deeper understanding than
humans, it may prove that human understanding
of this is incorrect.
>
You just don't understand how "AI" works.
>
Current AI has ZERO understanding of what it is processing.
>
Work to try to make processing have understanding is running into the problem of complexity.
You are wrong again:
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
>
Doesn't say it understands what it is doing.
>
Note, "Arithmetic" is a purely symbolic operation, actually definable with a fairly small set of rules.
>
You are just again looking at summaries of ideas and thinking you know how they actually work.
>
It says that its abilities baffle its own designers.
So? That doesn't mean the machine understands what it does.
>
All you are doing is proving you don't understand the meaning of the words you use.
>
When people talk about AI "understanding" the meaning of
words within the insight of the Chinese Room thought
experiment, we are only referring to duplicating the
functional result of human understanding. ChatGPT
certainly does this.

Nope. Sorry, but you are just proving your natural stupidity.
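A toy version of the Chinese Room point being argued here (an illustrative example with a made-up rulebook): a lookup table can duplicate the functional result of understanding a question while understanding nothing.

# Chinese Room toy sketch: canned input->output rules produce the
# functional result of understanding with no understanding inside.
RULEBOOK = {
    "what color is the sky?": "blue",
    "what is 2 + 2?": "4",
}

def room(question: str) -> str:
    # The "operator" only matches symbols against the rulebook.
    return RULEBOOK.get(question.lower(), "I do not know.")

print(room("What is 2 + 2?"))  # answers as if it understood arithmetic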