Re: The proper way to use LLMs to aid primary research into foundations

Subject : Re: The proper way to use LLMs to aid primary research into foundations
From : polcott333 (at) *nospam* gmail.com (olcott)
Newsgroups : sci.logic sci.math comp.theory comp.ai.philosophy
Date : 10. Mar 2026, 18:41:41
Organisation : A noiseless patient Spider
Message-ID : <10opl4n$emqs$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9 10 11 12 13
User-Agent : Mozilla Thunderbird
On 3/10/2026 10:38 AM, Ross Finlayson wrote:
On 03/10/2026 06:45 AM, olcott wrote:
On 3/10/2026 7:03 AM, Tristan Wibberley wrote:
On 09/03/2026 12:42, olcott wrote:
On 3/9/2026 4:45 AM, Tristan Wibberley wrote:
On 09/03/2026 07:46, Mikko wrote:
On 08/03/2026 15:12, olcott wrote:
>
It has proven to be very useful.
>
Very useful tools can be very harmful if used carelessly. For example,
a knife is useful when you cut bread or wood but harmful if you happen
to cut your hand. Likewise, an AI that can answer questions, although
sometimes incorrectly, is useful if you can filter out the incorrect
answers but may be harmful if you fail to filter out one incorrect
answer.
>
>
One should expect to fail in the long term. An LLM that is naively
administered so as to seem more acceptable as time elapses will,
by that criterion, eventually fool you.
>
>
This is impossible when one only accepts answers
that are grounded in key quotes of foundational
peer-reviewed papers in the field. When one does
this, these quotes can be cited as the basis,
ignoring everything else that the LLM said.
>
That's naively true, but it is typically interpreted roughly the same as:
"This is impossible when one only accepts answers that are grounded in
key quotes of foundational peer-reviewed papers in the field, and one is
not fooled with respect to what those quotes are at the time one makes
one's judgement."
>
The former judgement might not be possible at any time.
>
>
It has been obvious to me for decades that the body of knowledge
expressed in language can be fully expressed as relations
between finite strings: a knowledge ontology in the form of an
acyclic directed graph of semantic tautologies.
Now, because of LLMs, I have the conventional terms of the art
to explain all of the details of this within the various
aspects of proof-theoretic semantics.
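The finite-string relation graph described above can be sketched roughly as follows. This is a minimal illustration under assumed relation names (IS_A is my placeholder vocabulary, not the author's actual inventory), showing only the two properties the text names: relations between strings, and acyclicity.

```python
# Sketch: knowledge as a directed acyclic graph whose vertices are
# finite strings and whose labeled edges are semantic relations.
# Relation names here are illustrative assumptions.

class KnowledgeDAG:
    def __init__(self):
        self.edges = {}  # term -> list of (relation, term)

    def add_relation(self, subject, relation, obj):
        """Add subject -relation-> obj, rejecting edges that would create a cycle."""
        if self._reachable(obj, subject):
            raise ValueError(f"cycle: {obj!r} already reaches {subject!r}")
        self.edges.setdefault(subject, []).append((relation, obj))

    def _reachable(self, start, target):
        """Depth-first search: can target be reached from start?"""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(o for _, o in self.edges.get(node, []))
        return False

kb = KnowledgeDAG()
kb.add_relation("dog", "IS_A", "mammal")
kb.add_relation("mammal", "IS_A", "animal")
print(kb._reachable("dog", "animal"))  # True: entailment via transitive edges
```

Adding "animal IS_A dog" afterwards would raise ValueError, which is what keeps the graph acyclic.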
>
 That sort of approach after the "Berkeley school" of
attempting to eliminate either all constants or all
variables from the model of the theory, while making
My system does nothing like this. Understanding my perspective
requires understanding various alternative formal foundations of
semantics in linguistics. Few people well versed in the philosophical
underpinnings of the foundations of mathematics have much experience
with this, and very few of those have any deep understanding of
alternative philosophical foundations.

for a quick sort of arithmetization, then for computing,
has sort of eliminated itself from being "the body of
the body of knowledge", since you get "material implication"
there, so it's broken.
 
William T. Parry's entailment logic (analytic implication) redefines
¬ ∧ ∨ → so that the conventional paradoxes do not arise.
For example, disjunction introduction, A → (A ∨ B), is rejected
there because B can introduce content not contained in A.
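Parry's restriction can be illustrated as a variable-containment check (his "proscriptive principle"): an implication is admissible only when the consequent introduces no propositional variables absent from the antecedent. A minimal sketch, under the assumption that variables are single uppercase letters; this is an illustrative filter, not a full Parry calculus.

```python
# Sketch of Parry's proscriptive principle: A -> B is admissible only
# if every propositional variable of B already occurs in A.
# Variables are assumed to be single uppercase letters.

def variables(formula):
    """Collect the uppercase-letter propositional variables in a formula string."""
    return {c for c in formula if c.isupper()}

def parry_admissible(antecedent, consequent):
    """True iff the consequent introduces no variable missing from the antecedent."""
    return variables(consequent) <= variables(antecedent)

# Classical disjunction introduction A -> (A ∨ B) smuggles in new content B:
print(parry_admissible("A", "A ∨ B"))  # False
# Whereas A ∧ B -> A stays within the antecedent's content:
print(parry_admissible("A ∧ B", "A"))  # True
```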

It's fair, and even expedient or convenient, to make tableau
calculi for the logic, but if it's not a _modal_ logic and a
_relevance_ logic, then it's _quasi-modal_ at best, and calling
that complete is false, or wrong.
 
My system takes relevance logic to its maximum extreme,
fully mapping every nuance of every sense of every natural
or formal language term to the complete set of relations
that exhaustively defines the semantic meaning of that term.
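The sense-to-relations mapping described above can be sketched as a two-level lexicon: each surface term maps to its distinct senses, and each sense carries the relation set that defines it. The lexicon content below is a toy assumption of mine, not the author's actual ontology.

```python
# Sketch: each term maps to senses, each sense to the set of
# relations defining it. Contents are illustrative placeholders.

LEXICON = {
    "bank": {
        "bank#1": {("IS_A", "financial_institution"),
                   ("HAS_PART", "vault")},
        "bank#2": {("IS_A", "sloping_ground"),
                   ("ADJACENT_TO", "river")},
    },
}

def senses(term):
    """All sense identifiers recorded for a term (empty if unknown)."""
    return sorted(LEXICON.get(term, {}))

def meaning(sense):
    """The complete relation set that defines one sense."""
    term = sense.split("#")[0]
    return LEXICON[term][sense]

print(senses("bank"))                                 # ['bank#1', 'bank#2']
print(("ADJACENT_TO", "river") in meaning("bank#2"))  # True
```

The point of the two levels is that relevance is enforced per sense: nothing about "bank#1" (the institution) is entailed by a use of "bank#2" (the riverbank).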

The key concepts of "monotonicity" and "entailment",
in what you have there as "see rule 1 + last wins"
or "proof by contradiction", are not "constructivist",
either. I.e., monotonicity and entailment are
violated by quasi-modal irrelevance logic, which
makes for _abuse_ of language.
It does make a great lie machine where that's stupid,
though, including claims of never being wrong.
It's still wrong though, or, "that ain't right".
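For contrast, classical (material) entailment is monotonic: adding an extra premise, however irrelevant, never breaks an entailment. That is exactly the behavior relevance logics object to. A brute-force sketch over truth valuations; the atom and function names are my illustrative assumptions.

```python
from itertools import product

# Brute-force classical entailment: premises entail the conclusion iff
# every valuation satisfying all premises also satisfies the conclusion.
# Formulas are modeled as functions of a valuation dict.

def entails(premises, conclusion, atoms):
    """True iff no valuation makes all premises true and the conclusion false."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(prem(v) for prem in premises) and not conclusion(v):
            return False
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
r = lambda v: v["r"]
impl = lambda v: (not v["p"]) or v["q"]  # material p -> q

# Modus ponens: {p, p -> q} entails q
print(entails([p, impl], q, ["p", "q", "r"]))     # True
# Monotonicity: the irrelevant extra premise r cannot break it
print(entails([p, impl, r], q, ["p", "q", "r"]))  # True
```

A relevance logic would additionally demand that the premises actually be used, which this classical check makes no attempt to track.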
 
--
Copyright 2026 Olcott

My 28 year goal has been to make
"true on the basis of meaning expressed in language"
reliably computable for the entire body of knowledge.

This required establishing a new foundation

