John Sowa is close with RNT (Was: Traditions die: Another one bites the Dust)

Subject : John Sowa is close with RNT (Was: Traditions die: Another one bites the Dust)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups : sci.logic
Date : 10 Jan 2025, 22:42:36
Message-ID : <vls48b$2feqe$1@solani.org>
References : 1
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
Hi,
Interestingly, with RNT John Sowa comes close to
how ChatGPT works. In his LLM-bashing videos, John Sowa
repeatedly showed brain models in his slides that
come from Sydney Lamb:
Relational Network Theory (RNT), also known as
Neurocognitive Linguistics (NCL) and formerly as
Stratificational Linguistics or Cognitive-Stratificational
Linguistics, is a connectionist theoretical framework in
linguistics primarily developed by Sydney Lamb which
aims to integrate theoretical linguistics with neuroanatomy.
https://en.wikipedia.org/wiki/Relational_Network_Theory
You can ask ChatGPT, and it will tell you
what parallels it sees between LLMs and RNT.
Bye
P.S.: Here is what ChatGPT tells me about LLMs and RNT
as a summary; the full answer was much longer:
While there are shared aspects, particularly the
emphasis on relational dynamics, ChatGPT models are
not explicitly designed with RNT principles. Instead,
they indirectly align with RNT through their ability
to encode and use relationships learned from data.
However, GPT’s probabilistic approach and reliance
on large-scale data contrast with the more structured
and theory-driven nature of RNT.
https://chatgpt.com/share/67818fb7-a788-8013-9cfe-93b3972c8114
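
To make the contrast concrete, here is a tiny Python
sketch of my own (just an illustration, nothing taken
from Sowa or Lamb): an RNT-style network keeps its
relations as explicit labeled links, whereas an LLM
only has them implicitly, in learned vectors:

# Explicit relational network: hand-built, theory-driven structure.
relational_network = {
    ("cat", "is_a"): "animal",
    ("cat", "has"): "fur",
    ("dog", "is_a"): "animal",
}

def follow(node, relation):
    """Traverse one labeled link in the explicit network."""
    return relational_network.get((node, relation))

# Implicit "LLM-style" relations: made-up toy embedding vectors; any
# relatedness has to be read off from geometry, not from labeled links.
embeddings = {
    "cat":    [0.9, 0.1, 0.3],
    "dog":    [0.8, 0.2, 0.3],
    "animal": [0.7, 0.3, 0.4],
}

def similarity(a, b):
    """Dot product as a crude stand-in for learned relatedness."""
    return sum(x * y for x, y in zip(embeddings[a], embeddings[b]))

print(follow("cat", "is_a"))        # -> 'animal' (explicit, symbolic)
print(similarity("cat", "animal"))  # -> a number (implicit, sub-symbolic)
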
I would also put the answer into perspective once
one includes RAG. With Retrieval Augmented Generation
things look completely different again.
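
To show what I mean by RAG, a minimal toy sketch of
my own (generate() is only a hypothetical placeholder
for the actual model call): retrieve a few documents
relevant to the query and prepend them to the prompt
before generating:

documents = [
    "Relational Network Theory was developed by Sydney Lamb.",
    "Isabelle and Lean are interactive proof assistants.",
    "Transformers learn relationships from large-scale text data.",
]

def retrieve(query, docs, k=2):
    """Rank documents by naive word overlap and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt):
    # Hypothetical placeholder: a real system calls the language model here.
    return "[model answer conditioned on]\n" + prompt

query = "Who developed Relational Network Theory?"
context = "\n".join(retrieve(query, documents))
print(generate("Context:\n" + context + "\n\nQuestion: " + query))
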
Mild Shock wrote:
Hi,
 > Subject: An Isabelle Foundation?
 > Date: Fri, 10 Jan 2025 14:16:33 +0000
 > From: Lawrence Paulson via isabelle-dev
 > Some of us have been talking about how to keep
 > things going after the recent retirement of Tobias and
 > myself and the withdrawal of resources from Munich.
 > I've even heard a suggestion that Isabelle would not
 > be able to survive for much longer.
https://en.wikipedia.org/wiki/Isabelle_%28proof_assistant%29
 No more money for symbolic AI? LoL
 Maybe supplanted by Lean (proof assistant) and, my
speculation, maybe a re-orientation to keep up with
hybrid methods such as those found in ChatGPT.
https://en.wikipedia.org/wiki/Lean_%28proof_assistant%29
 Bye

Date       Subject (replies)                                                                     Author
10 Jan 25  * Traditions die: Another one bites the Dust (5)                                       Mild Shock
10 Jan 25  `* John Sowa is close with RNT (Was: Traditions die: Another one bites the Dust) (4)   Mild Shock
10 Jan 25   `* Or ChatGPT is better in Program Verification (Re: John Sowa is close with RNT) (3) Mild Shock
11 Jan 25    `* What is Argonne doing now? (Was: Or ChatGPT is better in Program Verification) (2) Mild Shock
11 Jan 25     `- Lemanicus at EPFL Computer Museum [compare to NVIDIA H100 NVL] (1)               Mild Shock
