Sujet : Re: LLM versus CYC (Re: The Emperor’s New Clothes [John Sowa])
De : janburse (at) *nospam* fastmail.fm (Mild Shock)
Groupes : comp.lang.prolog
Date : 05. Jan 2025, 21:45:52
Autres entêtes
Message-ID : <vler1s$28i58$5@solani.org>
References : 1 2 3 4 5
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.19
Notice that John Sowa calls the LLM the
“store” of GPT. This could be a misconception
that matches what Permion did for their
cognitive memory. But matters are a little
more complicated, to say the least, especially
since OpenAI insists that GPT itself is also
an LLM. What might clarify the situation is
Fig. 6 of this paper, which postulates two
Mixture of Experts (MoE) placements, one on
the attention mechanism and one on the
feed-forward layer:
A Survey on Mixture of Experts
[2407.06204] A Survey on Mixture of Experts
https://arxiv.org/abs/2407.06204

Disclaimer: Pity Marvin Minsky didn’t already
describe these things in his Society of Mind!
It would make them easier to understand now…
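To make the feed-forward variant concrete, here is a minimal numerical sketch of a top-1 routed MoE feed-forward layer, roughly in the spirit of the survey’s taxonomy. All dimensions, weight initializations, and the top-1 gating choice are illustrative assumptions, not the architecture of any particular GPT model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoEFeedForward:
    """Sketch of a top-1 Mixture-of-Experts feed-forward layer."""
    def __init__(self, d_model, d_hidden, n_experts):
        # Router and per-expert two-layer MLP weights (toy init)
        self.gate = rng.normal(0, 0.02, (d_model, n_experts))
        self.w1 = rng.normal(0, 0.02, (n_experts, d_model, d_hidden))
        self.w2 = rng.normal(0, 0.02, (n_experts, d_hidden, d_model))

    def __call__(self, x):
        # x: (tokens, d_model); router scores one gate value per expert
        probs = softmax(x @ self.gate)      # (tokens, n_experts)
        choice = probs.argmax(axis=-1)      # top-1 routing per token
        y = np.empty_like(x)
        for e in range(self.w1.shape[0]):
            idx = np.where(choice == e)[0]  # tokens routed to expert e
            if idx.size == 0:
                continue
            h = np.maximum(x[idx] @ self.w1[e], 0.0)  # ReLU expert MLP
            # Weight by the gate probability, as MoE training requires
            y[idx] = (h @ self.w2[e]) * probs[idx, e:e + 1]
        return y

moe = MoEFeedForward(d_model=8, d_hidden=16, n_experts=4)
x = rng.normal(size=(5, 8))
out = moe(x)
print(out.shape)  # (5, 8)
```

The attention-side MoE in the paper’s Fig. 6 would route among expert attention heads the same way; only the expert modules differ.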
Mild Shock schrieb:
Douglas Lenat died on August 31, 2023.
I don’t know whether CYC and Cycorp will
make a dent in the future. CYC addressed
the common knowledge bottleneck, and so
do LLMs. I am using CYC mainly as a
historical reference.
The “common knowledge bottleneck” in AI is
a challenge that plagued early AI systems.
This bottleneck stems from the difficulty
of encoding vast amounts of everyday,
implicit human knowledge: things we take
for granted but that computers historically
struggled to understand. Currently, LLMs by
design focus more on shallow knowledge,
whereas systems such as CYC might exhibit
deeper knowledge in certain domains, possibly
making them more suitable when stakeholders
expect more reliable analytic capabilities.
The problem is not explainability,
the problem is intelligence.