Subject : Last Exit Analogical Reasoning (Was: Prolog totally missed the AI Boom)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Groups : comp.lang.prolog
Date : 07. Mar 2025, 18:16:25
Message-ID : <vqf9l6$16euf$1@solani.org>
References : 1
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
The problem I am trying to address was
already addressed here:
ILP and Reasoning by Analogy
Intuitively, the idea is to use what is already
known to explain new observations that appear similar
to old knowledge. In a sense, it is opposite of induction,
where to explain the observations one comes up with
new hypotheses/theories.
Vesna Poprcova et al. - 2010
https://www.researchgate.net/publication/220141214
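One crude way to picture that idea, as a sketch only
(the feature vectors and names below are invented, nothing
is taken from the paper): explain a new observation by
reusing the explanation of the most similar old case,
instead of inducing a fresh hypothesis.

import numpy as np

# Hypothetical "old knowledge": observations with known explanations.
old_cases = {
    "sparrow": (np.array([1.0, 0.9, 0.1]), "flies because it is a bird"),
    "trout":   (np.array([0.0, 0.1, 1.0]), "swims because it is a fish"),
}

def explain_by_analogy(new_obs):
    # Reuse the explanation of the most similar known case.
    name, (_, explanation) = min(
        old_cases.items(),
        key=lambda kv: np.linalg.norm(kv[1][0] - new_obs),
    )
    return "like " + name + ": " + explanation

# A new observation that is close to "sparrow" in feature space.
print(explain_by_analogy(np.array([0.9, 0.8, 0.2])))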
The problem is that ILP doesn't try to learn and
apply analogies, whereas autoencoders and transformers
typically try to "Grok" analogies, so that with less
training they can perform well in certain domains.
They will do some inferencing on the part of the
encoder also for unseen input data. And they will do
some generation on the part of the decoder also for
unseen latent space configurations from unseen input data.
By unseen data I mean data not in the training set.
The full context window may tune the inferencing and
generation, which appeals to:
Analogy as a Search Procedure
Rumelhart and Abrahamson showed that when presented
with analogy problems like monkey:pig::gorilla:X, with
rabbit, tiger, cow, and elephant as alternatives for X,
subjects rank the four options following the
parallelogram rule.
Matías Osta-Vélez - 2022
https://www.researchgate.net/publication/363700634
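The parallelogram rule is one line of vector algebra:
for A:B::C:X, pick the candidate X closest to C + (B - A).
A minimal sketch with made-up 2-D coordinates (only the
ranking procedure matters, the numbers are invented):

import numpy as np

# Invented 2-D "animal space" coordinates, only for illustration.
animals = {
    "monkey":   np.array([2.0, 3.0]),
    "pig":      np.array([4.0, 1.0]),
    "gorilla":  np.array([3.0, 4.0]),
    "rabbit":   np.array([1.0, 1.0]),
    "tiger":    np.array([5.0, 4.0]),
    "cow":      np.array([5.0, 2.0]),
    "elephant": np.array([6.0, 3.0]),
}

# monkey:pig :: gorilla:X  ->  ideal point is gorilla + (pig - monkey).
ideal = animals["gorilla"] + (animals["pig"] - animals["monkey"])

candidates = ["rabbit", "tiger", "cow", "elephant"]
ranking = sorted(candidates,
                 key=lambda a: np.linalg.norm(animals[a] - ideal))
print("ideal point:", ideal)   # [5. 2.]
print("ranking:", ranking)     # cow first, it sits exactly at [5, 2]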
There are learning methods that work similarly to
ILP, in that they are based on positive and negative
samples. And the statistics can involve bilinear forms,
similar to what is seen in the "Attention Is All You
Need" paper.
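To make the bilinear-form remark concrete: the attention
score between a query x and a key y can be read as x^T W y,
with W the product of the query and key projections (up to
scaling). The same shape can be fitted from labelled
positive/negative pairs with a logistic loss. A toy sketch,
not any particular system; the data and the hidden relation
are made up:

import numpy as np

rng = np.random.default_rng(1)
d = 4
W_true = rng.normal(size=(d, d))   # hidden relation, made up

# Labelled pairs, in the ILP spirit of positive/negative examples,
# but the "rule" behind them is a bilinear form x^T W_true y > 0.
pairs = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(200)]
labels = [float(x @ W_true @ y > 0) for x, y in pairs]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a bilinear scorer score(x, y) = x^T W y with logistic updates.
W = np.zeros((d, d))
lr = 0.05
for _ in range(300):
    for (x, y), t in zip(pairs, labels):
        p = sigmoid(x @ W @ y)
        W += lr * (t - p) * np.outer(x, y)   # log-likelihood gradient

acc = np.mean([(x @ W @ y > 0) == (t > 0.5)
               for (x, y), t in zip(pairs, labels)])
print("training accuracy:", acc)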
But I don't yet have a good implementation of this
envisioned marriage of autoencoders and ILP, and
I am still researching the topic.
Mild Shock wrote:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
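That observational noise is exactly the regime where a
denoising autoencoder is comfortable: it is trained to map
the noisy observation back to the clean picture, so the
noise is explained away rather than reproduced. A toy
numpy sketch, with random made-up pictures instead of the
actual figure:

import numpy as np

rng = np.random.default_rng(0)

# Toy version of the Fig. 1 situation: clean binary pictures,
# each observed with a few flipped pixels of observational noise.
def corrupt(x, flips=3):
    y = x.copy()
    idx = rng.choice(x.size, size=flips, replace=False)
    y[idx] = 1.0 - y[idx]
    return y

clean = (rng.random((20, 64)) > 0.7).astype(float)  # 20 made-up 8x8 pictures
noisy = np.stack([corrupt(x) for x in clean])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer denoising autoencoder: noisy in, clean out.
H, lr = 16, 0.1
W1 = rng.normal(0, 0.1, (64, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 64)); b2 = np.zeros(64)

for _ in range(3000):
    h = sigmoid(noisy @ W1 + b1)      # encoder
    out = sigmoid(h @ W2 + b2)        # decoder
    d_out = out - clean               # cross-entropy gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(clean); b2 -= lr * d_out.mean(0)
    W1 -= lr * noisy.T @ d_h / len(clean); b1 -= lr * d_h.mean(0)

# A freshly corrupted copy of a training picture is mapped back
# toward its clean version: the noise gets explained away rather
# than reproduced exactly.
test = corrupt(clean[0])
recon = sigmoid(sigmoid(test @ W1 + b1) @ W2 + b2) > 0.5
print("pixels matching the clean picture:",
      (recon == clean[0].astype(bool)).mean())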
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLM and Prolog, but of autoencoders and ILP.
But it's tricky, I am still trying to decode the da Vinci code
of things like stacked tensors: are they related to k-literal
clauses?
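Purely to illustrate the question, one ad-hoc encoding
(my own guess, not an established correspondence): clause
bodies with at most k literals can be packed into a
stacked tensor of shape (clauses, k, vocabulary), one
one-hot slice per literal.

import numpy as np

# Ad-hoc illustration: encode k-literal clause bodies as a stacked tensor.
vocab = ["parent", "male", "female", "sibling", "<pad>"]
k = 3   # at most k body literals per clause

clauses = [                      # bodies only, by predicate symbol
    ["parent", "male"],          # father(X,Y) :- parent(X,Y), male(X).
    ["parent", "parent"],        # grandparent(X,Z) :- parent(X,Y), parent(Y,Z).
    ["sibling", "female"],       # sister(X,Y) :- sibling(X,Y), female(X).
]

# Shape: (number of clauses, k literal slots, vocabulary size).
T = np.zeros((len(clauses), k, len(vocab)))
for i, body in enumerate(clauses):
    padded = body + ["<pad>"] * (k - len(body))
    for j, pred in enumerate(padded):
        T[i, j, vocab.index(pred)] = 1.0

print(T.shape)   # (3, 3, 5): clauses stacked along the first axis
print(T[0])      # first clause: one-hot rows for parent, male, <pad>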
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg