Ignorance in ILP circles confirmed (Was: Auto-Encoders as Prolog Fact Stores)

Subject : Ignorance in ILP circles confirmed (Was: Auto-Encoders as Prolog Fact Stores)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups : comp.lang.prolog
Date : 23. Feb 2025, 18:33:51
Message-ID : <vpfm5u$ld6s$1@solani.org>
References : 1 2
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
Hi,
Somebody wrote:
 > It’s a self-supervised form of ILP.
 > No autoencoders anywhere at all.
And this only proves my point that ILP does not
solve the problem of making autoencoders and transformers
available directly in Prolog, which was the issue I posted
at the top of this thread.
My point is precisely that I would therefore not look to
ILP for Prolog autoencoders and transformers, because
ILP is most likely unaware of the concept of latent space.
A latent space has quite a few advantages:
- *Dimensionality Reduction:* It captures the essential
   structure of high-dimensional data in a more
   compact form.
- *Synthetic Data:* Instead of modifying raw data, you can
   use the latent space to generate variations for
   further learning.
- *Domain Adaptation:* A well-structured latent space can
   help transfer knowledge from abundant domains to
   underrepresented ones.
If you do not mention autoencoders and transformers at
all, you are probably also unaware of the above advantages
and of their other properties.
In ILP the concept of latent space is most likely dormant
or blurred, since the stance is: we invent predicates,
hence relations. There is no attempt to break
relations down further:
https://www.v7labs.com/blog/autoencoders-guide
Basically, autoencoders and transformers, by imposing a
hidden layer, structure a relation further into an
encoder and a decoder. So a relation is seen as a join,
where H is a deliberate bottleneck:
relation(X, Y) :- encoder(X, H), decoder(H, Y).
The values of H range over the latent space, which is
invented during the learning process; it is not simply
the input or output space.
This design has some very interesting repercussions.
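
To make the decomposition concrete, here is a minimal
hand-coded sketch. In a real autoencoder the encoder/2 and
decoder/2 clauses and the latent codes h1, h2 would be
learned, not written by hand; the atoms a1, a2, b1, b2 are
purely illustrative:

% Many inputs map onto few latent codes, and each code
% decodes back to one canonical representative.
encoder(a1, h1).
encoder(a2, h1).
encoder(b1, h2).
encoder(b2, h2).

decoder(h1, a1).
decoder(h2, b1).

relation(X, Y) :- encoder(X, H), decoder(H, Y).

% ?- relation(a2, Y).
% Y = a1, because a2 is squeezed through the bottleneck code h1.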
Bye
Mild Shock wrote:
Hi,
 One idea I had was that autoencoders would
become kind of invisible, and work under the hood
to compress Prolog facts. Take these facts:
 % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
% alternatives 9, 7, 6, 1
data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).
https://en.wikipedia.org/wiki/Seven-segment_display
 Or more visually, 9 7 6 1 have variants trained:

:- show.
[seven-segment renderings of _, 0-9 followed by the 9, 7, 6, 1 variants]
 The autoencoder would create a latent space, an
encoder, and a decoder. We could then query
?- data(seg7, X, Y) with X as input and Y as output;
the 9, 7, 6, 1 variants would be corrected:
:- random2.
[seven-segment renderings: the variant inputs are decoded back to the standard 9, 7, 6, 1 glyphs]
 The autoencoder might also tolerate errors in the
input that are not in the data, giving it some inferential
capability, and it might then choose an output that is again
not in the data, giving it some generative capability.
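
 A hand-rolled stand-in for that behaviour (not the learned
model; standard/1, hamming/3 and closest_standard/2 are
made-up names for this sketch) would snap a noisy input to
the nearest of the canonical patterns above:

% Canonical seg7 patterns for _, 0..9 (the Y columns above).
standard([0,0,0,0,0,0,0]).
standard([1,1,1,1,1,1,0]).
standard([0,1,1,0,0,0,0]).
standard([1,1,0,1,1,0,1]).
standard([1,1,1,1,0,0,1]).
standard([0,1,1,0,0,1,1]).
standard([1,0,1,1,0,1,1]).
standard([1,0,1,1,1,1,1]).
standard([1,1,1,0,0,0,0]).
standard([1,1,1,1,1,1,1]).
standard([1,1,1,1,0,1,1]).

% Hamming distance between two bit lists of equal length.
hamming([], [], 0).
hamming([A|As], [B|Bs], D) :-
    hamming(As, Bs, D0),
    ( A =:= B -> D = D0 ; D is D0 + 1 ).

% Map a (possibly noisy) input to the closest standard pattern.
closest_standard(X, Y) :-
    findall(D-P, (standard(P), hamming(X, P, D)), Pairs),
    keysort(Pairs, [_-Y|_]).

% ?- closest_standard([1,1,1,0,0,1,0], Y).
% Y = [1,1,1,0,0,0,0]   % the 7 variant is corrected to a standard 7.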
 Bye
 See also:
 What is Latent Space in Deep Learning?
https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/
 Mild Shock wrote:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556

The paper contains not a single reference to autoencoders!
Still they show this example:

Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.

I guess ILP is 30 years behind the AI boom. An early
autoencoder, which later turned into the transformer, was
already reported here (*):

SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

Well, ILP might have its merits; maybe we should not ask
for a marriage of LLM and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?

(*) The paper I referenced is found in this excellent video:

The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
 
