Neuro infused logic programming [NILP] (Was: Auto-Encoders as Prolog Fact Stores)

Subject : Neuro infused logic programming [NILP] (Was: Auto-Encoders as Prolog Fact Stores)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups : comp.lang.prolog
Date : 19 Mar 2025, 20:58:47
Message-ID : <vrf7ll$4qs4$1@solani.org>
References : 1 2
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
Hi,
I first wanted to use a working title:
"new frontiers in logic programming"
But upon reflection, and because of fElon,
here is another idea for a working title:
"neuro infused logic programming" (NILP)
What could it mean? Or does it have some
alternative phrasing already?
Try this paper:
Compositional Neural Logic Programming
Son N. Tran - 2021
The combination of connectionist models for low-level
information processing and logic programs for high-level
decision making can offer improvements in inference
efficiency and prediction performance.
https://www.ijcai.org/proceedings/2021/421
Browsing through the bibliography I find:
[Cohen et al., 2017] Tensorlog: Deep learning meets probabilistic
[Donadello et al., 2017] Logic tensor networks
[Larochelle and Murray, 2011] The neural autoregressive distribution estimator
[Manhaeve et al., 2018] Neural probabilistic logic programming
[Mirza and Osindero, 2014] Conditional generative adversarial nets
[Odena et al., 2017] Auxiliary classifier GANs
[Pierrot et al., 2019] Compositional neural programs
[Reed and de Freitas, 2016] Neural programmer-interpreters
[Riveret et al., 2020] Neuro-symbolic probabilistic argumentation machines
[Serafini and d’Avila Garcez, 2016] Logic tensor networks
[Socher et al., 2013] Neural tensor networks
[Towell and Shavlik, 1994] Knowledge-based artificial neural networks
[Tran and d’Avila Garcez, 2018] Deep logic networks
[Wang et al., 2019] Compositional neural information fusion
Mild Shock wrote:
Hi,
One idea I had was that autoencoders would
become kind of invisible and work under the hood
to compress Prolog facts. Take these facts:
 % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
% alternatives 9, 7, 6, 1
data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).
https://en.wikipedia.org/wiki/Seven-segment_display
Or more visually, 9 7 6 1 have trained variants:

:- show.
[seven-segment renderings of blank, 0-9, and the 9, 7, 6, 1 variants]
The autoencoder would create a latent space, an
encoder and a decoder. And we could basically query
?- data(seg7, X, Y) with X as input and Y as output.
The 9 7 6 1 variants were corrected:

:- random2.
[seven-segment renderings of blank and 0-9, followed by the 9 7 6 1 variants mapped back to the canonical digits]
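
A minimal sketch of what the query side could look like, assuming
hypothetical encode/2 and decode/2 predicates that wrap the trained
encoder and decoder (the name data_ae/3 is just made up here):

% data_ae/3 answers queries through the autoencoder instead of
% matching the stored facts directly; encode/2 and decode/2 are
% assumed wrappers around the trained network
data_ae(seg7, X, Y) :-
   encode(X, Z),   % map the (possibly noisy) segment pattern X
                   % to its latent code Z
   decode(Z, Y).   % reconstruct the canonical pattern Y from Z

A query like ?- data_ae(seg7, [1,1,1,0,0,1,1], Y) would then be free
to answer Y = [1,1,1,1,0,1,1], i.e. the canonical 9, just like the
alternative fact above.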
The autoencoder might also tolerate errors in the input that
are not in the data, giving it some inferential capability.
And it might then choose an output that is again not in the
data, giving it some generative capability.
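
To make the error tolerance concrete in plain Prolog, a crude
symbolic stand-in is to return the stored output whose input pattern
has the smallest Hamming distance; the autoencoder would do this
implicitly in latent space instead of enumerating the facts. The
predicates hamming/3 and nearest/3 below are only illustrative:

% Hamming distance between two equal-length 0/1 lists
hamming([], [], 0).
hamming([A|As], [B|Bs], D) :-
   hamming(As, Bs, D0),
   ( A =:= B -> D = D0 ; D is D0+1 ).

% nearest(+Type, +X, -Y): output of the fact whose input is closest to X
nearest(Type, X, Y) :-
   findall(D-Y0, ( data(Type, X0, Y0), hamming(X, X0, D) ), Pairs),
   keysort(Pairs, [_-Y|_]).

So even a noisy pattern that appears nowhere in the facts still maps
to the output of the closest stored digit.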
 Bye
 See also:
 What is Latent Space in Deep Learning?
https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/
Mild Shock wrote:

Inductive logic programming at 30
https://arxiv.org/abs/2102.10556

The paper contains not a single reference to autoencoders!
Still they show this example:

Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.

I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):

SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

Well, ILP might have its merits; maybe we should not ask for a
marriage of LLM and Prolog, but of autoencoders and ILP. But it's
tricky: I am still trying to decode the da Vinci code of things
like stacked tensors. Are they related to k-literal clauses?

(*) The paper I referenced is featured in this excellent video:

The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
 
