Subject: A software engineering analysis of why Prolog fails (Was: Prolog totally missed the AI Boom)
From: janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups: comp.lang.prolog
Date: 25 Mar 2025, 12:22:53
Message-ID: <vru3mc$c4jo$1@solani.org>
References: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
Hi,
A software engineering analysis of why Prolog fails
===================================================
You would also get more done if Prolog had some
well-designed, plug-and-play machine learning libraries.
Currently most SWI-Prolog packs are just GitHub dumps:

(Python) Problem ---> import solver ---> Solution
(SWI)    Problem ---> install pack  ---> Problem
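
To make the contrast concrete, here is a minimal sketch of
what plug-and-play could look like on the SWI side. The pack
ml_tools and the predicate kmeans/3 are invented for
illustration; only pack_install/1 and use_module/1 are real
SWI-Prolog built-ins:

    % Hypothetical session; ml_tools and kmeans/3 do not exist,
    % they stand in for a well-engineered library.
    ?- pack_install(ml_tools).         % fetch once, like pip install
    ?- use_module(library(ml_tools)).  % import, like Python's import
    ?- kmeans([[1,1],[1,2],[9,8],[9,9]], 2, Cs).
    Cs = [[[1,1],[1,2]], [[9,8],[9,9]]].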
Python shows more success in the practitioner's domain,
since it has more libraries that have stood the test of
time in practical use, whereas Prolog is still in its
infancy in many domains. You don't arrive at the same
level of convenience and breadth as Python if all that is
on offer are fire-and-forget dumps from PhD projects where
software engineering is secondary.
I don't know exactly why Prolog has so many problems
with software engineering. Python has object orientation,
but Logtalk didn't make the situation better. SWI-Prolog
has modules, but they are never used. For example, this
here is one big monolith:
This module performs learning over Logic Programs
https://github.com/friguzzi/liftcover/blob/main/prolog/liftcover.pl

It is designed more towards providing some command-line
control. But if you look into it, it has EM algorithms,
gradient algorithms, and who knows what else. These
building blocks are not exposed, not designed for reuse
or for improvement by swapping in third-party alternatives.
Most likely this is a design flaw in the pack mechanism
itself, since it assumes a single main module?
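
Just as illustration, the building blocks could be exposed
as their own module. All names and arities below are my
assumptions about a possible refactoring, not the actual
liftcover code:

    % Sketch of a reusable EM interface. em_init/2, em_step/3
    % and em_converged/2 are hypothetical; their bodies would
    % hold the actual statistics. Only the driver is spelled out.
    :- module(lift_em, [em_init/2, em_step/3, em_fixpoint/3]).

    % em_fixpoint(+Examples, +Params0, -Params):
    % iterate em_step/3 until the parameters stop changing.
    em_fixpoint(Examples, Params0, Params) :-
        em_step(Examples, Params0, Params1),
        (   em_converged(Params0, Params1)
        ->  Params = Params1
        ;   em_fixpoint(Examples, Params1, Params)
        ).

A third party could then swap in its own em_step/3, say a
gradient variant, without touching the rest of the pack.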
So the pack mechanism works if a unit pack imports a
clp(BNR) pack, since it uses the single entry point of
clp(BNR). But it is never on par with the richness of
Python packages, which have more of a hierarchical
structure of many modules within their packages.
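
Although, as far as I can tell, nothing stops a pack from
being hierarchical: the pack's prolog/ directory is put on
the library search path, so subdirectories should be
loadable as library(Dir/File). A hypothetical layout (all
names invented):

    mypack/
        pack.pl                 % pack metadata
        prolog/
            mypack/
                em.pl           % :- module(mypack_em, [...]).
                gradient.pl     % :- module(mypack_gradient, [...]).
                cli.pl          % thin command-line front end

    % in user code:
    ?- use_module(library(mypack/em)).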
Mild Shock wrote:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg