Re: Let's re-iterate software engineering first! (Was: A software engineering analysis why Prolog fails)

Subject: Re: Let's re-iterate software engineering first! (Was: A software engineering analysis why Prolog fails)
From: janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups: comp.lang.prolog
Date: 27 Mar 2025, 11:43:58
Message-ID: <vs3a5d$eecp$2@solani.org>
References: 1 2 3
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
But even with such a directive there are many
challenges, from which ProbLog also suffers. Consider
this transformer pipeline, which contains two
components of type g:
      +----+   +----+   +----+
      |    |-->| g  |-->|    |
      |    |   +----+   |    |
 x -->| f  |            | h  |--> y
      |    |   +----+   |    |
      |    |-->| g  |-->|    |
      +----+   +----+   +----+
With common subexpressions, i.e. computing
f only once, I can write the forward pass
as follows:
p, q = f(x)
y = h(g(p), g(q))
But the above doesn’t show the learnt parameters.
Will the two g components be siamese neural networks,
learning one set of parameters, or will they learn
two sets of parameters? See also:
Siamese neural network
https://en.wikipedia.org/wiki/Siamese_neural_network
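To make the question concrete, here is a minimal sketch
in PyTorch (module sizes and names are my own and purely
illustrative, not taken from ProbLog or any other system):
with siamese=True the two g applications share one
parameter set, with siamese=False each branch trains
its own copy.

import torch
import torch.nn as nn

class Pipeline(nn.Module):
    # f splits the input into two branches, g transforms
    # each branch, h merges them again, as in the diagram
    def __init__(self, siamese=True):
        super().__init__()
        self.f = nn.Linear(8, 8)    # toy stand-in for f
        self.g1 = nn.Linear(4, 4)   # first g component
        # siamese: reuse the same module (one parameter set),
        # otherwise an independent copy (two parameter sets)
        self.g2 = self.g1 if siamese else nn.Linear(4, 4)
        self.h = nn.Linear(8, 1)    # toy stand-in for h

    def forward(self, x):
        # common subexpression: f is computed only once
        p, q = self.f(x).split(4, dim=-1)
        return self.h(torch.cat([self.g1(p), self.g2(q)], dim=-1))

y = Pipeline(siamese=True)(torch.randn(8))

In ProbLog terms the analogous question would be whether
two occurrences of the same annotated component refer to
one learnable parameter set or to two.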
If I am not mistaken, in ProbLog one can use
variables in probability annotations. An example
of such a variable is seen here:
% intensional probabilistic fact with flexible probability:
P::pack(Item) :- weight(Item,Weight),  P is 1.0/Weight.
But one might need some mechanism either to make
two components siamese or to keep them separate,
depending on what the default modus operandi of the
probabilistic logic programming language is.
Mild Shock wrote:
I have retracted those posts that had Python-first
in them, not sure whether my analysis of some projects
was watertight. I only made the Python example to
illustrate the idea of a variation point. I do not
think programming language trench wars are a good
idea, and one should put software engineering first,
as an abstract computer science discipline. Not doing
so is only a distraction from the real issues at hand.
Variation points were defined quite vaguely
on purpose:
 > Ivar Jacobson defines a variation point as follows:
 > A variation point identifies one or more locations at
 > which the variation will occur.
Variation points can come in many shades. For
example, ProbLog-based approaches take the viewpoint
of a Prolog text with a lot of configuration flags
and predicate annotations. This is quite different
from the autoencoder or transformer component
approach I suggested here. In particular, a
component-oriented approach could be more flexible
and dynamic, when it allows programmatic
configuration of components. The drawback is that
you cannot understand what the program does by
looking at a simply structured Prolog text. Although
I expect the situation is not that bad, and one
could do something similar to a table/1 directive,
i.e. some directive that says: look, this predicate
is an autoencoder or transformer:
  > One idea I had was that autoencoders would become
 > kind of invisible, and work under the hood to compress
 > Prolog facts. Take these facts:
 >
 > % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
 > data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
 So to instruct the Prolog system to do what is sketched,
one would possibly need a new directive autoencoder/1:
 :- autoencoder data/3.
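Just to illustrate what such a directive might do under
the hood, here is a hypothetical sketch in PyTorch (the
latent size 3 is arbitrary, and the digit rows are the
usual seven-segment patterns in a-g order, not taken
from the original facts): the 7-bit vectors of the
data/3 facts are squeezed through a small encoder/decoder
pair, and the system would keep only the learnt codes.

import torch
import torch.nn as nn
import torch.nn.functional as F

# bit vectors of the data/3 facts; only blank, 0 and 1 are
# shown, the remaining digits would follow the same scheme
segments = torch.tensor([
    [0,0,0,0,0,0,0],   # standard blank "_"
    [1,1,1,1,1,1,0],   # 0 (assumed a-g segment order)
    [0,1,1,0,0,0,0],   # 1
], dtype=torch.float32)

encoder = nn.Sequential(nn.Linear(7, 3), nn.Sigmoid())  # 7 bits -> 3-dim code
decoder = nn.Sequential(nn.Linear(3, 7), nn.Sigmoid())  # code -> 7 bits
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()), lr=0.1)

for _ in range(500):   # fit the autoencoder to the stored facts
    opt.zero_grad()
    loss = F.binary_cross_entropy(decoder(encoder(segments)), segments)
    loss.backward()
    opt.step()

# a Prolog system honouring the directive could store
# encoder(segments) as the compressed fact base and
# reconstruct individual rows via the decoder on demand
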
Mild Shock wrote:
Hi,
>
A software engineering analysis why Prolog fails
================================================
>
You would also get more done, if Prolog had some
well-designed plug-and-play machine learning libraries.
Currently most SWI-Prolog packages are just GitHub dumps:
>
(Python) Problem ---> import solver ---> Solution
>
(SWI) Problem ---> install pack ---> Problem
>
Python shows more success in the practitioners’
domain, since it has more libraries that have stood
the test of time in practical use. Whereas Prolog is
still in its infancy in many domains, you don’t
arrive at the same level of convenience and breadth
as Python, if you are only offered fire-and-forget
dumps from some PhD projects where software
engineering is secondary.
>
I don’t know exactly why Prolog has so many problems
with software engineering. Python has object orientation,
but Logtalk didn’t make the situation better. SWI-Prolog
has modules, but they are never used. For example, this
here is a big monolith:
>
This module performs learning over Logic Programs
https://github.com/friguzzi/liftcover/blob/main/prolog/liftcover.pl
>
It’s more designed towards providing some command
line control. But if you look into it, it has EM
algorithms, a gradient algorithm, and who knows what.
These building blocks are not exposed, not made for
reuse or for improvement by switching in 3rd party
alternatives. Most likely a design flaw inside the
pack mechanism itself, since it assumes a single
main module?
>
So the pack mechanism works, if a unit pack imports a
clp(BNR) pack, since it uses the single entry of clp(BNR).
But it is never on par with the richness of Python packages,
which have more of a hierarchical structure with many
modules in their packs.
>
Mild Shock wrote:
>
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
>
The paper contains not a single reference to autoencoders!
Still they show this example:
>
Fig. 1: ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
>
I guess ILP is 30 years behind the AI boom. An early
autoencoder turned into a transformer was already
reported here (*):
>
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
>
Well, ILP might have its merits; maybe we should not
ask for a marriage of LLM and Prolog, but of
Autoencoders and ILP. But it’s tricky, I am still
trying to decode the da Vinci code of things like
stacked tensors: are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
>
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
>
>
 

