Prologers are still on the path of Don Quixote:
> extremely restrictive setting and the only reason
> it’s worked so well over the years is that people
> have persisted at flogging it like the deadest
> of dead horses
For some it's a dead horse; for others, in light of the
two Nobel Prizes, one for Geoffrey Hinton in Physics and
one for Demis Hassabis in Chemistry, both in 2024,
it's rather a wake-up call.
The current state of affairs in Prolog is that autoencoders
and transformers are not available via ILP. ILP lacks the
conceptual setting for them, because it is based on a model
of belief congruence that tries to avoid cognitive dissonance.
Basically ILP adopts abduction as already conceived by
Charles Sanders Peirce, who is also the originator of
Conceptual Graphs. The problem is posed for some
background knowledge B and some observation E, and the
idea is to find a hypothesis H such that:
Consistency:   B, H |/- false   /* no absurdity */
Completeness:  B, H |- E
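To make this concrete, here is a minimal Prolog sketch, assuming
B and H are given as lists of Head-Body pairs with Body a list of
atoms; prove/2, complete/3, consistent/2 and explains/3 are my own
hypothetical names, not taken from any ILP system:

% prove(KB, Goals): all Goals follow from the clauses in KB
prove(_, []).
prove(KB, [G|Gs]) :-
    member(Clause, KB),
    copy_term(Clause, G-Body),
    prove(KB, Body),
    prove(KB, Gs).

% Completeness: B together with H proves the observation E
complete(B, H, E) :-
    append(B, H, KB),
    prove(KB, [E]).

% Consistency: B together with H does not prove false
consistent(B, H) :-
    append(B, H, KB),
    \+ prove(KB, [false]).

% H explains E with respect to B
explains(B, H, E) :-
    complete(B, H, E),
    consistent(B, H).

For example, with B = [mortal(X)-[man(X)], false-[man(Y), immortal(Y)]]
and H = [man(socrates)-[]], the query explains(B, H, mortal(socrates))
succeeds.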
There is also a refinement with positive and negative
observations E+ and E-. The challenge I am positing is to
get some hands-on experience and see what the merits of
autoencoders and transformers are, and maybe to see whether
there is a possible marriage of autoencoders and transformers
with ILP. The difficulty here is that autoencoders and
transformers have no concept of absurdity. The main features
of extrapolation in autoencoders and transformers are:
- Inferencing:
The autoencoder might also tolerate deviations in
the input that are not in the training data, giving
it some inferential capability.
- Generation:
And it can then choose an output that is again not in the
training data, giving it some generative capability.
There is no measurement against absurdity in the
inferencing and no measurement against absurdity in
the generation. This is also seen in practice: when you
interact with ChatGPT, it can hallucinate unicorns, and it
can even make mistakes within the hallucination, like
believing that there are white chestnut unicorns.
So the following is possible:
There are unicorns
There are white chestnut unicorns
I see it as an opportunity that absurdity is possible in
autoencoders and transformers, for many reasons, and
especially from my interest in paraconsistent logics.
You cannot assume anyway that training data is
consistent. That there is no ex falso explosion in this
type of autoencoder and transformer machine learning
is rather a benefit than a curse, and it somehow gives a
neat solution to many problems where ILP might
fail by design because it is too strict.
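As a toy illustration, assuming the inconsistent training data is
written down as signed facts pos/1 and neg/1 (names of my own
choosing), a query-driven lookup only touches the facts relevant
to the query, so the contradiction never explodes into arbitrary
conclusions:

% inconsistent "training data": unicorn is asserted both ways
fact(pos(unicorn)).
fact(neg(unicorn)).
fact(pos(white_chestnut_unicorn)).

% query-driven lookup, no ex falso explosion:
% ?- holds(pos(dragon)) simply fails
holds(L) :- fact(L).

% the contradiction remains detectable on demand:
% ?- inconsistent(unicorn) succeeds
inconsistent(P) :- fact(pos(P)), fact(neg(P)).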
See also:
https://de.wikipedia.org/wiki/Geoffrey_Hinton
https://de.wikipedia.org/wiki/Demis_Hassabis
https://en.wikipedia.org/wiki/Abductive_reasoning#Abduction

Mild Shock wrote:
> Very simple challenge conceptually, develop the idea
> of Centipawn towards TicTacToe and implement the
> game based on learning / training a transformer, and
> then executing it. All written in Prolog itself! Optional
> bonus exercise, make the execution ИИUƎ style, i.e.
> incremental evaluation of the transformer.
>
> Centipawn - Chess Wiki
> https://chess.fandom.com/wiki/Centipawn
>
> NNUE - Chess Programming Wiki
> https://www.chessprogramming.org/NNUE
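Not the asked-for transformer, but as a starting point, here is a
minimal sketch of the Centipawn idea carried over to TicTacToe as a
hand-written static evaluation, assuming the board is a list of nine
cells x, o or e (empty) in row-major order and score/2 is my own
hypothetical predicate:

% the eight winning lines, as lists of board indices (1..9)
line([1,2,3]). line([4,5,6]). line([7,8,9]).
line([1,4,7]). line([2,5,8]). line([3,6,9]).
line([1,5,9]). line([3,5,7]).

cell(Board, I, C) :- nth1(I, Board, C).   % nth1/3 from library(lists)

opponent(x, o).
opponent(o, x).

% a line is still open for Player if the opponent occupies none of its cells
open_line(Board, Player, Line) :-
    line(Line),
    opponent(Player, Opp),
    \+ ( member(I, Line), cell(Board, I, Opp) ).

% centipawn-style score from x's point of view:
% 100 "centi" units per difference in open lines
score(Board, Score) :-
    findall(L, open_line(Board, x, L), Lx),
    findall(L, open_line(Board, o, L), Lo),
    length(Lx, Nx),
    length(Lo, No),
    Score is 100 * (Nx - No).

For example score([x,e,e, e,o,e, e,e,e], S) gives S = -100 with this
crude heuristic, reflecting that the centre square dominates the
corner. An NNUE-style execution would then update such a score
incrementally per move instead of recomputing it from scratch.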