Subject : Re: ANN: Dogelog Player 1.2.6 (Segmented Fileaccess)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups : comp.lang.prolog
Date : 13. Feb 2025, 19:11:39
Message-ID : <volckq$6r8g$1@solani.org>
References : 1 2 3 4 5
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
An autoencoder learns two functions: an encoding
function that transforms the input data, and a
decoding function that recreates the input data
from the encoded representation. We approach
autoencoders via our previously developed SAT Learning
in the Prolog programming language.
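As a minimal illustration of the two functions (a hedged toy sketch with made-up predicate names, not the SAT learner itself), an autoencoder over one-hot inputs can be written as a pair of Prolog relations:

```prolog
% Toy example: encode/2 compresses a 4-element one-hot input
% to a 2-bit code, decode/2 recreates the input from the code
% by running the same relation backwards.

encode([1,0,0,0], [0,0]).
encode([0,1,0,0], [0,1]).
encode([0,0,1,0], [1,0]).
encode([0,0,0,1], [1,1]).

decode(Code, Input) :- encode(Input, Code).

% ?- encode([0,0,1,0], C), decode(C, X).
%    C = [1,0], X = [0,0,1,0]   (round trip recreates the input)
```

In a real autoencoder the two functions are learned rather than tabled, but the round-trip contract decode(encode(X)) = X is the same.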
Switching from a marginal maximizer to a conditional
maximizer gives better results but also requires a
more costly and slower optimizer. Maximum entropy
methods were already suggested by Peter Cheeseman in
1987. Our approach is most likely still flawed, since there
is not yet a feedback loop from the decoder to the encoder.
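The cost gap between the two maximizers can be seen in what each one has to count (a hedged sketch, not the Dogelog code): a marginal objective only needs per-bit frequencies, while a conditional/joint objective needs counts over whole code words, of which there are exponentially many.

```prolog
% Toy marginal objective: score a list of code words by the
% entropy of each bit position, using only bit frequencies.

entropy(P, 0.0) :- (P =:= 0.0 ; P =:= 1.0), !.
entropy(P, H) :-
    Q is 1.0 - P,
    H is -(P*log(P) + Q*log(Q))/log(2.0).

marginal(Codes, Bit, P) :-
    length(Codes, N),
    findall(x, (member(C, Codes), nth0(Bit, C, 1)), Ones),
    length(Ones, K),
    P is K / N.

% ?- marginal([[0,0],[1,1]], 0, P), entropy(P, H).
%    P = 0.5, H = 1.0
```

Note the blind spot: in the query each bit alone looks maximally random (1 bit of entropy), yet the joint code [[0,0],[1,1]] carries only 1 bit in total, a correlation that only a conditional maximizer can see.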
Maximum Entropy in SAT Autoencoding
https://x.com/dogelogch/status/1890093860782764409

Maximum Entropy in SAT Autoencoding
https://www.facebook.com/groups/dogelog

Mild Shock wrote:
Dogelog Player is a Prolog system for JavaScript,
Python and Java. It is 100% written in Prolog itself.
We present an enhancement to DCG translation. It uses
unification spilling to reduce the number of needed
unify (=)/2 calls and intermediate variables.
Unification spilling can be readily implemented by
performing unification (=)/2 during DCG translation.
Careful spilling without breaking steadfastness gave
us a 10%-25% speed increase not only for the calculator
example but also for the Albufeira transpiler.
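The idea can be illustrated on a tiny rule (a hedged illustration of the general technique, not Dogelog's actual translator output): a naive DCG translation threads the difference lists through explicit (=)/2 goals, which spilling performs once at translation time instead.

```prolog
% Naive translation of   ab --> [a], [b].
% One (=)/2 goal and one intermediate variable per terminal:
ab(S0, S) :- S0 = [a|S1], S1 = [b|S2], S2 = S.

% With unification spilling the unifications are done during
% translation, folding the terminals into the clause head:
ab2([a,b|S], S).

% ?- ab([a,b,c], R).
%    R = [c]          (ab2/2 gives the same answer)
```

Spilling into the head must be done carefully, since moving a unification before a cut or before a goal that can raise an error is what would break steadfastness.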
See also:
DCG Translation with Unification Spilling
https://x.com/dogelogch/status/1889270444647182542
DCG Translation with Unification Spilling
https://www.facebook.com/groups/dogelog