Subject: Long life learning also for real world philosophers? (Re: The anchoring problem in a real world philosopher)
From: janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups: comp.lang.prolog
Date: 08. Aug 2024, 17:18:56
Other headers
Message-ID: <v92nl0$vc0g$2@solani.org>
References: 1 2 3 4
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.18.2
But I wouldn’t give up so quickly, even
classical expert system theory of the 80’s
held that an expert system needs a
knowledge acquisition component somewhere.
But the idea there was that the system
would simulate the expert’s dialog with
the advice taker

Von Datenbanken zu Expertsystemen
(From Databases to Expert Systems)
https://www.orellfuessli.ch/shop/home/artikeldetails/A1051258432

and gather further information to complete
the advice. Still, this could be inspiring:
don’t stop at not knowing the Curry-Howard
isomorphism, go on and learn it, never stop!
Just like here:
Never Gonna Give You Up
https://www.youtube.com/watch?v=dQw4w9WgXcQ

Mild Shock wrote:
Hi,
Let’s say one milestone in cognitive science
is the concept of "bounded rationality".
It seems LLMs have some traits that are also
found in humans. For example, the anchoring
effect is a psychological phenomenon in which
an individual’s judgements or decisions are
influenced by a reference point or “anchor”,
which can be completely irrelevant:

https://en.wikipedia.org/wiki/Anchoring_effect

Like for example when discussing the
Curry-Howard isomorphism with a real world
philosopher, one that might not know the
Curry-Howard isomorphism but nevertheless
be tempted to hallucinate some nonsense.
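For reference, the isomorphism itself is small
enough to state in a few lines of Prolog: a
typing relation for the simply typed lambda
calculus doubles as a proof checker, since under
formulas-as-types the arrow type is the
implication. A toy sketch (standard textbook
material, not from this thread):

```prolog
% Formulas-as-types in miniature: type(Ctx, Term, Type) both
% type-checks a lambda term and, read logically, checks a proof,
% because the arrow type (A->B) is the formula "A implies B".
type(Ctx, var(X), T)         :- member(X-T, Ctx).
type(Ctx, lam(X, B), (A->T)) :- type([X-A|Ctx], B, T).
type(Ctx, app(F, E), T)      :- type(Ctx, F, (A->T)), type(Ctx, E, A).

% ?- type([], lam(x, var(x)), T).
% The identity term inhabits T = (A->A), i.e. it proves A implies A.
```

Searching for a term of a given type is then
literally proof search, which is what makes the
correspondence interesting for Prolog folks.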
One highly cited paper in this respect is Tversky &
Kahneman 1974. R.I.P. Daniel Kahneman,
March 27, 2024. The paper is still cited today:
Artificial Intelligence and Cognitive Biases: A Viewpoint
https://www.cairn.info/revue-journal-of-innovation-economics-2024-2-page-223.htm

Maybe using deeper and/or more careful reasoning,
possibly backed by a Prolog engine, could have
a positive effect? It’s very difficult even for a
Prolog engine, since there is a trade-off
between producing no answer at all, if the software
agent is too careful, and producing a wealth
of nonsense otherwise.
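In SWI-Prolog that trade-off can even be made
explicit; a minimal sketch, assuming SWI’s
call_with_depth_limit/3 (the ask/2 wrapper and
the limit of 100 are invented for illustration):

```prolog
% A cautious software agent: prove the goal within a fixed depth
% budget, and answer `unknown` rather than diverge or guess.
ask(Goal, Answer) :-
    (   call_with_depth_limit(Goal, 100, Result),
        Result \== depth_exceeded
    ->  Answer = proved
    ;   Answer = unknown   % ran out of depth, or goal is unprovable
    ).
```

Raising the limit trades more answers against
more runtime, and with a dubious knowledge base,
against more nonsense.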
Bye
Mild Shock wrote:
>
> Well we all know about this rule:
>
> - Never ask a woman about her weight
>
> - Never ask a woman about her age
>
> There is a similar rule for philosophers:
>
> - Never ask a philosopher what is cognitive science
>
> - Never ask a philosopher what is formula-as-types
>
> Explanation: They like to be the champions of
> pure form, like in the paper below, so they
> don’t like other disciplines dealing with pure
> form, or even having pure form on the computer.
>
> “Pure” logic, ontology, and phenomenology
> David Woodruff Smith - Revue internationale de philosophie 2003/2
> https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm
>
> Mild Shock wrote:
There are more and more papers of this sort:
>
Reliable Reasoning Beyond Natural Language
To address this, we propose a neurosymbolic
approach that prompts LLMs to extract and encode
all relevant information from a problem statement as
logical code statements, and then use a logic programming
language (Prolog) to conduct the iterative computations of
explicit deductive reasoning.
[2407.11373] Reliable Reasoning Beyond Natural Language
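In miniature, that pipeline might look as follows;
the word problem, the facts and the predicate names
are invented here for illustration, standing in for
whatever the LLM actually extracts:

```prolog
% Hypothetical output of the extraction step for: "Alice is older
% than Bob, Bob is older than Carol. Who is the oldest?"
older_fact(alice, bob).
older_fact(bob, carol).

% Explicit deductive reasoning: transitive closure, then a query.
older(X, Y) :- older_fact(X, Y).
older(X, Z) :- older_fact(X, Y), older(Y, Z).
oldest(X)   :- older(X, _), \+ older(_, X).

% ?- oldest(Who).
% Who = alice.
```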
>
The future of Prolog is bright?
>
Mild Shock wrote:
>
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
>
>
LoL
>
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
>