RETRO Project Sebastian Borgeaud et al. - 7 Feb 2022 (Was: Prologers are hurt the most by LLMs)

Subject : RETRO Project Sebastian Borgeaud et al. - 7 Feb 2022 (Was: Prologers are hurt the most by LLMs)
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Groups : comp.lang.prolog
Date : 10. Jan 2025, 11:12:48
Message-ID : <vlqrqv$2enuf$2@solani.org>
References : 1
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
Hi,
I posted this already on sci.math, sci.logic and
sci.physics. It's probably the most important addition
to current LLMs, i.e. Retrieval-Augmented Generation (RAG).
But somehow the morons of MSE don't understand a bit
of what's going on in the world around them. They are
quite immune to progress in AI. Like stupid cows.
------------------ cut here --------------------
For more details on RAG, see the RETRO Project (*), discussed at t=12:01 in:
What's wrong with LLMs and what we should be building instead
Tom Dietterich - 10.07.2023
https://youtu.be/cEyHsMzbZBs
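
For a concrete picture of what retrieval augmentation means, here is a
minimal sketch of the retrieve-then-generate loop in Python. The embed()
function, the toy document list and the prompt format are hypothetical
stand-ins for illustration only; this is not the RETRO architecture itself,
which feeds retrieved neighbour chunks into the model via cross-attention.

import numpy as np

# Hypothetical stand-in: any real sentence embedder would replace this
# hashing-based toy embedding.
def embed(text):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

# Toy document store standing in for a large retrieval database.
documents = [
    "RETRO retrieves nearest-neighbour chunks from a large token database.",
    "Retrieval-Augmented Generation prepends retrieved passages to the prompt.",
    "Stack Exchange answers are expected to cite their sources.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query, k=2):
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1)
                                * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-scores)[:k]]

def answer(query):
    # Ground the prompt in the retrieved documents; a real system would
    # now send this prompt to an LLM instead of returning it.
    context = "\n".join("- " + d for d in retrieve(query))
    return ("Answer using only these sources:\n" + context
            + "\n\nQuestion: " + query)

print(answer("How does RAG ground an LLM answer?"))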
So it's not a very new technique, and it is now
appearing in generative AIs on the market as well.
Some chat bots are now even able to show more clearly
which source documents were used in their answer. The
MSE end user can still edit a citation by hand to
conform more closely to the SEN format, if that were
the issue. The MSE end user can also now explicitly
ask a chat bot for sources, which he will get most of
the time. Or he can give a chat bot a source for review
and discussion; this works as well. So there is no longer
this "remoteness" of an LLM from the actual virtual
world of documents. It is more that they now inhabit
that virtual world and interact with it.
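
As an illustration of what showing the used source documents could look
like, here is a hypothetical helper that appends the retrieved sources to
an answer as a numbered reference list. The format is only an assumption,
not the SEN format or any particular chat bot's actual output.

def answer_with_citations(answer_text, sources):
    # Hypothetical formatting helper; the numbered style is an assumption,
    # not any site's required citation format.
    refs = "\n".join("[%d] %s" % (i + 1, src)
                     for i, src in enumerate(sources))
    return answer_text + "\n\nSources:\n" + refs

print(answer_with_citations(
    "RAG grounds the generated answer in retrieved documents.",
    ["https://arxiv.org/abs/2112.04426", "https://youtu.be/cEyHsMzbZBs"]))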
Another issue I see is that in certain countries and
educational institutions, it might be the case that
working with a chat bot is something that the students
learn, yet they are not officially allowed to use it on
MSE, because MSE policies are based on outdated
views about generative AI.
See also:
(*) RETRO Project:
Improving language models by retrieving from trillions of tokens
Sebastian Borgeaud et al. - 7 Feb 2022
https://arxiv.org/abs/2112.04426
------------------ cut here --------------------
Bye
Mild Shock wrote:
Hi,
Prologers with their pipe dream of Ontologies
with Axioms are hurt the most by LLMs, which work
more on the basis of Fuzzy Logic.
 Even good old "hardmath" is not immune to
this coping mechanism:
 "I've cast one of my rare votes-to-delete. It is
a self-answer to the OP's off-topic "question".
Rather than improve the original post, the effort
has been made to "promote" some so-called RETRO
Project by linking YouTube and arxiv.org URLs.
Not worth retaining IMHO."
-- hardmath
 https://math.meta.stackexchange.com/a/38051/1482376
 Bye

Date       Subject  (#)  Author
10 Jan 25  * Prologers are hurt the most by LLMs  (6)  Mild Shock
10 Jan 25  `* RETRO Project Sebastian Borgeaud et al. - 7 Feb 2022 (Was: Prologers are hurt the most by LLMs)  (5)  Mild Shock
10 Jan 25   `* SE policy on use of generative AI ignores Retrieval-Augmented Generation (Was: RETRO Project Sebastian Borgeaud et al. - 7 Feb 2022)  (4)  Mild Shock
10 Jan 25    `* Vectors are the new JSON (Was: SE policy on use of generative AI ignores Retrieval-Augmented Generation)  (3)  Mild Shock
10 Jan 25     `* XAI is over and out (Was: Vectors are the new JSON)  (2)  Mild Shock
10 Jan 25      `- Academia is retarded (Re: XAI is over and out)  (1)  Mild Shock
