On 12/12/2024 12:32 AM, Muttley@DastardlyHQ.org wrote:
> On Wed, 11 Dec 2024 09:41:14 -0800
> Ross Finlayson <ross.a.finlayson@gmail.com> wibbled:
>> It's like the slightly-affected scientist who
>> and friends have been running "AI" since for example
>> the '80's, at least since Moravec said "you know,
>> we've had human-level AI for a while, ....".
>
> Either he was referring to humans or he was smoking some VERY
> expensive weed.
>
>> It's moreso that various un-truths about the actual
>> implementations of AI offerings are designed to
>> reduce liability with regards to things like
>> "well it doesn't really think, and thus be
>> responsible for its actions, the dictates".
>
> They don't think, or at least ChatGPT doesn't. It seems very clever
> at first and there's no doubt it's a quantum leap in AI, but the
> more you use it the more you realise how inconsistent it is, and the
> apparent lack of understanding of some basic concepts.

I think it's moreso that the hints you give it are what it figures
to give back to you, vis-a-vis making sure to command that it give
both the pros and the cons, as it were.
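
For instance, here's a minimal sketch of that kind of standing
command, in C++ since this is cl c++; the Message record and the
send_chat() stand-in are hypothetical, for illustration only, not
any real service's API:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical message record for a chat-style model; real APIs
// each have their own role-plus-content shape.
struct Message {
    std::string role;     // "system", "user", or "assistant"
    std::string content;
};

int main() {
    std::vector<Message> turn = {
        // The standing command: argue both sides, rather than
        // mirroring whatever slant the question hints at.
        { "system",
          "For every claim, give both the pros and the cons, and "
          "state any assumption you are making." },
        { "user",
          "Is garbage collection a good fit for systems programming?" }
    };
    // std::string reply = send_chat(turn);  // hypothetical call
    for (const auto& m : turn)
        std::cout << m.role << ": " << m.content << "\n";
    return 0;
}

The point is only the shape of the system line: if you don't command
both sides, the hints in the question decide which side you get.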

There were on-line psychiatrists as far back as the '60s, and bots
have been fooling simple humans for decades; furthermore, "agents
and actors in an ecosystem including large-language models" are a
lot more than dumb, and the usual online chat-bots these days can be
directed to demonstrate very thorough and correct reasoning.

Mostly it's that when you give it mistaken inputs, it just freely
piles more wrong ideas on top of them, "imagining" as it were,
"playing along", or, you know, "GIGO": garbage in, garbage out.

Mostly the advertising of these systems as not culpable for any of
their outputs is just to avoid liability, because otherwise they'd
have to be as responsible for their actions and statements as any
individual. Note the massive hegemony of "Google": you'll notice
that, just like search, ChatGPT and Gemini have the same back-end.

So, whatever's not in its context it feels free to "imagine". You
need to know how to give it a chance to state its assumptions,
instead of letting it imagine commonly held, unstated ones, and then
to verify as you go along that the assumptions, the stipulations,
the axioms as they may be, are in the memory of the session.

I.e., it's a very fragmented hive-mind, and each session is a blank
slate.
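
Concretely, a sketch of what that blank slate means for the caller,
again with a hypothetical Message record and send_chat() stand-in
rather than any real service's API; the assumption is a stateless
chat interface where the request itself is the only memory:

#include <string>
#include <vector>

struct Message {
    std::string role;     // "system", "user", or "assistant"
    std::string content;
};

// Hypothetical stand-in for whatever call a real client library
// exposes; the property sketched here is the statelessness itself.
std::string send_chat(const std::vector<Message>& history) {
    return "(model reply would go here)";
}

int main() {
    // The stipulations, the axioms: if they are not in this vector
    // on a given call, the model is free to "imagine" its own.
    std::vector<Message> history = {
        { "system", "Axiom: all inputs are untrusted. Stipulation: "
                    "state which axiom each conclusion rests on." }
    };

    // Each turn: append the user message, call, append the reply.
    // A new 'history' is a new blank slate; nothing carries over.
    history.push_back({ "user", "Summarize the thread so far." });
    std::string reply = send_chat(history);
    history.push_back({ "assistant", reply });
    return 0;
}

Drop an element from 'history', or start a fresh vector, and that
stipulation is simply gone from the model's world; that's the
fragmentation.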

Assuming that a massive online information-system couldn't emulate
most human reasoning would not necessarily be a rational expectation.