Re: Human data for AI exhausted

Subject: Re: Human data for AI exhausted
From: mds (at) *nospam* bogus.nodomain.nowhere (Mike Spencer)
Newsgroups: misc.news.internet.discuss
Date: 15 Jan 2025, 00:10:50
Organization: Bridgewater Institute for Advanced Study - Blacksmith Shop
Message-ID: <87wmext2it.fsf@enoch.nodomain.nowhere>
References: 1 2 3
User-Agent: Gnus v5.7/Emacs 20.7
Jukka Lahtinen <jtfjdehf@hotmail.com.invalid> writes:

> JAB <here@is.invalid> writes:
>
>> On 12 Jan 2025 04:02:42 GMT, Retrograde <fungus@amongus.com.invalid>
>> wrote:
>>
>>> Artificial intelligence companies have run out of data for training their
>>> models
>>
>> AI can be useful in specific applications, but I would not call it
>> "intelligence"
>
> Quite often it looks more like artificial stupidity.

Because that is just what it is.

As long as AI has been a thing, Artificial Stupidity has been a
repeated jape.

Trouble is, what we're now calling AI *is* artificial stupidity.  It's
sort of in the line of "idiot savant" because the neural net
constructs do have a remarkable ability to detect or compare
patterns.

But now that they can construct convincing, grammatically correct
language, the fact that they effectively exhibit the Dunning-Kruger
effect is lost in the impact of well-written English.

In the same metaphorical way that a corporation, if seen or treated as
a person, is legally mandated to be a psychopath, current AIs --
"generative large language models" in the jargon of the trade -- are
designed to construct apparently knowledgeable assertions by detecting
patterns in a vast corpus of text and to present them with confidence.
Of course corporations don't have neurally generated personalities to
suffer from "antisocial personality disorder" [1].  Nor do GLLMs have
a body of knowledge, expertise or wisdom from which their assertions
emerge.  Neither do they have the internal *belief* that they *do*
have a superior "body of knowledge, expertise or wisdom", the belief
that defines the Dunning-Kruger effect.  But their excellent grammar
and extensive vocabulary readily lead the credulous to infer that
nonexistent "knowledge, expertise or wisdom". [2]
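
(To see the claim in miniature: here is a toy sketch in Python -- a
bigram Markov chain, not a neural net, and the tiny corpus is invented
for illustration -- but the trick is the same in kind: locally fluent
word sequences produced by pattern-matching alone, with nothing behind
them that could be called knowledge.)

    import random
    from collections import defaultdict

    # A toy "training" corpus; real GLLMs digest terabytes of text.
    corpus = ("the model detects patterns in text and "
              "the model presents assertions with confidence and "
              "the assertions emerge from patterns in text").split()

    # Record which words follow which -- the only "knowledge" acquired.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    # Generate: pick each next word from the observed continuations,
    # always "confidently", never checking against the world.
    word, out = "the", ["the"]
    for _ in range(12):
        word = random.choice(follows[word])
        out.append(word)
    print(" ".join(out))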

These GLLMs appear to be a sort of automated Delphi process.  The
problem is that, in nearly every respect, they violate or fail to meet
the criteria a Delphi process needs in order to operate correctly, and
they exhibit the failure modes that a mismanaged Delphi process
encounters.

I'm not going to try to write an analysis of Delphi methods here for
comparison.  For further background, you can look at:

    https://en.wikipedia.org/wiki/Delphi_method
    https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds
    https://en.wikipedia.org/wiki/Large_language_model

People can be intellectually and/or emotionally entrained by language
that is delivered with confidence while being largely or entirely
nonsensical.  Well-written language presented authoritatively is, in a
sense, automatically convincing.  I'm always encouraged to attribute a
little extra credibility to text that exhibits the errors well known
to result from hasty keyboard editing; so far, GLLMs don't make those.
I surmise that they soon will.


[1] The clinical term for psychopathy or sociopathy according to DSM-IV.
    I assume that's unchanged in DSM-5.

[2] See also: Bobby Azarian,
    https://www.rawstory.com/raw-investigates/stupidity-threat/
--
Mike Spencer                  Nova Scotia, Canada
