Subject: Re: Human data for AI exhausted
From: fungus (at) *nospam* amongus.com.invalid (Retrograde)
Newsgroups: misc.news.internet.discuss
Date: 15 Jan 2025, 03:27:03
Message-ID : <67871cf6$9$16$882e4bbb@reader.netnews.com>
References : 1 2 3 4
User-Agent : slrn/1.0.3 (Linux)
On 2025-01-14, Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
> In the same metaphorical way that a corporation, if seen or treated as
> a person, is legally mandated to be a psychopath, current AIs --
> "generative large language models" in the jargon of the trade -- are
> designed to construct apparently knowledgeable assertions by
> detecting patterns in a vast corpus of text, and to present them with
> confidence. Of course corporations don't have neurally generated
> personalities to suffer from "antisocial personality disorder" [1].
> Nor do GLLMs have a body of knowledge, expertise or wisdom from which
> their assertions emerge. Neither do they have the internal *belief*
> that they *do* have a superior "body of knowledge, expertise or
> wisdom" that defines the Dunning-Kruger effect. But their excellent
> grammar and extensive vocabulary readily lead the credulous to
> infer that nonexistent "knowledge, expertise or wisdom". [2]
Very, very well said, thank you.
I've been thinking recently that the term 'artificial INTELLIGENCE' is a
marketing trick, similar to somehow convincing the market that CLOUD
services are anything other than renting someone else's server.

There's no intelligence in them, but by using (coopting? employing?) the
term, it implies the presence of something that is not there.
The ruse may not have been willful, but it has been effective.
If they had called this new technology "advanced pattern matching
repetition" we would not be throwing gobs of money at it.
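To make the "pattern matching repetition" point concrete: here's a toy
bigram Markov chain that emits fluent-looking text purely by replaying
word-pair statistics from its corpus, with no knowledge or belief behind
any of it. (The corpus and seed are made up for the example; real models
are vastly larger, but the principle of the caricature is the same.)

```python
import random

# Made-up miniature "training corpus" for illustration only.
corpus = ("the model detects patterns in text and "
          "the model presents the patterns with confidence")
words = corpus.split()

# Build the pattern table: map each word to the list of words
# observed to follow it in the corpus.
table = {}
for prev, nxt in zip(words, words[1:]):
    table.setdefault(prev, []).append(nxt)

# "Generate" by repeatedly sampling a follower of the last word.
random.seed(0)  # fixed seed so the toy run is repeatable
out = ["the"]
for _ in range(8):
    followers = table.get(out[-1])
    if not followers:      # dead end: no observed continuation
        break
    out.append(random.choice(followers))

print(" ".join(out))
```

Every word it emits is grammatical-looking simply because every
transition was seen in the corpus; nothing in the program knows what
any of the words mean.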
I look forward to the whole industry cratering, and the likes of Sam
Altman being run off with burning torches and pitchforks.