Subject: Re: AI is Dehumanizing Technology
From: ram (at) *nospam* zedat.fu-berlin.de (Stefan Ram)
Newsgroups: comp.misc
Date: 01. Jun 2025, 21:08:49
Organization: Stefan Ram
Message-ID : <networks-20250601205900@ram.dialup.fu-berlin.de>
References: 1
Ben Collver <bencollver@tilde.pink> wrote or quoted:
> For example, to create an LLM such as ChatGPT, you'd start with an
> enormous quantity of text, then do a lot of computationally intense
> statistical analysis to map out which words and phrases are most
> likely to appear near one another. Crunch the numbers long enough,
> and you end up with something similar to the next-word prediction
> tool in your phone's text messaging app, except that this tool can
> generate whole paragraphs of mostly plausible-sounding word salad.
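To make the quoted "next-word prediction" picture concrete, here is
a minimal sketch in Python - a bigram model that counts which word
follows which and then samples continuations from those counts; the
toy corpus and the seed word are made up purely for illustration:

  # Count, for each word, the words seen right after it, then
  # generate text by sampling from those observed followers.
  import random
  from collections import defaultdict

  corpus = "the cat sat on the mat and the dog sat on the rug".split()

  following = defaultdict(list)
  for a, b in zip(corpus, corpus[1:]):
      following[a].append(b)

  def generate(seed, length=8):
      words = [seed]
      for _ in range(length):
          nexts = following.get(words[-1])
          if not nexts:
              break
          words.append(random.choice(nexts))  # frequency-weighted pick
      return " ".join(words)

  print(generate("the"))  # e.g. "the cat sat on the rug"

Scaled up, that is roughly the phone-keyboard predictor the quote
describes; what an LLM does between the counting and the sampling is
a different matter.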
I see explanations like that from time to time, but it's really just
a watered-down way of explaining LLMs to kids, and it won't carry a
serious argument: the way those networks are layered, words turn
into concepts, relations, and statements that aren't tied to any one
way of saying things, and those then get turned back into language
that clearly isn't just word salad. Sure, statistics matter -
whether a drug helps 90 or 10 percent of people is a big deal - and
knowing the statistically common sentence patterns is exactly what
keeps the output from turning into word salad; picking up such
statistics is part of learning a language in the first place.
The quoted text is from someone trying to make AI criticism
look bad by pretending to be an unqualified critic who just
tosses around stuff that's obviously off base.
If you know your stuff and can actually break down AI or LLMs and get
what's risky about them, speak up, because we need people like you.