Subject : Re: AI is Dehumanizing Technology
From : bencollver (at) *nospam* tilde.pink (Ben Collver)
Groups : comp.misc
Date : 02. Jun 2025, 14:59:39
Organization : A noiseless patient Spider
Message-ID : <101kaoa$3b0c7$1@dont-email.me>
References : 1 2
User-Agent : slrn/1.0.3 (Linux)
On 2025-06-01, Stefan Ram <ram@zedat.fu-berlin.de> wrote:
> Ben Collver <bencollver@tilde.pink> wrote or quoted:
>> For example, to create an LLM such as
>> ChatGPT, you'd start with an enormous quantity of text, then do a lot
>> of computationally-intense statistical analysis to map out which
>> words and phrases are most likely to appear near to one another.
>> Crunch the numbers long enough, and you end up with something similar
>> to the next-word prediction tool in your phone's text messaging app,
>> except that this tool can generate whole paragraphs of mostly
>> plausible-sounding word salad.
>
> If you know your stuff and can actually break down AI or LLMs and get
> what's risky about them, speak up, because we need people like you.
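
To make the quoted description concrete, here is a toy sketch of the
idea at phone-keyboard scale: count which words follow which, then
chain weighted samples to generate "plausible-sounding word salad."
The names (train_bigrams, babble, the tiny corpus) are mine, purely
illustrative; a real LLM replaces the counting with vastly more
computation, but the flavor is the same.

import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Sample a next word in proportion to the observed counts."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    candidates, weights = zip(*followers.items())
    return random.choices(candidates, weights=weights)[0]

def babble(counts, start, length=10):
    """Chain predictions to produce plausible-sounding word salad."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigrams(corpus)
print(babble(model, "the"))

Run it and you get grammatical-looking fragments that mean nothing,
which is the point of the analogy.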
I remember reading about the dangers of GMO crops. At the time, a
common modification was to make corn and soy Roundup Ready. The
official research said that Roundup was safe for human consumption.
I read a story that some farmers found it cheaper to douse surplus
Roundup on wheat after the harvest rather than buy the usual
desiccants. This was not the intended use, nor was it the level of
human exposure reported in the studies. However, it is consistent
with the values that produced Roundup: profit being more valuable
than health or safety.
Unintended consequences are bound to come out sideways. Did we need
more expertise in GMOs? No, we needed a different approach.