On 2/13/25 2:10 AM, rbowman wrote:
> On Wed, 12 Feb 2025 22:58:30 -0500, WokieSux282@ud0s4.net wrote:
>
>> Neural networks can likely do "someone in there" even better,
>> eventually. At the moment LLMs get most of the funding so NNs are a
>> bit behind the curve. New/better hardware and paradigms are needed
>> but WILL eventually arrive.
>
> Nope. Volition, will to live, drive, goals are completely missing. The
> best trick to find out if you're talking with an AI is to write
> nothing. A human will write "hello" after a few seconds. The AI will
> just sit there waiting for input.
>
> So far there is nobody in there for CNNs. You know all the pieces and
> they don't magically start breathing when you put them together. It is
> true the whole system is a bit of a black box but it is describable.
Well, I agree about "CNNs" :-)

As for LLMs ... dunno. Get enough stuff going there and
something very hard, maybe impossible, to distinguish
from "someone in there" may be realized. Then what do
we do - ruthlessly pull the plug?
> The problem I see is already starting -- turning them into weapons and
> letting them run autonomously. One of the 'hello world' applications is
> training a NN on a huge number of labeled photos of cats and dogs and
> the models perform very well.
NNs - kinda modeling real-life neurons - will eventually
result in "someone in there" ... maybe more recognizable
than anything the LLMs produce.

As for weapons - that's well in progress now, with China
ahead of the game according to various reports. Fully
autonomous weapons are game-changers. Just telling 'em to
"ID Enemy. KILL Enemy" is about all it'd take. In theory
such devices could be extremely fast, strong, and accurate.
Remember the Hunter-Killer drones from "Terminator" -
that sort of thing (likely a bit smaller), and they would
NOT miss shots.
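
By the way, that cats-and-dogs "hello world" really is only a
screenful of code. Roughly this kind of thing in PyTorch - just a
sketch, with a made-up ./cats_dogs folder standing in for the big
pile of labeled photos:

  # Tiny cats-vs-dogs classifier - illustrative only.
  # Assumes (hypothetically) labeled photos in ./cats_dogs/cat/ and ./cats_dogs/dog/.
  import torch
  import torch.nn as nn
  from torch.utils.data import DataLoader
  from torchvision import datasets, transforms

  # Two conv/pool blocks, then a linear head over the 2 classes.
  model = nn.Sequential(
      nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
      nn.Flatten(),
      nn.Linear(32 * 16 * 16, 2),
  )

  # Shrink every photo to 64x64 and turn it into a tensor.
  tfm = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
  loader = DataLoader(datasets.ImageFolder("./cats_dogs", transform=tfm),
                      batch_size=32, shuffle=True)

  opt = torch.optim.Adam(model.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()

  # One pass over the photos: predict, compare to the label, nudge the weights.
  for images, labels in loader:
      opt.zero_grad()
      loss = loss_fn(model(images), labels)
      loss.backward()
      opt.step()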
> The metrics are sort of a truth table, with false negatives, false
> positives, and correct identification. It's a stochastic process so
> you're looking at 'good enough', maybe 97%. Say I hate dogs, set up a
> camera in the yard, and shoot all the dogs. A few dogs are going to
> slide and I'll kill a few cats.
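
Putting rough numbers on that 97% - made-up counts, and assuming the
error rate runs the same in both directions:

  # Back-of-envelope "truth table": 1000 animals through the yard,
  # half cats and half dogs, classifier right ~97% of the time.
  cats, dogs = 500, 500
  accuracy = 0.97

  dogs_flagged = dogs * accuracy         # true positives:  485
  dogs_missed  = dogs * (1 - accuracy)   # false negatives:  15 dogs "slide"
  cats_shot    = cats * (1 - accuracy)   # false positives:  15 cats killed
  cats_spared  = cats * accuracy         # true negatives:  485

  print(f"dogs missed: {dogs_missed:.0f}, cats shot: {cats_shot:.0f}")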
Oh well ... a few friendly-fire casualties are expected ...
> Now hand this to the military. The AI decides it sees a terrorist and a
> Reaper puts a Hellfire missile up his ass. You get a few school kids,
> but that's life.
Yep. Some may freak about that, but that's how it goes.
It's doubly true for people like Hamas who kinda literally
stacked up babies as sandbags.
> The Israelis may already be doing something like that or maybe they
> just randomly kill people, who knows?
>
> Give AI enhanced facial recognition to the cops -- won't that be fun.
> Enter 'Minority Report'.
Oh, there ARE very very dark possibilities .....

Coming soon to a street near you.

As for 'Minority', they ARE training AIs to "identify
emotional states" from various cues. In theory the bots
will spot your malicious intent, perhaps before even you
realize you were feeling malicious. "The Computer Said So"
is all the justification The State needs ...

The "a few mistakes are OK" logic WILL be applied.