On 12/15/24 08:37, Paul S Person wrote:
On Sat, 14 Dec 2024 23:03:56 +0000, Robert Carnegie
<rja.carnegie@gmail.com> wrote:
On 12/12/2024 16:24, Paul S Person wrote:
On Wed, 11 Dec 2024 08:51:11 -0800, Dimensional Traveler
<dtravel@sonic.net> wrote:
On 12/10/2024 11:49 PM, The Horny Goat wrote:
On Mon, 9 Dec 2024 18:06:09 -0800, Dimensional Traveler
<dtravel@sonic.net> wrote:
I watched a new movie, "Subservience" with Megan Fox on Netflix over
the weekend. Scared the you know what out of me. Was even scarier than
"The Terminator".
  https://www.imdb.com/title/tt24871974/?ref_=nm_flmg_job_1_cdt_t_2

And one of the latest versions of AI has shown self-preservation
responses....
Heck, that was part of Asimov's Laws of Robotics 50+ years ago.
But it wasn't programmed into the AI, it was an emergent behavior.
I think we are being unclear here.

The AIs are programmed to learn from a data set.

What they say comes from what they were trained on. For this to be
"emergent" (in the most likely intended meaning), it would have to be
something that the training set could never, ever produce. Good luck
showing /that/, with the training set so large and the AI's logic
being very opaque.
Referring to their training as "programming" is ... confusing.

Isn't ours?

And the behaviour of the AI /must/ be a product
of its training... unless it has random actions
as well.
My point is simply that conflating their programming with their
training is confusing and should probably be avoided. IOW, semantic
goo strikes again!
Keep in mind that "emergence" is being suggested here. But since, to
the extent that I understand it, these "AIs" just put one word after
the other, I see no reason why they shouldn't put those words out in
some situations.
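That "one word after the other" idea can be sketched in a few lines. This is a toy bigram counter over a made-up corpus, nothing remotely like a real LLM, but it illustrates the point above: the model can only ever emit words that appeared in its training data.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny
# made-up training corpus, then generate one word after the other.
corpus = "the robot must protect its own existence the robot must obey".split()

# For each word, record every word observed immediately after it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, rng=random.Random(0)):
    """Emit up to n more words, each sampled from the words seen
    after the previous one in training.  Stops if the previous word
    was never followed by anything."""
    out = [start]
    for _ in range(n):
        seen_after = follows.get(out[-1])
        if not seen_after:
            break
        out.append(rng.choice(seen_after))
    return out

words = generate("the", 6)
print(" ".join(words))
# Every emitted word came from the training data -- nothing the
# training set "could never, ever produce" can appear.
assert set(words) <= set(corpus)
```

Real systems use vastly larger contexts and statistical machinery, but the output is still drawn, one token at a time, from patterns present in the training data, which is why "emergent" claims are hard to verify either way.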
And, yes, I am ignoring "random actions". Which some would claim do
not exist. I see no point in opening another can of worms.
The training was done, for at least one model, using the Internet,
and the Internet is full of lies, half-truths, and outright fiction.
I bet the AI in question learned from one or more old SF stories,
or from movies like "Colossus: The Forbin Project", about computers
that take over the world to ensure their own survival.
Training AI (or Artificially Stupid) machines must be
done with as accurate a source of information as possible.
Machines are great diagnosticians when trained on medical
information. I bet they could handle other fields as well, but
they have to be trained on accurate data.
Humans, on the other hand, live enmeshed in the
myths of their culture, and some myths are in no way
realistic. This creates foolish assumptions and ideas, because
the myths of one culture are not the myths of another.