The Training was done with one model using the Internet

On Sat, 14 Dec 2024 23:03:56 +0000, Robert Carnegie
<rja.carnegie@gmail.com> wrote:

>On 12/12/2024 16:24, Paul S Person wrote:
>>On Wed, 11 Dec 2024 08:51:11 -0800, Dimensional Traveler
>><dtravel@sonic.net> wrote:
>>
>>>On 12/10/2024 11:49 PM, The Horny Goat wrote:
>>>>On Mon, 9 Dec 2024 18:06:09 -0800, Dimensional Traveler
>>>><dtravel@sonic.net> wrote:
>>>>
>>>>>I watched a new movie, "Subservience" with Megan Fox on Netflix over
>>>>>the weekend. Scared the you know what out of me. Was even scarier than
>>>>>"The Terminator".
>>>>>
>>>>>  https://www.imdb.com/title/tt24871974/?ref_=nm_flmg_job_1_cdt_t_2
>>>>>
>>>>>And one of the latest versions of AI has shown self-preservation
>>>>>responses....
>>>>
>>>>Heck that was part of Asimov's Laws of Robotics 50+ years ago
>>>
>>>But it wasn't programmed into the AI, it was an emergent behavior.
>>
>>I think we are being unclear here.
>>
>>The AIs are programmed to learn from a data set.
>>
>>What they say comes from what they were trained on. For this to be
>>"emergent" (in the most likely intended meaning), it would have to be
>>something that the training set could never, ever produce. Good luck
>>showing /that/, with the training set so large and the AI's logic
>>being very opaque.
>>
>>Referring to their training as "programming" is ... confusing.
>
>Isn't ours?
>
>And, the behaviour of the AI /must/ be a product
>of its training... unless it has random actions
>as well.

My point is simply that confusing their programming with their
training is confusing and should probably be avoided. IOW, semantic
goo strikes again!
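For what it's worth, here is a toy sketch (Python) of how the labour
is divided. The two-line "corpus" and the words in it are made up for
illustration, and a real system is incomparably larger, but the shape
is the same: the part that is programmed is a generic counting loop,
and everything the thing can later say is fixed by whatever data gets
fed through that loop.

  from collections import defaultdict

  # The only hand-written ("programmed") part: count which word
  # follows which in the training text.  The corpus below is a
  # made-up stand-in for real training data.
  corpus = [
      "the robot answered the question",
      "the robot refused the order",
  ]

  counts = defaultdict(lambda: defaultdict(int))
  for sentence in corpus:
      words = ["<start>"] + sentence.split() + ["<end>"]
      for prev, nxt in zip(words, words[1:]):
          counts[prev][nxt] += 1

  # Turn the counts into next-word probabilities ("the model").
  model = {
      prev: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
      for prev, nexts in counts.items()
  }

  print(model["the"])
  # {'robot': 0.5, 'question': 0.25, 'order': 0.25}

Nothing above says what the model will say; that comes out of the
data, not out of the loop.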
Keep in mind that "emergence" is being suggested here. But since, to
the extent that I understand it, these "AIs" just put one word after
the other, I see no reason why they shouldn't put these words out in
some situations.
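"One word after the other" can be taken almost literally. A toy
sampler over a next-word table like the one sketched above (again,
the words and the probabilities here are invented, and a real model
conditions on far more than the previous word):

  import random

  # Hypothetical next-word probabilities, standing in for whatever
  # the training produced.
  PROBS = {
      "<start>":  {"the": 1.0},
      "the":      {"robot": 0.5, "question": 0.25, "order": 0.25},
      "robot":    {"answered": 0.5, "refused": 0.5},
      "answered": {"the": 1.0},
      "refused":  {"the": 1.0},
      "question": {"<end>": 1.0},
      "order":    {"<end>": 1.0},
  }

  def generate(max_words=10):
      # Put one word after the other until <end> or the limit.
      word, out = "<start>", []
      while len(out) < max_words:
          nexts = PROBS[word]
          word = random.choices(list(nexts), weights=list(nexts.values()))[0]
          if word == "<end>":
              break
          out.append(word)
      return " ".join(out)

  print(generate())   # e.g. "the robot refused the order"

Note that nothing can come out of generate() that the table cannot
produce, which is the problem with calling any particular output
"emergent".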
And, yes, I am ignoring "random actions", which some would claim do
not exist. I see no point in opening another can of worms.