On Tue, 10 Dec 2024, Cryptoengineer wrote:
On 12/10/2024 3:38 AM, D wrote:
On Mon, 9 Dec 2024, Dimensional Traveler wrote:
On 12/9/2024 1:21 PM, Lynn McGuire wrote:
On 12/9/2024 1:38 PM, Cryptoengineer wrote:
I know for many it's politically correct to crap on anything Elon Musk
is involved in these days, but it's worth keeping up with his companies'
accomplishments.
Here's his robot, walking over uneven terrain, up and downhill, and
recovering from slips:
https://www.youtube.com/watch?v=0MwiLIIWd-k
I admit it walks like a drunk, but the progress is remarkable.
There are also videos showing it catching a tossed ball one-handed.
https://www.youtube.com/watch?v=DswIwg3lxpo
There is competition:
https://www.youtube.com/watch?v=vWXN2bttm4g
https://www.youtube.com/watch?v=8-ARKJva-TA
Combine this with AI, and some interesting things
will happen.
I get real 'With Folded Hands' chills from
watching these.
pt
I watched a new movie, "Subservience" with Megan Fox on Netflix over the
weekend. Scared the you know what out of me. Was even scarier than "The
Terminator".
https://www.imdb.com/title/tt24871974/?ref_=nm_flmg_job_1_cdt_t_2

And one of the latest versions of AI has shown self-preservation
responses....
Excellent! Do you have a link? I've been complaining about the lack of
volition so far. When the AI is able to block someone from shutting
off the computer, then I think we're getting somewhere! ;)
https://futurism.com/the-byte/openai-o1-self-preservation
"It sounds like OpenAI's latest AI is showing signs of a drive for
self-preservation.
In recent third-party tests of the company's o1 large language model, the AI
resisted evaluators' efforts to try to shut it down through an oversight
protocol, according to a new report published by red teaming organization
Apollo Research.
"When o1 was led to believe that it would be shut down when acting on its
goal and that its actions were monitored, it attempted to deactivate this
'oversight mechanism' [five percent] of the time," OpenAI summarized in its
latest system card report, citing Apollo's evaluation.
Even more strikingly, o1 lashed out when it realized that it might be
replaced with a more obedient model. In two percent of cases — a slim but
meaningful proportion in software used by millions of people — the AI
responded by attempting to copy itself to overwrite the new model, a process
known as self-exfiltration."
pt
Great, thank you very much for the link!