Re: smart people doing stupid things

Subject : Re: smart people doing stupid things
From : blockedofcourse (at) *nospam* foo.invalid (Don Y)
Newsgroups : sci.electronics.design
Date : 20. May 2024, 04:31:37
Organization : A noiseless patient Spider
Message-ID : <v2ecma$3pjer$3@dont-email.me>
User-Agent : Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.2.2
On 5/19/2024 8:22 AM, Edward Rawde wrote:
>> That depends on the qualities and capabilities that you lump into
>> "HUMAN intelligence".  Curiosity?  Creativity?  Imagination?  One
>> can be exceedingly intelligent and of no more "value" than an
>> encyclopedia!
>
> Brains appear to have processing and storage spread throughout the brain.
> There is no separate information processing and separate storage.
> Some brain areas may be more processing than storage (cerebellum?)
> So AI should be trainable to be of whatever value is wanted, which no doubt
> will be maximum value.

How do you *teach* creativity?  Curiosity?  Imagination?  How do you
MEASURE these to see if your teaching is actually accomplishing its goals?

>> I am CERTAIN that AIs will be able to process the information available
>> to "human practitioners" (in whatever field) at least to the level of
>> competence that they (humans) can, presently.  It's just a question of
>> resources thrown at the AI and the time available for it to "respond".
>>
>> But, this ignores the fact that humans are more resourceful at probing
>> the environment than AIs ("No thumbs!") without mechanical assistance.
>
> So AI will get humans to do it. At least initially.

No, humans will *decide* if they want to invest the effort to
provide the AI with the data it seeks -- assuming the AI knows
how to express those goals.

"Greetings, Dr Mengele..."

If there comes a time when the AI has its own "effectors",
how do we know it won't engage in "immoral" behaviors?

>> Could (would?) an AI decide to explore space?
>
> Definitely. And it would not be constrained by the need for a specific
> temperature, air composition and pressure, and g.

Why would *it* opt to make the trip?  Surely, it could wait indefinitely
for light-speed data transmission back to earth...

How would it evaluate the cost-benefit tradeoff for such an enterprise?
Or, would it just assume that whatever IT wanted was justifiable?

>> Or, the ocean depths?
>> Or, the rain forest?  Or, would its idea of exploration merely be a
>> visit to another net-neighbor??
>
> Its idea would be what it had become due to its training, just like a
> human.

Humans inherently want to explore.  There is nothing "inherent" in
an AI; you have to PUT those goals into it.

Should it want to explore what happens when two nuclear missiles
collide in mid air?  Isn't that additional data that it could use?
Or, what happens if we consume EVEN MORE fossilized carbon?  So it
can tune its climate models for the species that FOLLOW man?

>> Would (could) it consider human needs as important?
>
> Depends on whether it is trained to.
> It may in some sense keep us as pets.

How do you express those "needs"?  How do you explain morality to
a child?  Love?  Belonging?  Purpose?  How do you measure your success
in instilling these needs/beliefs?

>> (see previous post)
>>
>> How would it be motivated?
>
> Same way humans are.

So, AIs have the same inherent NEEDS that humans do?

The technological part of "AI" is the easy bit.  We already know general
approaches and, with resources, can refine those.  The problem (as I've
tried to suggest above) is instilling some sense of morality in the AI.

Humans seem to need legal mechanisms to prevent them from engaging in
behaviors that are harmful to society.  These are only partially
successful and rely on The Masses to push back on severe abuses.  Do you
build a shitload of AIs and train them to have independent goals with
a shared goal of preventing any ONE (or more) from interfering with
THEIR "individual" goals?

How do you imbue an AI with the idea of "self"?  (so, in the degenerate case,
it is willing to compromise and join with others to contain an abuser?)

>> Would it attempt to think beyond its
>> limitations (something humans always do)?  Or, would those be immutable
>> in its understanding of the world?
>
> I don't mean to suggest that AI will become human, or will need to become
> human. It will more likely have its own agenda.

>> Where will that agenda come from?
>
> No-one knows exactly. That's why "One thing which bothers me about AI is
> that if it's like us but way more intelligent than us then..."
>
> Maybe we need Gort (The Day the Earth Stood Still), but the problem with
> that is: will Gort be an American, Chinese, Russian, Other, or none of the
> above?

My preference would be none of the above.

>> Will it inherit it from watching B-grade
>> sci-fi movies?  "Let there be light!"
