Re: smart people doing stupid things

Subject : Re: smart people doing stupid things
From : invalid (at) *nospam* invalid.invalid (Edward Rawde)
Groups : sci.electronics.design
Date : 20 May 2024, 05:45:25
Organization : BWH Usenet Archive (https://usenet.blueworldhosting.com)
Message-ID : <v2eh0n$lt4$1@nnrp.usenet.blueworldhosting.com>
References : 18 prior messages in thread
User-Agent : Microsoft Outlook Express 6.00.2900.5931
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v2ecma$3pjer$3@dont-email.me...
On 5/19/2024 8:22 AM, Edward Rawde wrote:
That depends on the qualities and capabilities that you lump into
"HUMAN intelligence".  Curiosity?  Creativity?  Imagination?  One
can be exceedingly intelligent and of no more "value" than an
encyclopedia!
>
Brains appear to have processing and storage spread throughout the brain.
There is no separate information processing and separate storage,
though some brain areas may be more processing than storage (cerebellum?)
So AI should be trainable to be of whatever value is wanted, which will
no doubt be maximum value.
>
How do you *teach* creativity?  curiosity?  imagination?  How do you
MEASURE these to see if your teaching is actually accomplishing its goals?

Same way as with a human.

>
I am CERTAIN that AIs will be able to process the information available
to "human practitioners" (in whatever field) at least to the level of
competence that they (humans) can, presently.  It's just a question of
resources thrown at the AI and the time available for it to "respond".
>
But, this ignores the fact that humans are more resourceful at probing
the environment than AIs ("No thumbs!") without mechanical assistance.
>
So AI will get humans to do it. At least initially.
>
No, humans will *decide* if they want to invest the effort to
provide the AI with the data it seeks -- assuming the AI knows
how to express those goals.

Much of what humans do is decided by others.

>
"Greetings, Dr Mengele..."
>
If there comes a time when the AI has its own "effectors",
how do we know it won't engage in "immoral" behaviors?

We don't.

>
Could (would?) an AI decide to explore space?
>
Definitely. And it would not be constrained by the need for a specific
temperature, air composition and pressure, and g.
>
Why would *it* opt to make the trip?

Same reason humans might if it were possible.
Humans sometimes take their pets with them on vacation.

 Surely, it could wait indefinitely
for light-speed data transmission back to earth...

Surely it could also sleep, without noticing, for many years on the way to
another star.

>
How would it evaluate the cost-benefit tradeoff for such an enterprise?

Same way a human does.

Or, would it just assume that whatever IT wanted was justifiable?

Why would it do anything different from what a human would do, if it's
trained to be human-like?

>
  Or, the ocean depths?
Or, the rain forest?  Or, would its idea of exploration merely be a
visit to another net-neighbor??
>
Its idea would be what it had become due to its training, just like a
human.
>
Humans inherently want to explore.  There is nothing "inherent" in
an AI; you have to PUT those goals into it.

What you do is you make an AI which inherently wants to explore.
You might in some way train it that it's good to explore.
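One concrete way this is done today (a toy sketch, not a claim about how any
particular AI works; all names here are made up): reinforcement-learning
agents can be given an intrinsic "novelty" reward, so that reaching
unfamiliar states is itself rewarding. Count-based exploration on a small
grid world looks roughly like this:

```python
# Sketch: an agent "trained that it's good to explore" via a novelty
# bonus that shrinks each time a state is revisited.
import random
from collections import defaultdict

random.seed(0)

def choose_action(q, state, actions, eps=0.1):
    # Epsilon-greedy over learned action values.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def train(steps=5000, size=5, alpha=0.5, gamma=0.9, bonus=1.0):
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    q = defaultdict(float)        # state-action values
    visits = defaultdict(int)     # how often each cell was seen
    state = (0, 0)
    for _ in range(steps):
        a = choose_action(q, state, actions)
        nxt = (min(size - 1, max(0, state[0] + a[0])),
               min(size - 1, max(0, state[1] + a[1])))
        visits[nxt] += 1
        # Intrinsic reward: novelty decays with visit count, so the
        # agent "inherently wants" states it hasn't been to.
        r = bonus / (visits[nxt] ** 0.5)
        best_next = max(q[(nxt, b)] for b in actions)
        q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
        state = nxt
    return visits

print(len(train()))  # number of distinct cells the agent discovered
```

The only reward here is novelty itself; the agent ends up covering the grid
without ever being told where to go.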

>
Should it want to explore what happens when two nuclear missiles
collide in mid air?  Isn't that additional data that it could use?
Or, what happens if we consume EVEN MORE fossilized carbon?  So it
can tune its climate models for the species that FOLLOW man?
>
Would (could) it consider human needs as important?
>
Depends on whether it is trained to.
It may in some sense keep us as pets.
>
How do you express those "needs"?  How do you explain morality to
a child?  Love?  Belonging?  Purpose?  How do you measure your success
in instilling these needs/beliefs?

Same way as you do with humans.

>
  (see previous post)
How would it be motivated?
>
Same way humans are.
>
So, AIs have the same inherent NEEDS that humans do?

Why wouldn't they if they're trained to be like humans?

>
The technological part of "AI" is the easy bit.  We already know general
approaches and, with resources, can refine those.  The problem (as I've
tried to suggest above) is instilling some sense of morality in the AI.

Same with humans.

Humans seem to need legal mechanisms to prevent them from engaging in
behaviors that are harmful to society.  These are only partially
successful and rely on The Masses to push back on severe abuses.  Do you
build a shitload of AIs and train them to have independent goals with
a shared goal of preventing any ONE (or more) from interfering with
THEIR "individual" goals?

No, you just make them like humans.

So as AI gets better and better there is clearly a lot to think about.
Otherwise it may become more like humans than we would like.

I don't claim to know how you do this or that with AI.
But I do know that we now seem to be moving towards being able to make
something which matches the complexity of the human central nervous system.
I don't say we are there yet and I don't know when we will be.
In the past it would have been unthinkable that we could really make
something like a human brain because nothing of sufficient complexity could
be made.
It is my view that you don't need to know how a brain works to be able to
make a brain.
You just need something which has sufficient complexity which learns to
become what you want it to become.
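That "learn without understanding the internals" point can be shown in a few
lines (a toy illustration, nothing more): gradient descent tunes parameters
from examples alone, with no model of how the target rule works inside.

```python
# Fit a rule from examples only; the trainer never sees the rule itself.
def train_linear(data, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y   # prediction error on one example
            w -= lr * err * x       # nudge parameters downhill
            b -= lr * err
    return w, b

# Examples drawn from a hidden rule (here, y = 2x + 1).
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train_linear(data)
print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Scale the same idea up to billions of parameters and you get the modern
training recipe: sufficient complexity, shaped only by examples of what you
want it to become.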

You seem to think that humans have something which AI can never have.
I don't. So perhaps we should leave it there.

>
How do you imbue an AI with the idea of "self"?  (so, in the degenerate
case, it is willing to compromise and join with others to contain an abuser?)
>
Would it attempt to think beyond its
limitations (something humans always do)?  Or, would those be immutable
in its understanding of the world?
>
I don't mean to suggest that AI will become human, or will need to become
human. It will more likely have its own agenda.
>
Where will that agenda come from?
>
No-one knows exactly. That's why "One thing which bothers me about AI is
that if it's like us but way more intelligent than us then..."
>
Maybe we need Gort (The Day the Earth Stood Still) but the problem with
that is: will Gort be American, Chinese, Russian, Other, or none of the
above?
My preference would be none of the above.
>
Will it inherit it from watching B-grade
sci-fi movies?  "Let there be light!"