Re: smart people doing stupid things

Subject: Re: smart people doing stupid things
From: invalid (at) *nospam* invalid.invalid (Edward Rawde)
Newsgroups: sci.electronics.design
Date: 19 May 2024, 01:32:07
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Message-ID: <v2bdpp$1b5n$1@nnrp.usenet.blueworldhosting.com>
References: 1 2 3 4 5 6 7 8 9
User-Agent : Microsoft Outlook Express 6.00.2900.5931
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v2baf7$308d7$1@dont-email.me...
On 5/18/2024 7:47 AM, Edward Rawde wrote:
But, as the genie is out of the bottle, there is nothing to stop others
from using/abusing it in ways that we might not consider palatable!
(Do you really think an adversary will follow YOUR rules for its use --
if they see a way to achieve gains?)
>
The risk from AI is that it makes decisions without being able to
articulate
a "reason" in a verifiable form.
>
I know/have known plenty of people who can do that.
>
But *you* can evaluate the "goodness" (correctness?) of their
decisions by an examination of their reasoning.
>
But then the decision has already been made so why bother with such an
examination?
>
So you can update your assessment of the party's decision making
capabilities/strategies.

But it is still the case that the decision has already been made.

>
When a child is "learning", the parent is continually refining the
"knowledge" the child is accumulating; correcting faulty
"conclusions" that the child may have gleaned from its examination
of the "facts" it encounters.

The quality of parenting varies a lot.

>
In the early days of AI, inference engines were really slow;
forward chaining was an exhaustive process (before Rete).
So, it was not uncommon to WATCH the "conclusions" (new
knowledge) that the engine would derive from its existing
knowledge base.  You would use this to "fix" poorly defined
"facts" so the AI wouldn't come to unwarranted conclusions.
>
AND GATE THOSE INACCURATE CONCLUSIONS FROM ENTERING THE
KNOWLEDGE BASE!
>
  Women bear children.
  The Abbess is a woman.
  Great-great-grandmother Florence is a woman.
  Therefore, the Abbess and Florence bear children.
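That failure mode is easy to reproduce. Below is a toy forward-chaining loop (a minimal sketch, not any particular engine; the predicate and rule names are made up for illustration) that derives exactly this unwarranted conclusion from an over-general rule:

```python
# Naive forward chaining: keep applying every rule to every matching fact
# until no new facts appear.  No Rete network, no operator in the loop.

facts = {("woman", "Abbess"), ("woman", "Florence")}

# Over-general rule: "women bear children", applied to ALL women.
rules = [("woman", "bears_children")]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                new_fact = (conclusion, subj)
                if pred == premise and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

kb = forward_chain(facts, rules)
# Without a gate, the unwarranted conclusions land in the knowledge base:
# both the Abbess and Florence are now asserted to bear children.
```

An interactive operator watching the run would reject both derived facts and tighten the over-general rule before those "conclusions" could poison later inferences.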
>
Now, better algorithms (Rete, et al.), faster processors,
SIMD/MIMD, cheap/fast memory make it possible to process
very large knowledge bases faster than an interactive "operator"
can validate the conclusions.
>
Other technologies don't provide information to an "agency"
(operator) for validation; e.g., LLMs can't explain why they
produced their output whereas a Production System can enumerate
the rules followed for your inspection (and CORRECTION).
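The difference is easy to show in miniature. A production system, however toy-like, can log every rule firing, so any conclusion can be traced back to the rule and the triggering fact (a hypothetical sketch; the rule and fact names are invented):

```python
# Toy production system that records a justification for every firing,
# so the "reasoning" behind any conclusion can be enumerated afterward.

facts = {("woman", "Abbess")}
rules = {"R1: women bear children": ("woman", "bears_children")}

trace = []  # (rule_name, triggering_fact, derived_fact)

changed = True
while changed:
    changed = False
    for name, (premise, conclusion) in rules.items():
        for pred, subj in list(facts):
            derived = (conclusion, subj)
            if pred == premise and derived not in facts:
                facts.add(derived)
                trace.append((name, (pred, subj), derived))
                changed = True

# Unlike an LLM's output, every conclusion carries its derivation:
for name, because, fact in trace:
    print(f"{fact} because {because} via {name}")
```

Spotting a bad firing in the trace tells you exactly which rule to correct; an LLM offers no analogous handle.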
>
  So, you can
opt to endorse their decision or reject it -- regardless of
THEIR opinion on the subject.
>
E.g., if a manager makes stupid decisions regarding product
design, you can decide if you want to deal with the
inevitable (?) outcome from those decisions -- or "move on".
You aren't bound by his decision making process.
>
With AIs making societal-scale decisions (directly or
indirectly), you get caught up in the side-effects of those.
>
Certainly AI decisions will depend on their training, just as human
decisions do.
>
But human learning happens over years and often in a supervised context.
AIs "learn" so fast that only another AI would be productive at
refining their training.

In that case, how did AlphaZero manage to teach itself to play chess by
playing against itself?

>
And you can still decide whether to be bound by that decision.
Unless, of course, the AI has got itself into a position where it will
see that you do it anyway, by persuasion, coercion, or force.
>
Consider the mammogram example.  The AI is telling you that this
sample indicates the presence -- or likelihood -- of cancer.
You have a decision to make... an ACTIVE choice:  do you accept
its Dx or reject it?  Each choice comes with a risk/cost.
If you ignore the recommendation, injury (death?) can result from
your "inaction" on the recommendation.  If you take some remedial
action, injury (in the form of unnecessary procedures/surgery)
can result.
>
Because the AI can't *explain* its "reasoning" to you, you have no way
of updating your assessment of its (likely) correctness -- esp in
THIS instance.

I'm not sure I get why it's so essential to have AI explain its reasons.
If I need some plumbing done I don't expect the plumber to give detailed
reasons why a specific type of pipe was chosen. I just want it done.
If I want to play chess with a computer I don't expect it to give detailed
reasons why it made each move. I just expect it to win if it's set to much
above beginner level.
A human chess player may be able to give detailed reasons for making a
specific move but would not usually be asked to do this.

>
Just like humans do.
Human treatment of other animals tends not to be the best, except in a
minority of cases.
How do we know that AI will treat us in a way we consider to be
reasonable?
>
The AI doesn't care about you, one way or the other.  Any "bias" in
its conclusions has been baked in from the training data/process.

Same with humans.

>
Do you know what that data was?  Can you assess its bias?  Do the folks
who *compiled* the training data know?  Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?

Humans have the same issue. You can't look into another person's brain to
see what bias they may have.

>
Lots of blacks in prison.  Does that "fact" mean that blacks are
more criminally inclined?  Or, that they are less skilled at evading
the consequences of their crimes?  Or, that there is a bias in the
legal/enforcement system?

I don't see how that's relevant to AI, which I think is just as capable of
bias as humans are.

>
All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming
into our (US) country.  Or, is that just hyperbole ("illegal" immigrants
tend to commit FEWER crimes)?  Will the audience be biased in its
acceptance/rejection of that "assertion"?

Who knows, but whether it's human or AI it will have its own personality
and its own biases.
That's why I started this with "One thing which bothers me about AI is
that if it's like us but way more intelligent than us then..."

>
Human managers often don't. Sure, you can make a decision to leave that
job, but it's not an option for many people.
>
Actors had better watch out if this page is anything to go by:
https://openai.com/index/sora/
>
I remember a discussion with a colleague many decades ago about where
computers were going in the future.
My view was that at some future time, human actors would no longer be
needed.
His view was that he didn't think that would ever be possible.
>
If I was a "talking head" (news anchor, weather person), I would be VERY
afraid for my future livelihood.  Setting up a CGI newsroom would be
a piece of cake.  No need to pay for "personalities", "wardrobe",
"hair/makeup", etc.  "Tune" voice and appearance to fit the preferences
of the viewership.  Let viewers determine which PORTIONS of the WORLD
news they want to see/hear presented without incurring the need for
a larger staff (just feed the stories from the wire services to your
*CGI* talking heads!)
>
And that's not even beginning to address other aspects of the
"presentation" (e.g., turn left girls).
>
Real estate agents would likely be the next to go; much of their
jobs being trivial "hosting" and "transport".  Real estate *law*
is easily codified into an AI to ensure buyers/sellers get
correct service.  An AI could also evaluate (and critique)
the "presentation" of the property.  "Carry me IN your phone..."

Which is why I started this with "One thing which bothers me about AI is
that if it's like us but way more intelligent than us then..."

>
Now it's looking like I might live long enough to get to type something
like:
Prompt: Create a new episode of Blake's Seven.
>
The question is whether or not you will be able to see a GOOD episode.

I think AI will learn the difference between a good and a not-so-good
episode just like humans do.
Particularly if it gets plenty of feedback from humans about whether or
not they liked the episodes it produced.
It might then play itself a few million generated episodes to refine its
ability to judge good ones.
