Re: smart people doing stupid things

Subject: Re: smart people doing stupid things
From: blockedofcourse (at) *nospam* foo.invalid (Don Y)
Newsgroups: sci.electronics.design
Date: 19 May 2024, 02:41:28
Organization: A noiseless patient Spider
Message-ID: <v2bhs4$31hh9$1@dont-email.me>
References: 1 2 3 4 5 6 7 8 9 10
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.2.2
On 5/18/2024 4:32 PM, Edward Rawde wrote:
>>> But then the decision has already been made so why bother with such an
>>> examination?
>>
>> So you can update your assessment of the party's decision making
>> capabilities/strategies.
>
> But it is still the case that the decision has already been made.

That doesn't mean that YOU have to abide by it.  Or, even that
the other party has ACTED on the decision.  I.e., decisions are
not immutable.

>> When a child is "learning", the parent is continually refining the
>> "knowledge" the child is accumulating; correcting faulty
>> "conclusions" that the child may have gleaned from its examination
>> of the "facts" it encounters.
>
> The quality of parenting varies a lot.

Wouldn't you expect the training for AIs to similarly vary
in capability?

>>>> So, you can opt to endorse their decision or reject it -- regardless
>>>> of THEIR opinion on the subject.
>>>>
>>>> E.g., if a manager makes stupid decisions regarding product
>>>> design, you can decide if you want to deal with the
>>>> inevitable (?) outcome from those decisions -- or "move on".
>>>> You aren't bound by his decision making process.
>>>>
>>>> With AIs making societal-scale decisions (directly or
>>>> indirectly), you get caught up in the side-effects of those.
>>>
>>> Certainly AI decisions will depend on their training, just as human
>>> decisions do.
>>
>> But human learning happens over years and often in a supervised context.
>> AIs "learn" so fast that only another AI would be productive at
>> refining its training.
>
> In that case how did AlphaZero manage to teach itself to play chess by
> playing against itself?
Because it was taught how to learn from its own actions -- it,
in effect, qualifying as "another AI".
I bake a lot.  My recipes are continuously evolving.  How did I
manage to "teach myself" how to bake *better* than my earlier
efforts?  There was no external agency (like the creator of the AI)
that endowed me with that skillset or desire.
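
(To make the self-play idea concrete, here's a toy sketch in Python.
It is NOT AlphaZero's actual algorithm -- every name and number below
is invented -- but it shows how an agent can improve with no teacher
beyond the game's rules and its own outcomes:)

import random

values = {}      # position -> current value estimate (positions must be hashable)
ALPHA = 0.1      # learning rate
EPSILON = 0.1    # exploration rate

def self_play_game(start, legal_moves, play, is_over, score):
    """Play one game against itself, then update the value table."""
    pos, visited = start, []
    while not is_over(pos):
        moves = legal_moves(pos)
        if random.random() < EPSILON:
            m = random.choice(moves)   # explore a random move
        else:                          # exploit current estimates
            m = max(moves, key=lambda mv: values.get(play(pos, mv), 0.0))
        visited.append(pos)
        pos = play(pos, m)
    final = score(pos)   # e.g. +1 win, 0 draw, -1 loss (one fixed viewpoint)
    for p in visited:    # nudge each visited position toward the final score
        v = values.get(p, 0.0)
        values[p] = v + ALPHA * (final - v)
    # (per-side sign handling omitted for brevity)

Each game sharpens the estimates that guide move selection in the
next game -- the "teacher" is just the recorded consequences of the
agent's own play.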

>>>> And you can still decide whether to be bound by that decision.
>>>
>>> Unless, of course, the AI has got itself into a position where it will
>>> see you do it anyway by persuasion, coercion, or force.
>>
>> Consider the mammogram example.  The AI is telling you that this
>> sample indicates the presence -- or likelihood -- of cancer.
>> You have a decision to make... an ACTIVE choice:  do you accept
>> its Dx or reject it?  Each choice comes with a risk/cost.
>> If you ignore the recommendation, injury (death?) can result from
>> your "inaction" on the recommendation.  If you take some remedial
>> action, injury (in the form of unnecessary procedures/surgery)
>> can result.
>>
>> Because the AI can't *explain* its "reasoning" to you, you have no way
>> of updating your assessment of its (likely) correctness -- esp in
>> THIS instance.
>
> I'm not sure I get why it's so essential to have AI explain its reasons.
Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons?  Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly accepting
them?
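
(Though, for the mammogram case, the accept/reject choice itself is
just expected-cost arithmetic.  A sketch with made-up numbers -- the
probability and costs below are purely illustrative:)

p_cancer   = 0.08    # AI's (assumed) probability that cancer is present
cost_miss  = 100.0   # relative harm of ignoring a real cancer
cost_false = 5.0     # relative harm of unnecessary procedures/surgery

expected_cost_ignore = p_cancer * cost_miss          # act as if negative
expected_cost_treat  = (1 - p_cancer) * cost_false   # act as if positive

print("ignore:", expected_cost_ignore)   # 8.0
print("treat: ", expected_cost_treat)    # 4.6
# Acting on the Dx wins *here* -- but only an explanation would let
# you sanity-check the p_cancer figure itself.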

> If I need some plumbing done I don't expect the plumber to give detailed
> reasons why a specific type of pipe was chosen. I just want it done.
If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to reinforce
your opinion/suspicions.
We hired folks to paint the house many years ago.  One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint
do you think it will take?"  This chosen because it sounds innocent
enough that a customer would likely ask it.
One candidate answered "300 gallons".  At which point, I couldn't
contain the affront:  "We're not painting a f***ing BATTLESHIP!"
I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
   EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"
In either case, he was disqualified BY his "reasoning".
In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans),
it seems only natural that you would want to UNDERSTAND their
"reasoning".  Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.

> If I want to play chess with a computer I don't expect it to give detailed
> reasons why it made each move. I just expect it to win if it's set to much
> above beginner level.
Then you don't expect to LEARN from the chess program.
When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.
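
(A move record is enough to rebuild every intermediate position.
For instance, assuming the python-chess library is available -- the
game below is just Scholar's Mate, for illustration:)

import chess

# Replay a recorded game move by move, printing each position so a
# student can hunt for the point where things went wrong.
moves = ["e4", "e5", "Qh5", "Nc6", "Bc4", "Nf6", "Qxf7"]

board = chess.Board()
for san in moves:
    board.push_san(san)    # apply the move in standard notation
    print(board, "\n")     # inspect the resulting position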
As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.
If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)

> A human chess player may be able to give detailed reasons for making a
> specific move but would not usually be asked to do this.
If the human was expected to TEACH then those explanations would be
essential TO that teaching!
If the student was wanting to LEARN, then he would select a player that
was capable of teaching!

>>>> Just like humans do.
>>>
>>> Human treatment of other animals tends not to be of the best, except in a
>>> minority of cases.
>>> How do we know that AI will treat us in a way we consider to be
>>> reasonable?
>>
>> The AI doesn't care about you, one way or the other.  Any "bias" in
>> its conclusions has been baked in from the training data/process.
>
> Same with humans.
That's not universally true.  If it was, then all decisions would
be completely motivated for personal gain.

>> Do you know what that data was?  Can you assess its bias?  Do the folks
>> who *compiled* the training data know?  Can they "tease" the bias out
>> of the data -- or, are they oblivious to its presence?
>
> Humans have the same issue. You can't see into another person's brain to see
> what bias they may have.
Exactly.  But, you can pose questions of them and otherwise observe their
behaviors in unrelated areas and form an opinion.
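
(The same probing can be attempted on training data itself.  A crude
sketch -- the records and field names are invented -- that compares
outcome rates across groups; a large gap is a flag to investigate,
not proof of bias by itself:)

from collections import defaultdict

def outcome_rates(records, group_key, outcome_key):
    """Fraction of favorable outcomes, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += 1 if r[outcome_key] else 0
    return {g: hits[g] / totals[g] for g in totals}

data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]
print(outcome_rates(data, "group", "approved"))   # {'A': 1.0, 'B': 0.5}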
I've a neighbor who loudly claims NOT to be racist.  But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
He also is very vocal about The Border (an hour from here).  Yet,
ALWAYS hires Mexicans.  Does he ever check to see if they are here
legally?  Entitled to work?  Or, is he really only concerned with
the price they charge?
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?

>> Lots of blacks in prison.  Does that "fact" mean that blacks are
>> more criminally inclined?  Or, that they are less skilled at evading
>> the consequences of their crimes?  Or, that there is a bias in the
>> legal/enforcement system?
>
> I don't see how that's relevant to AI which I think is just as capable of
> bias as humans are.
Fact contraindicates bias.  So, bias -- anywhere -- is a distortion of "Truth".
Would you want your doctor to give a different type of care to your wife
than to you?  Because of a (hidden?) bias in favor of men (or, against women)?
If you were that female, how would you regard that bias?

>> All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming
>> into our (US) country.  Or, is that just hyperbole ("illegal" immigrants
>> tend to commit FEWER crimes)?  Will the audience be biased in its
>> acceptance/rejection of that "assertion"?
>
> Who knows, but whether it's human or AI it will have its own personality
> and its own biases.
But we, in assessing "others", strive to identify those biases (unless we want
to blindly embrace them as "comforting/reinforcing").
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc.  He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc.  And, I just smile and let his comments roll
off me.  SWMBO asks why I spend *any* time with him.
"I find it entertaining!" (!!)
By contrast, I am NOT the sort who belongs to organizations, churches,
etc. ("group think").  It's much easier to see the characteristics of and
flaws *in* these things (and people) from the outside than to wrap yourself
in their culture.  If you are sheeple, you likely enjoy having others
do your thinking FOR you...

>> And that's not even beginning to address other aspects of the
>> "presentation" (e.g., turn left girls).
>>
>> Real estate agents would likely be the next to go; much of their
>> jobs being trivial "hosting" and "transport".  Real estate *law*
>> is easily codified into an AI to ensure buyers/sellers get
>> correct service.  An AI could also evaluate (and critique)
>> the "presentation" of the property.  "Carry me IN your phone..."
>
> Which is why I started this with "One thing which bothers me about AI
> is that if it's like us but way more intelligent than us then..."
What's to fear, there?  If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human
"experts"/professionals.

>>> Now it's looking like I might live long enough to get to type something
>>> like
>>> Prompt: Create a new episode of Blake's Seven.
>>
>> The question is whether or not you will be able to see a GOOD episode.
>
> I think AI will learn the difference between a good or not so good episode
> just like humans do.
How would it learn?  Would *it* be able to perceive the "goodness" of
the episode?  If so, why produce one that it didn't think was good?
HUMANS release non-good episodes because there is a huge cost to
making it that has already been incurred.  An AI could just scrub the
disk and start over.  What cost, there?

> Particularly if it gets plenty of feedback from humans about whether or not
> they liked the episode it produced.
That assumes people will be the sole REACTIVE judge of completed
episodes.  Part of what makes entertainment entertaining is
the unexpected.  Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously.  Stories leave lasting impressions when executed well
*or* when a twist catches viewers off guard.
Would an AI create something like Space Balls?  Would it perceive the
humor in the various corny "bits" sprinkled throughout?  How would
YOU explain the humor to it?
The opening sequence to Buckaroo Banzai has the protagonist driving a
"jet car" THROUGH a (solid) mountain, via the 8th dimension.  After
the drag chute deploys and WHILE the car is rolling to a stop, the
driver climbs out through a window.  The camera remains closely
focused on the driver's MASKED face (you have yet to see it unmasked)
while the car continues to roll away behind him.  WHILE YOUR ATTENTION
IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
quietly (because it is now at a distance).  Would the AI appreciate THAT
humor?  It *might* repeat that scene in one of its creations -- but,
only after having SEEN it, elsewhere.  Or, without understanding the
humor and just assuming dieseling to be a common occurrence in ALL
vehicles!

> It might then play itself a few million created episodes to refine its
> ability to judge good ones.
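
(That last suggestion is, in effect, preference learning.  A toy
sketch -- the numeric "features" standing in for an episode and the
mechanical stand-in for a human verdict are both invented:)

import random

weights = [0.0, 0.0, 0.0]   # the "judge": a linear score over features
LR = 0.01                   # learning rate

def score(features):
    return sum(w * f for w, f in zip(weights, features))

def update(features, liked):
    # nudge the score up for liked episodes, down for disliked ones
    target = 1.0 if liked else -1.0
    err = target - score(features)
    for i, f in enumerate(features):
        weights[i] += LR * err * f

# feedback loop: generate an episode, collect a thumbs up/down, refine
for _ in range(1000):
    episode = [random.uniform(-1, 1) for _ in range(3)]
    liked = episode[0] > 0          # stand-in for the human's verdict
    update(episode, liked)

print(weights)   # the first weight should (roughly) dominate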
