Re: smart people doing stupid things

Subject: Re: smart people doing stupid things
From: invalid (at) *nospam* invalid.invalid (Edward Rawde)
Newsgroups: sci.electronics.design
Date: 19 May 2024, 03:53:50
Organization: BWH Usenet Archive (https://usenet.blueworldhosting.com)
Message-ID: <v2bm3g$7tj$1@nnrp.usenet.blueworldhosting.com>
User-Agent: Microsoft Outlook Express 6.00.2900.5931
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v2bhs4$31hh9$1@dont-email.me...
On 5/18/2024 4:32 PM, Edward Rawde wrote:
But then the decision has already been made so why bother with such an
examination?
>
So you can update your assessment of the party's decision making
capabilities/strategies.
>
But it is still the case that the decision has already been made.
>
That doesn't mean that YOU have to abide by it.  Or, even that
the other party has ACTED on the decision.  I.e., decisions are
not immutable.
>
When a child is "learning", the parent is continually refining the
"knowledge" the child is accumulating; correcting faulty
"conclusions" that the child may have gleaned from its examination
of the "facts" it encounters.
>
The quality of parenting varies a lot.
>
Wouldn't you expect the training for AIs to similarly vary
in capability?

Sure.

>
...
>
Because the AI can't *explain* its "reasoning" to you, you have no way
of updating your assessment of its (likely) correctness -- esp in
THIS instance.
>
I'm not sure I get why it's so essential to have AI explain its reasons.
>
Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons?  Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly accepting
them?

That's the point. I don't. I have to accept a doctor's decision on my
treatment because I am not medically trained.

>
If I need some plumbing done I don't expect the plumber to give detailed
reasons why a specific type of pipe was chosen. I just want it done.
>
If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to reinforce
your opinion/suspicions.
>
We hired folks to paint the house many years ago.  One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint
do you think it will take?"  This chosen because it sounds innocent
enough that a customer would likely ask it.
>
One candidate answered "300 gallons".  At which point, I couldn't
contain the affront:  "We're not painting a f***ing BATTLESHIP!"

I would have said two million gallons just for the pleasure of watching you
go red in the face.

>
I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
  EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"
>
In either case, he was disqualified BY his "reasoning".

I would have likely given him the job. Those who are good at painting houses
aren't necessarily good at estimating exactly how much paint they will need.
They just buy more paint as needed.

>
In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans),
it seems only natural that you would want to UNDERSTAND their
"reasoning".  Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.

It's true that you may want to understand their reasoning, but you might
have to accept that you can't.

>
If I want to play chess with a computer I don't expect it to give detailed
reasons why it made each move. I just expect it to win if it's set to much
above beginner level.
>
Then you don't expect to LEARN from the chess program.

Sure I do, but I'm very slow to get better at chess. I tend to make rash
decisions when playing chess.

When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.

I usually spot my error immediately when the computer makes me look stupid.

>
As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.
>
If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)

I don't agree. Learning something like that does not depend on the plumber
explaining his decisions.

>
A human chess player may be able to give detailed reasons for making a
specific move but would not usually be asked to do this.
>
If the human was expected to TEACH then those explanations would be
essential TO that teaching!
>
If the student was wanting to LEARN, then he would select a player that
was capable of teaching!

Sure, but so what? Most chess games between humans are not about teaching.

>
Just like humans do.
Human treatment of other animals tends not to be of the best, except in a
minority of cases.
How do we know that AI will treat us in a way we consider to be
reasonable?
>
The AI doesn't care about you, one way or the other.  Any "bias" in
its conclusions has been baked in from the training data/process.
>
Same with humans.
>
That's not universally true.  If it was, then all decisions would
be completely motivated for personal gain.

Humans generally don't care much for people they have no personal knowledge
of.

>
Do you know what that data was?  Can you assess its bias?  Do the folks
who *compiled* the training data know?  Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?
>
Humans have the same issue. You can't see into another person's brain to see
what bias they may have.
>
Exactly.  But, you can pose questions of them and otherwise observe their
behaviors in unrelated areas and form an opinion.

If they are, say, a doctor then yes, you can ask questions about your
treatment, but you can't otherwise observe their behavior.

>
I've a neighbor who loudly claims NOT to be racist.  But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
>
He also is very vocal about The Border (an hour from here).  Yet,
ALWAYS hires Mexicans.  Does he ever check to see if they are here
legally?  Entitled to work?  Or, is he really only concerned with
the price they charge?
>
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?

I'm not following what that has to do with AI.

>
Lots of blacks in prison.  Does that "fact" mean that blacks are
more criminally inclined?  Or, that they are less skilled at evading
the consequences of their crimes?  Or, that there is a bias in the
legal/enforcement system?
>
I don't see how that's relevant to AI which I think is just as capable of
bias as humans are.
>
Fact contraindicates bias.  So, bias -- anywhere -- is a distortion of
"Truth".
Would you want your doctor to give a different type of care to your wife
than to you?  Because of a (hidden?) bias in favor of men (or, against
women)?
If you were that female, how would you regard that bias?

I may not want it but it's possible it could exist.
It might be the case that I could do nothing about it.

>
All sorts of "criminals" ("rapists", "drug dealers", etc.) allegedly coming
into our (US) country.  Or, is that just hyperbole ("illegal" immigrants
tend to commit FEWER crimes)?  Will the audience be biased in its
acceptance/rejection of that "assertion"?
>
Who knows, but whether it's human or AI it will have its own personality
and its own biases.
>
But we, in assessing "others", strive to identify those biases (unless we want
to blindly embrace them as "comforting/reinforcing").
>
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc.  He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc.  And, I just smile and let his comments roll
off me.  SWMBO asks why I spend *any* time with him.
>
"I find it entertaining!" (!!)

Oh. Now I get why we're having this discussion.

>
By contrast, I am NOT the sort who belongs to organizations, churches,
etc. ("group think").  It's much easier to see the characteristics of and
flaws *in* these things (and people) from the outside than to wrap yourself
in their culture.  If you are sheeple, you likely enjoy having others
do your thinking FOR you...

I don't enjoy having others do my thinking for me but I'm happy to let them
do so in areas where I have no expertise.

>
And that's not even beginning to address other aspects of the
"presentation" (e.g., turn left girls).
>
Real estate agents would likely be the next to go; much of their
jobs being trivial "hosting" and "transport".  Real estate *law*
is easily codified into an AI to ensure buyers/sellers get
correct service.  An AI could also evaluate (and critique)
the "presentation" of the property.  "Carry me IN your phone..."
>
Which is why I started this with "One thing which bothers me about AI is
that if it's like us but way more intelligent than us then..."
>
What's to fear, there?  If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human
"experts"/professionals.

Who says we have the ultimate authority to ignore AI if it gets cleverer
than us?

>
Now it's looking like I might live long enough to get to type something like
Prompt: Create a new episode of Blake's Seven.
>
The question is whether or not you will be able to see a GOOD episode.
>
I think AI will learn the difference between a good or not so good episode
just like humans do.
>
How would it learn?  Would *it* be able to perceive the "goodness" of
the episode?  If so, why produce one that it didn't think was good?
HUMANS release non-good episodes because there is a huge cost to
making it that has already been incurred.  An AI could just scrub the
disk and start over.  What cost, there?
>
Particularly if it gets plenty of feedback from humans about whether or not
they liked the episode it produced.
>
That assumes people will be the sole REACTIVE judge of completed
episodes.  Part of what makes entertainment entertaining is
the unexpected.  Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously.  Stories leave lasting impressions when executed well
*or* when a twist catches viewers off guard.
>
Would an AI create something like Space Balls?  Would it perceive the
humor in the various corny "bits" sprinkled throughout?  How would
YOU explain the humor to it?

I would expect it to generate humor the same way humans do.

>
The opening sequence to Buckaroo Banzai has the protagonist driving a
"jet car" THROUGH a (solid) mountain, via the 8th dimension.  After
the drag chute deploys and WHILE the car is rolling to a stop, the
driver climbs out through a window.  The camera remains closely
focused on the driver's MASKED face (you have yet to see it unmasked)
while the car continues to roll away behind him.  WHILE YOUR ATTENTION
IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
quietly (because it is now at a distance).  Would the AI appreciate THAT
humor?  It *might* repeat that scene in one of its creations -- but,
only after having SEEN it, elsewhere.  Or, without understanding the
humor and just assuming dieseling to be a common occurrence in ALL
vehicles!

Same way it might appreciate this:
https://www.youtube.com/watch?v=tYJ5_wqlQPg

>
It might then play itself a few million created episodes to refine its
ability to judge good ones.
>
 


