Re: smart people doing stupid things

Subject : Re: smart people doing stupid things
From : blockedofcourse (at) *nospam* foo.invalid (Don Y)
Newsgroups : sci.electronics.design
Date : 19. May 2024, 11:43:28
Organization : A noiseless patient Spider
Message-ID : <v2chkc$3anli$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9 10 11 12
User-Agent : Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.2.2
On 5/18/2024 6:53 PM, Edward Rawde wrote:
Because the AI can't *explain* its "reasoning" to you, you have no way
of updating your assessment of its (likely) correctness -- esp in
THIS instance.
>
I'm not sure I get why it's so essential to have AI explain its reasons.
>
Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons?  Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly accepting
them?
 That's the point. I don't. I have to accept a doctor's decision on my
treatment because I am not medically trained.
So, that means you can't make sense of anything he would say to you to
justify his decision?  Recall, everyone has bias -- including doctors.
If he assumes you will fail to follow the instructions he would LIKE
to give you and, instead, recommends only what he feels you will
LIKELY do, you've been shortchanged.
I asked my doctor what my ideal weight should be.  He told me.
The next time I saw him, I weighed my ideal weight.  He was surprised
as few patients actually heeded his advice on that score.
Another time, he wanted to prescribe a medication for me.  I told
him I would fail to take it -- not deliberately but just because
I'm not the sort who remembers to take "pills".  Especially if
"ongoing" (not just a two week course for an infection/malady).
He gave me an alternative "solution" which eliminated the need for
the medication, yielding the same result without any "side effects".
SWMBO has a similar relationship with her doctor.  Tell us the
"right" way to solve the problem, not the easy way because you think
we'll behave like your "nominal" patients.
The same is true of one of our dogs.  We made changes that the
vet suggested (to avoid medication) and a month later the vet
was flabbergasted to see the difference.
Our attitude is that you should EDUCATE us and let US make the
decisions for our care, based on our own value systems, etc.

If I need some plumbing done I don't expect the plumber to give detailed
reasons why a specific type of pipe was chosen. I just want it done.
>
If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to reinforce
your opinion/suspicions.
>
We hired folks to paint the house many years ago.  One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint
do you think it will take?"  This chosen because it sounds innocent
enough that a customer would likely ask it.
>
One candidate answered "300 gallons".  At which point, I couldn't
contain the affront:  "We're not painting a f***ing BATTLESHIP!"
 I would have said two million gallons just for the pleasure of watching you
go red in the face.
No "anger" or embarassment, here.  We just couldn't contain the fact
that we would NOT be calling him back to do the job!

I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
   EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"
>
In either case, he was disqualified BY his "reasoning".
 I would have likely given him the job. Those who are good at painting houses
aren't necessarily good at estimating exactly how much paint they will need.
They just buy more paint as needed.
One assumes that he has painted OTHER homes and has some recollection of
the amount of paint purchased for the job.  And, if this is his livelihood,
one assumes that such activities would have been *recent* -- not months ago
(how has he supported himself "without work"?).
Is my house considerably larger or smaller than the other houses that you
have painted?  (likely not)  Does it have a different surface texture
that could alter the "coverage" rate?  (again, likely not)  So, shouldn't you
be able to ballpark an estimate?  "What did the LAST HOUSE you painted require
by way of paint quantity?"
Each engineering job that I take on differs from all that preceded it
(by my choice).  Yet, I have to come up with a timeframe and a "labor
estimate" within that timeframe as I do only fixed cost jobs.  If
I err on either score, I either lose out on the bid *or* lose
"money" on the effort.  Yet, despite vastly different designs, I
can still get a good ballpark estimate of the job a priori so that
neither I nor the client are "unhappy".
I'd not be "off" by an order of magnitude (as the paint estimate was!)

In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans),
it seems only natural that you would want to UNDERSTAND their
"reasoning".  Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.
 It's true that you may want to understand their reasoning but it's likely
that you might have to accept that you can't.
The point is that NO ONE can!  Even the folks who designed and implemented
the AI are clueless.  AND THEY KNOW IT.
"It *seems* to give correct results when fed the test cases...  We *expected*
this but have no idea WHY a particular result was formulated as it was!"

If I want to play chess with a computer I don't expect it to give detailed
reasons why it made each move. I just expect it to win if it's set to much
above beginner level.
>
Then you don't expect to LEARN from the chess program.
 Sure I do, but I'm very slow to get better at chess. I tend to make rash
decisions when playing chess.
Then your cost of learning is steep.  I want to know how to RECOGNIZE
situations that will give me opportunities OR risks so I can pursue or
avoid them.  E.g., I don't advance the King to the middle of the
board just to "see what happens"!

When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.
 I usually spot my error immediately when the computer makes me look stupid.
But you don't know how you GOT to that point so you don't know how
to avoid that situation in the first place!  Was it because you
sacrificed too many pieces too early?  Or allowed protections to
be drawn out, away from the King?  Or...
You don't learn much from *a* (bad) move.  You learn from
bad strategies/sequences of moves.

As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.
>
If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)
 I don't agree. Learning something like that does not depend on the plumber
explaining his decisions.
You have someone SKILLED IN THE ART at hand.  Instead of asking HIM,
you're going to LATER take the initiative to research the cause of
your problem?  Seems highly inefficient.
A neighbor was trying to install some stops and complained that he couldn't
tighten down the nuts sufficiently:  "Should it be THIS difficult?"  I
pulled the work apart and showed him the *tiny* mistake he was making
in installing the compression fittings -- and why that was manifesting as
"hard to tighten".  I could have, instead, fixed the problem for him and
returned home -- him, none the wiser.

A human chess player may be able to give detailed reasons for making a
specific move but would not usually be asked to do this.
>
If the human was expected to TEACH then those explanations would be
essential TO that teaching!
>
If the student was wanting to LEARN, then he would select a player that
was capable of teaching!
 Sure, but so what?  Most chess games between humans are not about teaching.
So, everything in the world is a chess game?  Apparently so as you
don't seem to want to learn from your plumber, doctor, chessmate, ...

The AI doesn't care about you, one way or the other.  Any "bias" in
its conclusions has been baked in from the training data/process.
>
Same with humans.
>
That's not universally true.  If it was, then all decisions would
be completely motivated by personal gain.
 Humans generally don't care much for people they have no personal knowledge
of.
I guess all the brouhaha about the Middle East is a hallucination?  Or,
do you think all of the people involved overseas are personally related to
the folks around the world showing interest in their plight?
Humans tend to care about others and expect others to care about *them*.
Else, why "campaign" about any cause?  *I* don't have breast cancer so
what point in the advertisements asking for donations?  I don't know
any "wounded warriors" so why is someone wasting money on those ads
instead of addressing those *needs*?  Clearly, these people THINK that
people care about other people else they wouldn't be asking for "gifts"!

Do you know what that data was?  Can you assess its bias?  Do the folks
who *compiled* the training data know?  Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?
>
Humans have the same issue. You can't see into another person's brain to see
what bias they may have.
>
Exactly.  But, you can pose questions of them and otherwise observe their
behaviors in unrelated areas and form an opinion.
 If they are, say, a doctor then yes you can ask questions about your
treatment but you can't otherwise observe their behavior.
I watch the amount of time my MD gives me above and beyond the "15 minute slot"
that his office would PREFER to constrain him to.  I watch my dentist respond to
calls to his PERSONAL cell phone WHILE OUT OF TOWN.  I see the bicycle that
SWMBO's MD rides to work each day.
These people aren't highlighting these aspects of their behavior.  But,
they aren't hiding them, either.  Anyone observant would "notice".

I've a neighbor who loudly claims NOT to be racist.  But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
>
He also is very vocal about The Border (an hour from here).  Yet,
ALWAYS hires Mexicans.  Does he ever check to see if they are here
legally?  Entitled to work?  Or, is he really only concerned with
the price they charge?
>
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?
 I'm not following what that has to do with AI.
It speaks to bias.  Bias that people have and either ignore or
deny, despite it being obvious to others.
Those "others" will react to you WITH consideration of that bias
factored into their actions.
A neighbor was (apparently) abusing his wife.  While "his side of
the story" remains to be told, most of us have decided that this
is consistent enough with his OTHER behaviors that it is more
likely than not.  If asked to testify, he can be reasonably sure
none will point to any "good deeds" that he has done (as he hasn't
DONE any!)

Lots of blacks in prison.  Does that "fact" mean that blacks are
more criminally inclined?  Or, that they are less skilled at evading
the consequences of their crimes?  Or, that there is a bias in the
legal/enforcement system?
>
I don't see how that's relevant to AI which I think is just as capable of
bias as humans are.
>
Fact contraindicates bias.  So, bias -- anywhere -- is a distortion of
"Truth".
Would you want your doctor to give a different type of care to your wife
than to you?  Because of a (hidden?) bias in favor of men (or, against
women)?
If you were that female, how would you regard that bias?
 I may not want it but it's possible it could exist.
It might be the case that I could do nothing about it.
If you believe the literature, there are all sorts of populations
discriminated against in medicine.  Doctors tend to be more aggressive
in treating "male" problems than those of women patients -- apparently
including female doctors.
If you passively interact with your doctor, you end up with that
bias unquestioned in your care.  Thankfully (in our experience),
challenging the doctor has always resulted in them rising to the
occasion, thus improving the care "dispensed".

All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
coming
into our (US) country.  Or, is that just hyperbole ("illegal" immigrants
tend to commit FEWER crimes)?  Will the audience be biased in its
acceptance/rejection of that "assertion"?
>
Who knows, but whether it's human or AI it will have its own personality
and its own biases.
>
But we, in assessing "others", strive to identify those biases (unless we want
to blindly embrace them as "comforting/reinforcing").
>
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc.  He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc.  And, I just smile and let his comments roll
off me.  SWMBO asks why I spend *any* time with him.
>
"I find it entertaining!" (!!)
 Oh. Now I get why we're having this discussion.
I am always looking for opportunities to learn.  How can you be so critical
of ALL these things (not just myself but EVERYONE around you, including
all of the folks you *hire*!) and still remain in this "situation"?
You can afford to move anywhere (this isn't even your "home") so why
stay here with these people -- and providers -- that you (appear to)
dislike?  If you go to a restaurant and are served a bad meal, do you
just eat it and grumble under your breath?  Do you RETURN to the
restaurant for "more punishment"?
Explain to me WHY you engage in such behavior.  If I visit a restaurant and
am unhappy with the meal, I bring it to the waiter's/maitre d's attention.
If I have a similar problem a second time, I just avoid the restaurant
entirely -- and see to it that I share this "recommendation" with my
friends.  There are too many other choices to "settle" for a disappointing
experience!
Annoyed with all the "illegals" coming across the border?  Then why
wouldn't you "hire white people"?  Or, at least, verify the latino's
working papers (or, hire through an agency that does this, instead of
a guy operating out of his second-hand pickup truck)!  If we closed
the border as you seem to advocate, what will you THEN do to get
cheap labor?  I.e., how do you rationalize these discrepancies in your
own mind?  (Really!  I would like to understand how such conflicting goals
can coexist FORCEFULLY in their minds!)

By contrast, I am NOT the sort who belongs to organizations, churches,
etc. ("group think").  It's much easier to see the characteristics of and
flaws *in* these things (and people) from the outside than to wrap yourself
in their culture.  If you are sheeple, you likely enjoy having others
do your thinking FOR you...
 I don't enjoy having others do my thinking for me but I'm happy to let them
do so in areas where I have no expertise.
Agreed.  But, I don't hesitate to eke out an education in the process.
Likewise, I don't expect a client to blindly accept my assessment of
a problem or its scope.  I will gladly explain why I have come to the
conclusions that I have.  Perhaps I have mistaken some of HIS requirements
and he can point that out in my explanation!  It is in both of our best
interests for him to understand what he is asking and the associated
"costs" -- else, he won't know how to formulate ideas for future projects
that could avoid some of those costs!
["You don't want to formally specify the scope of the job?  Then we just
proceed merrily along with invoices on the 1st and 15th for as long as it
takes.  THAT'S how much it's gonna cost and how long it's gonna take!
Any other questions?"]

Which is why I started this with "One thing which bothers me about AI is
that if it's like us but way more intelligent than us then..."
>
What's to fear, there?  If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human
"experts"/professionals.
 Who says we have the ultimate authority to ignore AI if it gets cleverer
than us?
AIs aren't omnipotent.  Someone has to design, build, feed and power them.
Do you think the AI is going to magically grow limbs and start fashioning
weaponry to defend itself?  (Or, go on the *offense*?)
If you want to put people in places of power who are ignorant of these
issues, then isn't it your fault for the outcomes that derive?
People love their inexpensive 85 inch TVs.  Yet gripe that they lost their
jobs to an Asian firm.  Or, that steak is now $10/pound?  You like living
past your mid-50's-heart-attack but lament women and "farrinners" in medicine?
If you are offered an AI that eliminates all of your "unwanted contact"
(telephone, SMS, email, etc.) would you not avail yourself of it?
If that AI leaked all of your WANTED contacts to another party
(as disclosed in the EULA), when would you choose to live without
its services?
Do the words "free" and "lunch" mean anything to you?

Now it's looking like I might live long enough to get to type something like
Prompt: Create a new episode of Blake's Seven.
>
The question is whether or not you will be able to see a GOOD episode.
>
I think AI will learn the difference between a good or not so good episode
just like humans do.
>
How would it learn?  Would *it* be able to perceive the "goodness" of
the episode?  If so, why produce one that it didn't think was good?
HUMANS release non-good episodes because there is a huge cost to
making them that has already been incurred.  An AI could just scrub the
disk and start over.  What cost, there?
>
Particularly if it gets plenty of feedback from humans about whether or not
they liked the episode it produced.
>
That assumes people will be the sole REACTIVE judge of completed
episodes.  Part of what makes entertainment entertaining is
the unexpected.  Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously.  Stories leave lasting impressions when executed well
*or* when a twist catches viewers off guard.
>
Would an AI create something like Spaceballs?  Would it perceive the
humor in the various corny "bits" sprinkled throughout?  How would
YOU explain the humor to it?
 I would expect it to generate humor the same way humans do.
How?  Do you think comics don't appraise their own creations BEFORE
testing them on (select) audiences?  That they don't, first, chuckle
at it, refine it and then sort through those they think have the
most promise?
Do you think an AI could appreciate its own humor *without* feedback
from humans?  Do you think it could experience *pride* in its accomplishments
without external validation?  You're expecting an AI to be truly sentient
and attributing human characteristics to it beyond "intelligence".

The opening sequence to Buckaroo Banzai has the protagonist driving a
"jet car" THROUGH a (solid) mountain, via the 8th dimension.  After
the drag chute deploys and WHILE the car is rolling to a stop, the
driver climbs out through a window.  The camera remains closely
focused on the driver's MASKED face (you have yet to see it unmasked)
while the car continues to roll away behind him.  WHILE YOUR ATTENTION
IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
quietly (because it is now at a distance).  Would the AI appreciate THAT
humor?  It *might* repeat that scene in one of its creations -- but,
only after having SEEN it, elsewhere.  Or, without understanding the
humor and just assuming dieseling to be a common occurrence in ALL
vehicles!
 Same way it might appreciate this:
https://www.youtube.com/watch?v=tYJ5_wqlQPg
 
>
It might then play itself a few million created episodes to refine its
ability to judge good ones.
