>>> But then the decision has already been made so why bother with such an
>>> examination?
>> So you can update your assessment of the party's decision making
>> capabilities/strategies.  So, you can
>> opt to endorse their decision or reject it -- regardless of
>> THEIR opinion on the subject.
>>
>> E.g., if a manager makes stupid decisions regarding product
>> design, you can decide if you want to deal with the
>> inevitable (?) outcome from those decisions -- or "move on".
>> You aren't bound by his decision making process.
>>
>> With AIs making societal-scale decisions (directly or
>> indirectly), you get caught up in the side-effects of those.
> But it is still the case that the decision has already been made.

That doesn't mean that YOU have to abide by it.  Or, even that...

>> When a child is "learning", the parent is continually refining the
>> "knowledge" the child is accumulating; correcting faulty
>> "conclusions" that the child may have gleaned from its examination
>> of the "facts" it encounters.
> The quality of parenting varies a lot.

Wouldn't you expect the training for AIs to similarly vary?

>> Certainly AI decisions will depend on their training, just as human
>> decisions do.  But human learning happens over years and often in a
>> supervised context.  AIs "learn" so fast that only another AI would
>> be productive at refining its training.
> In that case how did AlphaZero manage to teach itself to play chess by
> playing against itself?

Because it was taught how to learn from its own actions.
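To make "taught how to learn" concrete, here is a toy sketch of
self-play: tabular Q-learning on one-pile Nim (take 1-3 stones; taking
the last stone wins).  This is nothing like AlphaZero's actual
machinery -- no neural net, no tree search, and the game, sizes, and
constants are invented for illustration.  The point is just that the
learning *rule* is supplied up front, and the agent improves solely
from the outcomes of its own moves:

    import random

    N = 21                    # starting stones (toy choice)
    ALPHA, EPS = 0.1, 0.2     # learning rate, exploration rate
    Q = {(s, a): 0.0 for s in range(1, N + 1)
         for a in range(1, min(3, s) + 1)}

    def best(s):
        # greedy move with s stones remaining
        return max(range(1, min(3, s) + 1), key=lambda a: Q[(s, a)])

    for _ in range(100_000):  # self-play games
        s = N
        while s > 0:
            explore = random.random() < EPS
            a = random.randint(1, min(3, s)) if explore else best(s)
            s2 = s - a
            # Taking the last stone wins; otherwise a position is worth
            # minus the opponent's best prospects (negamax).
            target = 1.0 if s2 == 0 else -Q[(s2, best(s2))]
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2            # the same Q table plays both sides

    # Known-optimal play is "leave a multiple of 4", i.e. take s % 4.
    print(all(best(s) == s % 4 for s in range(1, N + 1) if s % 4))

Nobody hands it games annotated by experts; it "plays against itself"
and the built-in update rule turns outcomes into strategy.  That is
the sense in which it was taught HOW to learn.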
>>>> And you can still decide whether to be bound by that decision.
>>> Unless, of course, the AI has got itself into a position where it will
>>> see that you do it anyway by persuasion, coercion, or force.
>>>
>>> I'm not sure I get why it's so essential to have AI explain its reasons.
>> Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
>>
>> Consider the mammogram example.  The AI is telling you that this
>> sample indicates the presence -- or likelihood -- of cancer.
>> You have a decision to make... an ACTIVE choice:  do you accept
>> its Dx or reject it?  Each choice comes with a risk/cost.
>>
>> If you ignore the recommendation, injury (death?) can result from
>> your "inaction" on the recommendation.  If you take some remedial
>> action, injury (in the form of unnecessary procedures/surgery)
>> can result.
>>
>> Because the AI can't *explain* its "reasoning" to you, you have no way
>> of updating your assessment of its (likely) correctness -- esp in
>> THIS instance.
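To put rough numbers on that risk/cost tradeoff, treat the AI's output
as a probability and compare expected harms.  A minimal sketch; the
figures are invented placeholders, not clinical data:

    P_CANCER = 0.08        # the AI's claimed likelihood (assumed)
    COST_MISSED = 100.0    # relative harm of ignoring a real cancer
    COST_WORKUP = 3.0      # relative harm of unnecessary procedures

    risk_ignore = P_CANCER * COST_MISSED   # expected cost of "inaction"
    risk_act = COST_WORKUP                 # expected cost of remediation
    print("act" if risk_act < risk_ignore else "ignore")

With *those* numbers, acting wins; nudge them and the answer flips.
Which is exactly why you'd want the AI to justify its estimate for
THIS instance.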
> If I need some plumbing done I don't expect the plumber to give detailed
> reasons why a specific type of pipe was chosen. I just want it done.

If you suspect that he may not be competent -- or may be motivated by...

> If I want to play chess with a computer I don't expect it to give detailed
> reasons why it made each move. I just expect it to win if it's set to much
> above beginner level.

Then you don't expect to LEARN from the chess program.

> A human chess player may be able to give detailed reasons for making a
> specific move but would not usually be asked to do this.

If the human was expected to TEACH then those explanations would be
essential.
> Just like humans do.
> Same with humans.

That's not universally true.  If it was, then all decisions would...
>>> Human treatment of other animals tends not to be of the best, except in a
>>> minority of cases.
>>> How do we know that AI will treat us in a way we consider to be
>>> reasonable?
>> The AI doesn't care about you, one way or the other.  Any "bias" in
>> its conclusions has been baked in from the training data/process.
>> Do you know what that data was?  Can you assess its bias?  Do the folks
>> who *compiled* the training data know?  Can they "tease" the bias out
>> of the data -- or, are they oblivious to its presence?
> Humans have the same issue. You can't see into another person's brain to see
> what bias they may have.

Exactly.  But, you can pose questions of them and otherwise observe their
behavior.
>> Lots of blacks in prison.  Does that "fact" mean that blacks are
>> more criminally inclined?  Or, that they are less skilled at evading
>> the consequences of their crimes?  Or, that there is a bias in the
>> legal/enforcement system?
> I don't see how that's relevant to AI which I think is just as capable of
> bias as humans are.

Fact contraindicates bias.  So, bias -- anywhere -- is a distortion of
"Truth".
>> All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming
>> into our (US) country.  Or, is that just hyperbole ("illegal" immigrants
>> tend to commit FEWER crimes)?  Will the audience be biased in its
>> acceptance/rejection of that "assertion"?
> Who knows, but whether it's human or AI it will have its own personality
> and its own biases.

But we, in assessing "others", strive to identify those biases (unless
we want...)
>> And that's not even beginning to address other aspects of the
>> "presentation" (e.g., turn left girls).
>>
>> Real estate agents would likely be the next to go; much of their
>> jobs being trivial "hosting" and "transport".  Real estate *law*
>> is easily codified into an AI to ensure buyers/sellers get
>> correct service.  An AI could also evaluate (and critique)
>> the "presentation" of the property.  "Carry me IN your phone..."
> Which is why I started this with "One thing which bothers me about AI is
> that if it's like us but way more intelligent than us then..."

What's to fear, there?  If *you* have the ultimate authority to make
the decision...
>>>>> Now it's looking like I might live long enough to get to type something
>>>>> like
>>>>>
>>>>> Prompt: Create a new episode of Blake's Seven.
>>>> The question is whether or not you will be able to see a GOOD episode.
>>> I think AI will learn the difference between a good or not so good episode
>>> just like humans do.
>> How would it learn?  Would *it* be able to perceive the "goodness" of
>> an episode?
> Particularly if it gets plenty of feedback from humans about whether or not
> they liked the episode it produced.
> It might then play itself a few million created episodes to refine its
> ability to judge good ones.

That assumes people will be the sole REACTIVE judge of completed
episodes...
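A cartoon of that loop: human feedback trains a "judge", then the
system ranks its own candidates against it.  This is a generic
preference-learning sketch, not any specific production pipeline --
the "episodes" are just random feature vectors and the human "taste"
is a hidden linear function, all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 8                              # toy "episode" feature size
    true_taste = rng.normal(size=DIM)    # hidden human preference

    # 1) Human feedback: liked (1) / disliked (0) on random episodes.
    episodes = rng.normal(size=(2000, DIM))
    noise = rng.normal(scale=0.5, size=2000)
    liked = (episodes @ true_taste + noise > 0).astype(float)

    # 2) Train a judge: logistic regression via plain gradient descent.
    w = np.zeros(DIM)
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(episodes @ w)))
        w -= 0.1 * episodes.T @ (p - liked) / len(liked)

    # 3) "Play itself": generate candidates, keep the judge's favorite.
    candidates = rng.normal(size=(100_000, DIM))
    pick = candidates[np.argmax(candidates @ w)]

    print("judge's pick, true score    :", pick @ true_taste)
    print("average candidate, true score:", (candidates @ true_taste).mean())

But note that the judge only knows what the feedback taught it.  It can
rank what already exists; it doesn't originate taste -- which is the
REACTIVE limitation, above.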