Subject : Re: smart people doing stupid things
From : invalid (at) *nospam* invalid.invalid (Edward Rawde)
Newsgroups : sci.electronics.design
Date : 18 May 2024, 16:47:27
Organization : BWH Usenet Archive (https://usenet.blueworldhosting.com)
Message-ID : <v2af21$14mr$1@nnrp.usenet.blueworldhosting.com>
References : 1 2 3 4 5 6 7
User-Agent : Microsoft Outlook Express 6.00.2900.5931
"Don Y" <
blockedofcourse@foo.invalid> wrote in message
news:v29fi8$2l9d8$1@dont-email.me...On 5/17/2024 9:46 PM, Edward Rawde wrote:
>> Where it will be in 10 years is impossible to predict.
>
> I agree.
>
> So, you can be optimistic (and risk disappointment) or
> pessimistic (and risk being pleasantly surprised).
> Unfortunately, the consequences aren't as trivial as
> choosing between the steak or lobster...
>
> But, as the genie is out of the bottle, there is nothing
> to stop others from using/abusing it in ways that we might
> not consider palatable!  (Do you really think an adversary
> will follow YOUR rules for its use -- if they see a way to
> achieve gains?)
>
>>> The risk from AI is that it makes decisions without being
>>> able to articulate a "reason" in a verifiable form.
>
>> I know/have known plenty of people who can do that.
>
> But *you* can evaluate the "goodness" (correctness?) of their
> decisions by an examination of their reasoning.
But then the decision has already been made, so why bother with such an
examination?
> So, you can opt to endorse their decision or reject it
> -- regardless of THEIR opinion on the subject.
>
> E.g., if a manager makes stupid decisions regarding product
> design, you can decide if you want to deal with the
> inevitable (?) outcome from those decisions -- or "move on".
> You aren't bound by his decision making process.
>
> With AIs making societal-scale decisions (directly or
> indirectly), you get caught up in the side-effects of those.
Certainly AI decisions will depend on their training, just as human
decisions do.
And you can still decide whether to be bound by that decision.
Unless, of course, the AI has got itself into a position where it can make
you do it anyway by persuasion, coercion, or force.
Just like humans do.
Human treatment of other animals tends not to be the best, except in a
minority of cases.
How do we know that AI will treat us in a way we consider to be reasonable?
Human managers often don't. Sure you can make a decision to leave that job
but it's not an option for many people.
Actors had better watch out if this page is anything to go by:
https://openai.com/index/sora/

I remember a discussion with a colleague many decades ago about where
computers were going in the future.
My view was that at some future time, human actors would no longer be
needed.
His view was that he didn't think that would ever be possible.
Now it's looking like I might live long enough to get to type something like:
Prompt: Create a new episode of Blake's Seven.