Subject: Re: Be very, very afraid...
From: cd999666 (at) *nospam* notformail.com (Cursitor Doom)
Groups: sci.electronics.design
Date: 27 Oct 2024, 09:34:49
Other headers
Organisation: A noiseless patient Spider
Message-ID: <vfktv9$7486$1@dont-email.me>
References: 1 2
User-Agent: Pan/0.149 (Bellevue; 4c157ba)
On Sun, 27 Oct 2024 05:53:10 GMT, Jan Panteltje wrote:
On a sunny day (Sat, 26 Oct 2024 17:43:26 -0700) it happened Don Y
<blockedofcourse@foo.invalid> wrote in <vfk2bp$3ueps$1@dont-email.me>:
No, not that AI is going to take your job ("render you redundant") but,
rather, that it is going to be relied upon for information that is
provably incorrect:
<https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>
it seems the most expeditious way to get these sorts of problems fixed
is NOT to regulate how it can be used but, rather, to hold firms using
it financially (criminally?) accountable for its follies...
Sure, like doctors that make mistakes:
https://qualitysafety.bmj.com/content/33/2/109
Does them using AI reduce errors?
If so they could be accountable for not using it...
Like a second opinion, always required?
It is complicated...
The system could be hacked ;-)
Power outage...
etc. etc.
Anyone concerned about medical confidentiality should *never* consent to
having their records held in digital form anyway. Databases holding such
information are prime targets for cyberattacks, as the UK's NHS is quickly
finding out.