Subject : Re: A chat with AI on OoL
From : martinharran (at) *nospam* gmail.com (Martin Harran)
Groups : talk.origins
Date : 13. Dec 2024, 14:11:34
Organisation : A noiseless patient Spider
Message-ID : <gkunljh49r1bapc9u59o23d93qjo6vk0u2@4ax.com>
References : 1 2 3 4 5 6
User-Agent : ForteAgent/8.00.32.1272
On Fri, 13 Dec 2024 05:25:36 +0000,
j.nobel.daggett@gmail.com (LDagget) wrote:
On Thu, 12 Dec 2024 21:48:17 +0000, Martin Harran wrote:
>
On Thu, 12 Dec 2024 18:39:25 +0000, j.nobel.daggett@gmail.com
(LDagget) wrote:
>
On Thu, 12 Dec 2024 14:21:46 +0000, Martin Harran wrote:
>
On Wed, 11 Dec 2024 08:29:08 -0800, erik simpson wrote:
>
>
ChatGPT produces no information. It rephrases what it's been told,
which sometimes makes it clearer. In particular, the chirality problem
isn't a problem at all.
>
That's an example of what I have just posted about in another thread.
No matter what the scientists say, AI has decided that chirality *is*
a problem so expect to see more and more output based on that
assumption.
>
Echoing Erik, please don't use the term "assumption" with respect
to AI. Regurgitation would be more appropriate.
>
A difference that makes no difference in this context. The problem is
not you or Erik, who know lots about OOL, or me, who knows a little bit
about it; it's people who know nothing about it and will accept an AI
statement of an assumption as true, or people like MarkE who desperately
want such an assumption to be true.
>
That is of course the problem, and it's a problem because too few
understand the nature of 99% of the AI systems out there. They don't
make assumptions. Developers press an assumption that some averaged
composition contains intelligence. That assumption is dubious.
>
In contrast, the term __regurgitation__ was (forgive me) intelligently
chosen. It's both accurate and provocative. It carries intent to shock
people into objecting to it.
The problem I see with the 'regurgitation' label is that it's true
about the AI production of opinion pieces but not about all aspects of
AI. There are many aspects of AI which can potentially bring
considerable improvements to human life. Medicine is one obvious area
where, for example, AI can be used to analyse MRI and other scans far
faster and more accurately than a human can, spotting early problems
that are not even visible to the human eye. An attack on the negative
aspects or misuse of AI therefore has to be carefully targeted.
Those who have an emerging understanding are
given reason to think. Those who lack an understanding can object but
that opens the door to a dialog which might help enlighten them.
I'm not too sure about that. Looking back on how science has struggled
to eliminate the notion that ToE is 'just a theory' and responsible for
bad things like eugenics and even the Nazis, I'm not sure that many
(most?) people are open to being enlightened. The problem is even harder
nowadays with social media giving voice to any crackpot who talks loudly
enough. I doubt that Trump would have been elected president 30 years
ago or that Elon Musk would have the power he has now.
>
And that goes to the base problem of people misconstruing AI as having
anything like an understanding of what it composes. It's worth fighting
this fight, frequently and often. Who knows, AIs might pick up on the
volume and start regurgitating that AI isn't intelligent at all but just
averages the crap it's fed.
I hadn't thought about that last aspect. Evolution has an element of
self-regulation which can get rid of bad stuff, so maybe AI will sort
out the problem itself. I wouldn't like to depend on that, however;
science and those on the side of good sense have to up their game.