Subject : Re: A chat with AI on OoL
From : eastside.erik (at) *nospam* gmail.com (erik simpson)
Newsgroups : talk.origins
Date : 14. Dec 2024, 17:48:47
Organisation : University of Ediacara
Message-ID : <701d4350-1928-4a9a-b215-80a3d7339db8@gmail.com>
References : 1 2 3 4 5
User-Agent : Mozilla Thunderbird
On 12/14/24 8:25 AM, LDagget wrote:
On Sat, 14 Dec 2024 16:16:28 +0000, DB Cates wrote:
On 2024-12-14 9:41 a.m., LDagget wrote:
On Sat, 14 Dec 2024 12:17:15 +0000, MarkE wrote:
>
Whatever the future is with AI, in this example it was able to provide
responses that are paradigm beyond a concerted googling effort in terms
relevance, conciseness and presentation.
>
>
That sentence is typical of an AI composition in how it
uses words without understanding them. I point in particular
to the odd use of paradigm.
>
An AI will present familiar sounding verbiage, but it's often
nonsense to those who actually understand. And it isn't your
friend because it won't help you to actually understand. I
grant it is better at making people *think* they understand
because it doesn't have as many seemingly awkward bits that
are the part where you are supposed to have to think hard.
>
And now that nonsensical use of 'paradigm' is part of the training data
for many LLMs. Sigh
Now consider how LLMs use Cognitive Dissonance, as opposed to its
meaning as coined. If only LLMs could feel pain.
Programs that could feel pain would be a good thing. Then they could fix their own bugs, and LLMs could stop talking nonsense.