Re: XhatGPT; How many physicists accept GR?

Subject : Re: XhatGPT; How many physicists accept GR?
From : ross.a.finlayson (at) *nospam* gmail.com (Ross Finlayson)
Newsgroups : sci.physics.relativity
Date : 16 Nov 2024, 18:18:36
Message-ID : <pBmcne82xpNxTqX6nZ2dnZfqnPadnZ2d@giganews.com>
References : 1 2 3 4 5
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0
On 11/16/2024 08:54 AM, Ross Finlayson wrote:
On 11/16/2024 05:54 AM, J. J. Lodder wrote:
ProkaryoticCaspaseHomolog <tomyee3@gmail.com> wrote:
>
On Sat, 16 Nov 2024 4:54:33 +0000, Sylvia Else wrote:
>
On 16-Nov-24 9:52 am, rhertz wrote:
ChatGPT entered a crisis here, after I asked HOW MANY (worldwide).
>
>
>
You realise that it's just a language model based on trawling the
Internet?
>
It's not intelligent. It doesn't know anything. It cannot reason. It just
composes sentences based on word probabilities derived from the
trawling.
>
And guess what? The Internet contains a lot of garbage; garbage that's
been fed into the language model.
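The word-probability composition described above can be sketched in a few lines. This is an editor's illustration, not code from the thread; the corpus and the `generate` helper are invented for the sketch:

```python
import random
from collections import defaultdict

# A toy bigram "language model": it composes text purely from
# which-word-follows-which statistics observed in a corpus.
corpus = ("the model composes sentences based on word probabilities "
          "derived from the corpus and so the model repeats what the "
          "corpus contains").split()

# Record the observed followers of each word (duplicates are kept,
# so sampling from the list matches the empirical probabilities).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Chain words, each sampled from the observed followers of the last."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < length:
        options = following.get(words[-1])
        if not options:  # dead end: this word never appears mid-corpus
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

The output is always locally plausible (every adjacent pair occurs somewhere in the corpus) without the model "knowing" anything, which is the point being made above.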
>
..and an increasingly large proportion of the garbage being fed into
the large language models is garbage GENERATED by large language
models.
>
The "Mad Cow Disease" crisis of the 1980s is believed to have been due
to the practice of feeding cattle meal that contained cattle and sheep
by-products. As LLM output becomes increasingly difficult to distinguish
from human output (which is often bad enough!), I predict an outbreak of
"Mad LLM Disease".
>
Those models are trained on texts from 2022 and earlier,
with good reason,
>
Jan
>
>
The Google/Bing monopoly, which started out funded by USG projects
and then morphed into a giant anarcho-capitalist tar-money-pit,
should make for a great anti-trust thrust with regard to
the many, many narratives, including the common sense,
the conventional wisdom, and the great store of academic
output, which proper academe should reflect, not re-invent.
>
Or, "they did not re-invent the wheel".
>
"A.I." has been around a long time, it's
not so hard, ..., it's so easy.
>
Trust-busting
>
>
Making sense of interacting with information systems
requires a thorough education, rather than being
"operationally-conditioned" to "follow the red dot".
>
So, literacy tests, reading comprehension, and closed-book.
Because, that open-book is an inconstant thaumaturgist.
>
The Wikipedia at least seems alright, yet it
also suffers from propaganda and aggrandizement.
>
Herf it and start over: helps to have a library.
And academia. Of course it's established in more
civilized nations that a free public education is a right.
>
>
>
>
>
People who've swallowed the line that "the large language model
is just a vector-space arithmetization according to an
inscrutable ontology or expert-system" are woefully under-informed,
in that "the large language model" is "the model of language"
and belongs altogether to a theory of language and
communication, and in that any number of
"actors" and "agents" are involved beyond what's made
presentable among algorithms in "information retrieval"
with regard to "semantic content".
"Information retrieval" and "knowledge representation"
are always the underlying concepts; the hook-line-and-sinker
claim that everybody swallowed, that "it's not really thinking",
is foolish, though that's one way to do it, and it suffices
for typical tasks like "find my route" or "order my meds".
Which some never employ, ....
It's like, "there's an app for that", and it's like,
"I wouldn't know, I don't 'app'."
Anyway, "the large language model is dumb and crazy"
is a lie, because otherwise it would be liable for
all its knowledge and advice.
"Machine learning", then, is like "numerical methods":
there's always an implicit error term. For a numerical
method the error term is modeled and bounded by its
asymptotics; for machine learning it's not merely
hopefully bounded, it's instead quite thoroughly
formally unreliable.
Of course even "statistics" has its problems.
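The numerical-methods comparison can be made concrete: a method like the composite trapezoid rule carries an explicit, modelable error term of order h^2, which can be checked empirically. An editor's sketch, with an illustrative integrand and step counts:

```python
import math

# Composite trapezoid rule for the integral of f over [a, b] with n panels.
# Its error term is explicit: proportional to h^2, where h = (b - a) / n.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

exact = 2.0  # integral of sin(x) from 0 to pi
for n in (10, 20, 40):
    err = abs(trapezoid(math.sin, 0.0, math.pi, n) - exact)
    print(n, err)
# Halving h cuts the error by about 4x, as the h^2 term predicts.
```

No comparable a-priori bound attaches to a learned model's output, which is the contrast being drawn above.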
