Subject: potemkin understanding
From: here (at) *nospam* is.invalid (JAB)
Groups: misc.news.internet.discuss
Date: 04 Jul 2025, 00:51:46
Organisation: A noiseless patient Spider
Message-ID : <104752l$e8qf$1@dont-email.me>
User-Agent : ForteAgent/8.00.32.1272
AI models just don't understand what they're talking about
Researchers find models' success at tests hides illusion of
understanding
Researchers from MIT, Harvard, and the University of Chicago have
proposed the term "potemkin understanding" to describe a newly
identified failure mode in large language models that ace conceptual
benchmarks but lack the true grasp needed to apply those concepts in
practice.
The term comes from accounts of fake villages - Potemkin villages -
constructed at the behest of Russian military leader Grigory Potemkin
to impress Empress Catherine II.
The academics differentiate "potemkins" from "hallucination," the term
used to describe AI model errors or mispredictions. Their point is that
there's more to AI incompetence than factual mistakes: AI models lack
the ability to understand concepts the way people do, a tendency
suggested by the widely used disparaging epithet for large language
models, "stochastic parrots."
Computer scientists Marina Mancoridis, Bec Weeks, Keyon Vafa, and
Sendhil Mullainathan suggest the term "potemkin understanding" to
describe when a model succeeds at a benchmark test without
understanding the associated concepts.
https://www.theregister.com/2025/07/03/ai_models_potemkin_understanding/
Same with current Congressional Republicans....T-Borg's stochastic
parrots