AI is Dehumanizing Technology

Subject : AI is Dehumanizing Technology
From : bencollver (at) *nospam* tilde.pink (Ben Collver)
Newsgroups : comp.misc
Date : 31. May 2025, 14:07:20
Organisation : A noiseless patient Spider
Message-ID : <101euu8$1519c$1@dont-email.me>
User-Agent : slrn/1.0.3 (Linux)
AI is Dehumanization Technology
===============================

Vintage black & white illustration of a craniometer measuring
someone's skull.

<https://thedabbler.patatas.ca/images/ai-dehumanization/
craniometer2.jpg>

AI systems are an attack on workers, climate goals, our information
environment, and civil liberties. Rather than enhancing our human
qualities, these systems degrade our social relations, and undermine
our capacity for empathy and care. The push to adopt AI is, at its
core, a political project of dehumanization, and serious
consideration should be given to the idea of rejecting the deployment
of these systems entirely, especially within Canada's public sector.

* * *

At the end of February, Elon Musk--whose xAI data centre is being
powered by nearly three dozen on-site gas turbines that are poisoning
the air of nearby majority-Black neighborhoods in Memphis--went on
the Joe Rogan podcast and declared that "the fundamental weakness of
western civilization is empathy", describing "the empathy response"
as a "bug" or "exploit" in our collective programming.

<https://www.theguardian.com/technology/2025/apr/24/
elon-musk-xai-memphis>

<https://www.theguardian.com/us-news/ng-interactive/2025/apr/08/
empathy-sin-christian-right-musk-trump>

This is part of a broader movement among Silicon Valley tech
oligarchs and a billionaire-aligned political elite to advance a
disturbing notion: that by abandoning our deeply-held values of
justice, fairness, and duty toward one another--in short, by
abandoning our humanity--we are in fact promoting humanity's
advancement. It's clearly absurd, but if you're someone whose wealth
and power are predicated on causing widespread harm, it's probably
easier to sleep at night if you can tell yourself that you're serving
a higher purpose.

And, well, their AI systems and infrastructure cause an awful lot of
harm.

To get to the root of why AI systems are so socially corrosive, it
helps to first step back a bit and look at how they work. Physicist
and critical AI researcher Dan McQuillan has described AI as
'pattern-finding' tech. For example, to create an LLM such as
ChatGPT, you'd start with an enormous quantity of text, then do a lot
of computationally intensive statistical analysis to map out which
words and phrases are most likely to appear near one another.
Crunch the numbers long enough, and you end up with something similar
to the next-word prediction tool in your phone's text messaging app,
except that this tool can generate whole paragraphs of mostly
plausible-sounding word salad.

<https://www.danmcquillan.org/>
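
To make 'pattern-finding' concrete, here is a minimal sketch of the
idea as a toy bigram model in Python. This is an illustration of the
principle, not how production LLMs are actually built (they use
neural networks over sub-word tokens and vastly more data), and the
corpus and function names are invented for the example: the model
simply counts which word follows which, then generates text by
sampling statistically likely continuations.

    import random
    from collections import defaultdict, Counter

    def train_bigram_model(text):
        """Count how often each word follows each other word."""
        words = text.split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def generate(model, start, length=20):
        """Emit text by repeatedly sampling a likely next word."""
        word = start
        output = [word]
        for _ in range(length):
            counts = model.get(word)
            if not counts:
                break
            # Pick the next word in proportion to how often it
            # followed the current word in the training text.
            candidates, weights = zip(*counts.items())
            word = random.choices(candidates, weights=weights)[0]
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    model = train_bigram_model(corpus)
    print(generate(model, "the"))

Scaled up by many orders of magnitude, this is still the core move:
the output is whatever continuation is statistically plausible given
the training data, whether or not it happens to be true.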

What's important to note here is that the machine's outputs are
solely based on patterns of statistical correlation. The AI doesn't
have an understanding of context, meaning, or causation. The system
doesn't 'think' or 'know', it just mimics the appearance of human
communication. That's all. Maybe the output is true, or maybe it's
false; either way the system is behaving as designed.

Automating bias
===============

When an AI confidently recommends eating a deadly-poisonous mushroom,
or summarizes text in a way that distorts its meaning--perhaps a
research paper, or maybe one day an asylum claim--the consequences
can range from bad to devastating. But the problems run deeper still:
AI systems can't help but reflect the power structures, hierarchies,
and biases present in their training data. A 2024 Stanford study
found that the AI tools being deployed in elementary schools
displayed a "shocking" degree of bias; one of the LLMs, for example,
routinely created stories in which students with names like Jamal and
Carlos would struggle with their homework, but were "saved" by a
student named Sarah.

<https://hai.stanford.edu/news/how-harmful-are-ais-biases-on-diverse-
student-populations>

As alarming as that is, at least those tools exhibit obvious bias.
Other times it might not be so easy to tell. For instance, what
happens when a system like this isn't writing a story, but is being
asked a simple yes/no question about whether or not an organization
should offer Jamal, Carlos, or Sarah a job interview? What happens to
people's monthly premiums when a US health insurance company's AI
finds a correlation between high asthma rates and home addresses in a
certain Memphis zip code? In the tradition of skull-measuring
eugenicists, AI provides a way to naturalize and reinforce existing
social hierarchies, and automates their reproduction.

<https://racismandtechnology.center/2024/04/01/
openais-gpt-sorts-resumes-with-a-racial-bias/>
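
To illustrate how this kind of proxy discrimination can emerge
without anyone writing an explicitly biased rule, here is a minimal
sketch with entirely made-up data and hypothetical postal codes. A
'model' that learns nothing but the historical interview rate per
postal code will faithfully reproduce whatever bias shaped those
past decisions.

    from collections import defaultdict

    # Synthetic past hiring decisions: (postal_code, got_interview).
    # The skew reflects past human bias, not candidate quality.
    history = [
        ("ZIP_A", 0), ("ZIP_A", 0), ("ZIP_A", 1), ("ZIP_A", 0),
        ("ZIP_B", 1), ("ZIP_B", 1), ("ZIP_B", 0), ("ZIP_B", 1),
    ]

    # Tally interviews granted and total applicants per postal code.
    totals = defaultdict(lambda: [0, 0])
    for code, interviewed in history:
        totals[code][0] += interviewed
        totals[code][1] += 1

    def score(postal_code):
        """Score a new applicant by the historical interview rate."""
        interviews, applicants = totals[postal_code]
        return interviews / applicants

    # Two otherwise identical applicants get different scores purely
    # because of where they live: the old bias is now automated.
    print(score("ZIP_A"))  # 0.25
    print(score("ZIP_B"))  # 0.75

Real-world systems bury this pattern inside millions of learned
parameters rather than a lookup table, which makes it harder to
spot, but the mechanism is the same: correlations in biased data
become automated decisions.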

This is incredibly dangerous, particularly when it comes to embedding
AI inside the public sector. Human administrators and decision-makers
will invariably have biases and prejudices of their own, of
course--but there are some important things to note about this. For
one thing, a diverse team can approach decisions from multiple
angles, helping to mitigate the effects of individual bias. An AI
system, insofar as we can even say it 'approaches' a problem, does so
from a single, culturally flattened and hegemonic perspective.
Besides, biased human beings, unlike biased computers, are aware that
they can be held accountable for their decisions, whether via formal
legal means, professional standards bodies, or social pressure.

Algorithmic systems can't feel those societal constraints, because
they don't think or feel anything at all. But the AI industry
continues to tell us that at some point, somehow, they will solve the
so-called 'AI alignment problem', at which point we can trust their
tools to make ethical, unbiased decisions. Whether it's even possible
to solve this problem is still very much an open debate among
experts, however.

<https://www.aibiasconsensus.org/>

Possible or not, we're told that in the meantime, we should always
have human beings double-checking these systems' outputs. That might
sound like a good solution, but in reality it opens a whole new can
of worms. For one thing, there's the phenomenon of 'automation
bias'--the tendency to rely on an automated system's result more than
one's own judgement--something that affects people of all levels of
skill and experience, and undercuts the notion that error and bias
can be reliably addressed by having a 'human in the loop'.

<https://pubs.rsna.org/doi/10.1148/radiol.222176>

* * *

Then there's the deskilling effect. Despite AI being touted as a way
to 'boost productivity', researchers are consistently finding that
these tools don't result in productivity gains. So why do people in
positions of power continue to push for AI adoption? The logical
answer is that they want an excuse to fire workers, and don't care
about the quality of work being done.

<https://www.project-syndicate.org/commentary/ai-productivity-boom-
forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05>

This attack on labour becomes a self-reinforcing cycle. With a
smaller team, workers get overloaded, and increasingly need to rely
on whatever tools are at their disposal, even as those tools devalue
their skills and expertise. This drives down wages, reduces
bargaining power, and opens the door for further job cuts--and likely
for privatization.

<https://www.404media.co/microsoft-study-finds-ai-makes-human-
cognition-atrophied-and-unprepared-3/>

Worse still, it seems that the Canadian federal government is
actively pursuing policy that could reinforce this abusive dynamic
further; the 2024 Fall Economic Statement included a proposal that
would, using public money, incentivize our public pension funds to
invest in AI data centres to the tune of tens of billions of
dollars.

<https://budget.canada.ca/update-miseajour/2024/report-rapport/
chap2-en.html#catalyzing-ai-infrastructure>

Suffocating the soul of the public service
==========================================

I'd happily wager that when people choose careers in the public
sector, they rarely do so out of narrow self-interest. Rather, they
choose this work because they're mission-oriented: they want the
opportunity to express care through their work by making a positive
difference in people's lives. Often the job will entail making
difficult decisions. But that's par for the course: a decision isn't
difficult if the person making it doesn't care about doing the right
thing.

And here's where we start to get to the core of it all: human
intelligence, whatever it is, definitely isn't reducible to just
logic and abstract reasoning; feeling is a part of thinking too. The
difficulty of a decision isn't merely a function of the number of
data points involved in a calculation, it's also about understanding,
through lived experience, how that decision will affect the people
involved materially, psychologically, emotionally, socially. Feeling
inner conflict or cognitive dissonance is a good thing, because it
alerts us to an opportunity: it's in these moments that we're able to
learn and grow, by working through an issue to find a resolution that
expresses our desire to do good in the world.

AI, along with the productivity logic of those pushing its adoption,
short-circuits that reflective process before it can even begin, by
providing answers at the push of a button or entry of a prompt. It
turns social relations into number-crunching operations, striking a
technocratic death blow to the heart of what it means to have a
public sector in the first place.

* * *

The dehumanizing effects of AI don't end there, however. Meredith
Whittaker, president of the Signal Foundation, has described AI as
being fundamentally "surveillance technology". This rings true here
in many ways. First off, the whole logic of using AI in government is
to render members of the public as mere collections of measurements
and data points. Meanwhile, AI also acts as a digital intermediary
between public sector workers and service recipients (or even between
public employees, whenever they generate an email or summarize a
meeting using AI), an intermediary that's at least capable of keeping
records of each interaction, if not influencing or directing it.

<https://techcrunch.com/2023/09/25/signals-meredith-whittaker-ai-is-
fundamentally-a-surveillance-technology/>

This doesn't inescapably lead to a technological totalitarianism. But
adopting these systems clearly hands a lot of power to whoever
builds, controls, and maintains them. For the most part, that means
handing power to a handful of tech oligarchs. To at least some
degree, this represents a seizure of the 'means of production' from
public sector workers, as well as a reduction in democratic oversight.

Lastly, it may come as no surprise that so far, AI systems have found
their best product-market fit in police and military applications,
where short-circuiting people's critical thinking and decision-making
processes is incredibly useful, at least for those who want to turn
people into unhesitatingly brutal and lethal instruments of authority.

<https://www.washingtonpost.com/business/interactive/2025/
police-artificial-intelligence-facial-recognition/>

<https://www.businessinsider.com/us-special-forces-using-lot-of-ai-
for-cognitive-load-2025-5>

* * *

AI systems reproduce bias, cheapen and homogenize our social
interactions, deskill us, make our jobs more precarious, eliminate
opportunities to practice care, and enable authoritarian modes of
surveillance and control. Deployed in the public sector, they
undercut workers' ability to meaningfully grapple with problems and
make ethical decisions that move our society forward. These
technologies dehumanize all of us. Collectively, we can choose to
reject them.

From: <https://thedabbler.patatas.ca/pages/
ai-is-dehumanization-technology.html>
