Subject : Re: [OT] Why governments must limit AI violations of copyright
From : atropos (at) *nospam* mac.com (BTR1701)
Newsgroups : rec.arts.tv
Date : 27. May 2025, 19:17:37
Organisation : A noiseless patient Spider
Message-ID : <1014vk0$2oc8d$1@dont-email.me>
References : 1
User-Agent : Usenapp/0.92.2/l for MacOS
On May 27, 2025 at 9:06:34 AM PDT, "Rhino" <no_offline_contact@example.com> wrote:
> Mary Spender presents a relatively brief but, I think, compelling
> argument for why governments need to reject the tech firms' claims that
> using existing works to train AIs is fair use and does not need to be
> paid for.
>
> https://www.youtube.com/watch?v=R5pFE85oAnA [8 minutes]
>
> The tech bros are wallowing in almost unimaginable wealth: they can
> definitely afford to compensate copyright holders for using their work
> as training data. Alternatively, they can let copyright holders exclude
> their works from use in training data and compensate them for what they
> have used without permission.
>
> I don't believe the tech companies have some kind of natural right to
> generate new works that are closely modelled on existing works without
> paying for their use of those works.
If you can show that the AI produces a copy of the work it was trained on, or
one substantially similar enough to be confusing to the reasonable man, then
yes, I agree.
E.g., if you ask it to generate a story about a young girl who finds herself
lost in a fantasy world and it spits out the plot to Alice in Wonderland.
But if you ask it that same question and it produces a totally different story
that isn't Alice in Wonderland in any recognizable way but it learned how to
do that from 'reading' Alice in Wonderland, then I don't see how you have a
copyright violation under existing law or even under the philosophical
framework on which existing law has been built. At that point, it's no
different from a human reading Alice in Wonderland and figuring out how to use
the elements and techniques employed by Carroll in his story to produce a
different story of his own. No one would suggest copyright violation if a
human did it, so how can it suddenly be one if a computer algorithm does it?
> The new works generated by humans are already pretty derivative in too
> many cases: we don't need AIs generating still more of the same.
Well, therein lies the rub. At least in America. We call it the Bill of Rights,
not the Bill of Needs, for a reason.
> There's a wealth of art (whether music, visual art, or literature)
> freely available in the public domain. Let them use that if they need
> large quantities of art to train their models.