Re: [OT] Why governments must limit AI violations of copyright

Subject: Re: [OT] Why governments must limit AI violations of copyright
From: atropos (at) *nospam* mac.com (BTR1701)
Newsgroups: rec.arts.tv
Date: 27 May 2025, 23:31:17
Organization: A noiseless patient Spider
Message-ID: <1015efl$2ro50$1@dont-email.me>
References: 1 2 3 4
User-Agent: Usenapp/0.92.2/l for MacOS
On May 27, 2025 at 3:18:45 PM PDT, "moviePig" <nobody@nowhere.com> wrote:

On 5/27/2025 5:43 PM, BTR1701 wrote:
 On May 27, 2025 at 2:16:14 PM PDT, "moviePig" <nobody@nowhere.com>
 wrote:
 
 On 5/27/2025 3:20 PM, Rhino wrote:
 On 2025-05-27 2:17 PM, BTR1701 wrote:
 On May 27, 2025 at 9:06:34 AM PDT, "Rhino"
 <no_offline_contact@example.com> wrote:
 
 Mary Spender presents a relatively brief but, I think,
 compelling argument for why governments need to reject the
 tech firms' claims that using existing works to train AIs is
 fair use and does not need to be paid for.
 
 https://www.youtube.com/watch?v=R5pFE85oAnA [8 minutes]
 
 The tech bros are wallowing in almost unimaginable wealth:
 they can definitely afford to compensate copyright holders
 for using their work as training data. Alternatively, they
 can let copyright holders exclude their works from use in
 training data and compensate them for what they have used
 without permission.
 
 I don't believe the tech companies have some kind of natural
 right to generate new works that are closely modelled on
 existing works without paying for their use of those works.
 
 If you can show that the AI produces a copy of the work it was
 trained on, or one so substantially similar as to be confusing
 to the reasonable man, then yes, I agree.
 
 E.g., if you ask it to generate a story about a young girl who
 finds herself lost in a fantasy world and it spits out the
 plot to Alice in Wonderland.
 
 But if you ask it that same question and it produces a totally
 different story that isn't Alice in Wonderland in any
 recognizable way but it learned how to do that from 'reading'
 Alice in Wonderland, then I don't see how you have a copyright
 violation under existing law or even under the philosophical
 framework on which existing law has been built. At that point,
 it's no different from a human reading Alice in Wonderland and
 figuring out how to use the elements and techniques employed
 by Carroll in his story to produce a different story of his
 own. No one would suggest copyright violation if a human did
 it, so how can it suddenly be one if a computer algorithm does
 it?
 
 The new works generated by humans are already pretty
 derivative in too many cases: we don't need AIs generating
 still more of the same.
 
 Well therein lies the rub. At least in America. We call it the
 Bill of Rights, not the Bill of Needs, for a reason.
 
 There's a wealth of art (whether music, visual art, or
 literature) freely available in the public domain. Let them
 use that if they need large quantities of art to train their
 models.
 
 
 
 Your points are well taken. Yes, if the AI-generated material
 isn't recognizable to someone familiar with Alice in Wonderland,
 it's hard to make a case for copyright infringement. And yes,
 even if *I* don't see a need for yet more derivative works, it's
 not illegal, even if it is annoying.
 
 The challenge is going to come with deciding if an AI-generated
 work is "too similar" to something it trained on. I expect that
 similarity, like beauty, is in the eye (or ear) of the beholder.
 Maybe a committee will have to do the deciding, and only if a
 majority of its members thinks the similarity is too close will
 the AI work be labelled a copyright infringement. Of course,
 selecting this committee will be challenging: the tech companies
 will favour people who never see similarities, even between
 identical things, while the human creators will tend to see
 similarity in everything, because it's in their financial
 interest to find it.
 
 Two ancillary thoughts:  Afaics, we're already within reach of
 a pilfering AI agent that can be dialed to a desired degree
 of "distance" from the original work it's copying.  Meanwhile,
 whenever a claim of infringement is brought, adjudicating that
 "distance" sounds like a proper and plausible task for a
 magistrate that is itself an AI.
 
 We're at that point with humans, too, and long have been.
 
An answer might lie in my second thought (restored above).  An AI that
could detect similarity between a work and its alleged copy might
provide sufficient proof of infringement.  Even though it'd almost
certainly be somewhat imprecise, that shouldn't concern any truly
original author.

Again, you'd have to come up with a coherent, legally acceptable reason why
de minimis similarity in an AI-produced work would constitute a violation but
the same similarity in a human-produced work would not.


