The rapid development of generative artificial intelligence (AI) models in recent years is transforming digital technology, with some asking whether current AI advancements represent a “fourth industrial revolution.”1 However, as we enter this new era of technological advancement, there are unanswered questions about how generative AI models are developed and what effects they could have on society. Specifically, copyright owners and creator communities have significant concerns about which materials are being ingested for training and whether AI companies will be held liable for the mass unauthorized use of copyrighted works to build their generative models.
Seeking answers and accountability, copyright owners have now brought over forty copyright infringement lawsuits against AI companies.2 These cases, which have mostly been filed over the past two years, are winding their way through various federal courts and are all leading to one pivotal question: Does the ingestion of copyrighted works for generative AI training constitute direct infringement of copyright owners’ reproduction rights, or does it qualify as fair use?3
Thus, fair use is not just a big question; it is the only question that really matters in generative AI copyright infringement litigation. AI companies and their supporters argue that copying protected works to train AI models serves a transformative purpose that tips the scales in favor of fair use, and that past fair use cases clearly support their position. However, as this policy memo will show (and as courts and the United States Copyright Office are already recognizing), the fair use cases AI companies rely upon (1) are significantly undermined by the Supreme Court’s recent Warhol v. Goldsmith decision, (2) are, regardless of Warhol v. Goldsmith, readily distinguishable and do not set a precedent that generative AI training is fair use, and (3) in fact demonstrate that, in most cases, generative AI training does not qualify as fair use.