Hundreds of Creatives Sound Alarm on AI’s “Slop” Future

▼ Summary
– Around 800 artists, writers, actors, and musicians have signed a campaign called “Stealing Isn’t Innovation,” accusing AI companies of large-scale unauthorized use of creative content.
– The campaign, led by the Human Artistry Campaign, calls for licensing agreements, enforcement, and the right for artists to opt out of having their work train AI models.
– It argues this unauthorized use creates an ecosystem of misinformation and low-quality AI content, threatening America’s AI competitiveness.
– At the federal level, there are attempts to preempt state-level AI regulation, while at the industry level, licensing deals between tech companies and rights holders are becoming more common.
– Examples include major record labels partnering with AI music startups and digital publishers establishing standards or signing individual deals to license or block their content.

A broad coalition of nearly 800 prominent artists, writers, actors, and musicians has launched a forceful new campaign, labeling the practices of major AI companies as “theft at a grand scale.” The initiative, titled “Stealing Isn’t Innovation,” features a notable roster of signatories including authors George Saunders and Jodi Picoult, actors Cate Blanchett and Scarlett Johansson, and musicians from bands like R.E.M. and The Roots, alongside Billy Corgan. They argue that the race to dominate generative AI has led corporations to exploit creative works without permission or compensation, setting a dangerous precedent for the entire information ecosystem.
Organized by the Human Artistry Campaign, a coalition that includes the RIAA, professional sports unions, and performers’ unions like SAG-AFTRA, the effort will broadcast its message through full-page advertisements in major news outlets and across social media platforms. The core demands are clear: establish fair licensing agreements, ensure robust legal enforcement of intellectual property rights, and guarantee artists the right to opt out of having their work used to train AI models. The campaign’s statement warns that the current path leads to an internet flooded with low-quality “AI slop,” rampant misinformation, and deepfakes, which could ultimately cause AI model collapse and undermine national competitiveness.
This advocacy arrives amid a complex and evolving landscape where solutions are being tested at both the industry and governmental levels. On one front, federal lawmakers are maneuvering over whether federal rules should preempt state-level AI regulation. Concurrently, a notable shift is occurring in the private sector: tech companies and rights holders, historically at odds, are increasingly negotiating licensing agreements. This approach offers a pragmatic, if interim, solution that grants AI firms access to protected content while ensuring creators are paid.
Examples of this trend are emerging across creative industries. Major record labels, for instance, are now partnering with AI music startups, offering their vast catalogs for model training and AI-assisted remixing. In publishing, while some companies have pursued litigation, many are rallying behind new technical standards that allow outlets to block their content from AI search results. Several prominent digital publishers have also entered into individual licensing deals with technology firms, permitting AI chatbots to summarize and surface their news reporting. This move toward licensed content reflects a growing consensus that collaboration, rather than pure confrontation, may define the future relationship between artificial intelligence and human creativity.
(Source: The Verge)


