Why I Changed My Mind About Getting Paid by Anthropic

A billion dollars might not stretch as far as it once did, but it still commands attention. When news broke that Anthropic agreed to a $1.5 billion settlement for authors and publishers whose books were used without permission to train its AI model Claude, it certainly captured mine. This development followed a judicial ruling that the company had committed piracy by downloading copyrighted books without permission to build its training library. Under the proposed agreement, still under judicial review, authors could receive a minimum of $3,000 per affected book. With eight books of my own and five from my spouse, that kind of compensation starts to look like serious money.
While the settlement addresses past infringement, it skirts the larger ethical question: should AI firms be allowed to train their models on copyrighted works without compensating creators? What makes this moment pivotal is the shift from abstract debates to tangible financial consequences. The core issue remains unignorable: if high-performing AI depends on book content, is it equitable for corporations to amass vast fortunes without fairly paying the authors who supply that essential input?
Personally, I’ve wrestled with this dilemma. But seeing dollars actually put on the table has clarified my perspective. I believe authors deserve to be paid. It’s a matter of basic fairness, despite opposition from influential voices, including President Donald Trump, who argues otherwise.
Let me pause here to offer full transparency. As an author myself, I have a vested interest in how this plays out. I also serve on the council of the Authors Guild, which actively advocates for writers and is currently engaged in lawsuits against OpenAI and Microsoft for using copyrighted works in AI training. (I recuse myself from votes involving litigation against tech companies due to my professional coverage of the industry.) These are my personal views.
For a long time, I was something of an outlier among my peers, genuinely conflicted about whether companies should be permitted to train AI on books they’ve legally acquired. The idea of contributing to a collective repository of human knowledge appeals to me. In a 2023 conversation, musician Grimes captured this sentiment perfectly, expressing excitement about the possibility of her work living on through AI. I understood that thrill; the desire to extend one’s influence is a powerful motivator for any creator.
However, having your work absorbed into a corporate AI system feels fundamentally different. Books represent the most valuable training data available to large language models. Their depth, coherence, and breadth offer an unparalleled education in human expression and reasoning. Unlike social media snippets or news articles, books provide sustained narratives and complex ideas that are essential for developing sophisticated AI. It’s fair to say that without books, today’s AI would be significantly less capable.
Given their indispensable role, one could argue that companies like OpenAI, Google, Meta, and Anthropic ought to pay a premium for book access. At a recent White House tech dinner, executives boasted about investments in U.S. data centers to support AI infrastructure, with figures reaching hundreds of billions of dollars. Compared to those eye-watering sums, Anthropic’s $1.5 billion settlement seems almost modest.
Legally, these companies may have a strong case. Copyright law includes a “fair use” provision that allows limited use of protected material without payment, particularly if the new work is “transformative” and doesn’t harm the market for the original. The judge in the Anthropic case has already ruled that training on legally obtained books qualifies as fair use. But applying decades-old legal standards to AI feels increasingly strained and out of touch.
What’s needed is a modern framework that reflects how technology operates today. The White House’s recent AI Action Plan didn’t provide one. In remarks about the plan, Donald Trump argued against compensating authors, claiming it would be impractical to create a fair payment system. His administration has signaled that this stance may inform official policy. Yet if AI companies can invest half a trillion dollars in hardware, surely they can devise a way to pay for the content that makes their systems worthwhile.
(Source: Wired)