
Grammarly’s AI Feature Sparks Class Action Lawsuit

Summary

– Superhuman, the company behind Grammarly, is facing a class action lawsuit over its “Expert Review” AI tool, which used the names of authors and academics like Julia Angwin without their consent.
– The lawsuit, filed in New York, alleges the company misappropriated identities for profit and seeks to stop this practice, with claimed damages exceeding $5 million for the plaintiff class.
– Superhuman has already disabled the feature, stating it “missed the mark” and will reimagine it to give experts control over their representation.
– The tool used an AI model to present critiques as if from specific writers, which frustrated many professionals who felt their work and likeness were appropriated.
– The legal argument centers on laws in New York and California that prohibit the unauthorized commercial use of a person’s name and likeness.

The company behind the popular writing assistant Grammarly is now confronting a significant legal challenge. A class action lawsuit alleges that the firm, Superhuman, improperly used the names and reputations of hundreds of authors, journalists, and academics within an AI feature without their consent. The suit, filed in federal court, argues this constitutes a clear misappropriation of identity for commercial gain, seeking to halt the practice and secure damages exceeding five million dollars for the affected class.

Julia Angwin, an award-winning investigative journalist and founder of the nonprofit newsroom The Markup, is the lead plaintiff in the case. She discovered that Grammarly’s “Expert Review” tool presented her as a virtual editor offering suggestions on user text, alongside other notable figures like author Stephen King and astrophysicist Neil deGrasse Tyson. None of these individuals provided permission for their names or professional identities to be used in this manner. The legal complaint contends that Superhuman profited from this unauthorized use of personal brands and hard-earned reputations.

This legal action arrives after Superhuman had already announced it would discontinue the controversial feature following public criticism. A company representative stated they were disabling “Expert Review” to reconsider its design, aiming to give users real control over their representation. The statement acknowledged the feature had “missed the mark” and apologized, promising a different approach moving forward. The feature, part of a suite of AI tools added last year, used a large language model to simulate critiques from various experts, living or deceased, accompanied by a disclaimer noting the lack of direct endorsement.

However, for many professionals, that disclaimer proved insufficient. Writers and journalists expressed frustration that the tool appeared to summarize and regurgitate their life’s work while invoking their likeness. Angwin’s attorney, Peter Romer-Friedman, points to established laws in New York and California that forbid using a person’s name or likeness for commercial purposes without approval. He describes the case as legally straightforward, but also frames it as a necessary response to a broader trend where professional skills and identities are appropriated without consent.

Angwin, who has written extensively on digital privacy, noted her surprise upon learning from a tech newsletter that her professional identity had been cloned for the tool. She remarked that she typically associated such “deepfake” scenarios with celebrities, not working journalists. The lawsuit itself emphasizes that these legal protections apply to everyone, regardless of fame. It seeks a court order to prevent Grammarly from continuing to trade on these names and from attributing fabricated statements and advice to individuals who never provided them.

(Source: Wired)
