Grammarly “Expert” Sues Over AI Identity Theft

Summary
– Grammarly is facing a class-action lawsuit for using real people’s identities in its “Expert Review” AI feature without their permission.
– The lawsuit, filed by journalist Julia Angwin, alleges the company violated privacy and publicity rights by using identities for commercial purposes without consent.
– Several journalists, including current Verge staff members, discovered their identities were being used by the AI tool without their knowledge.
– In response, Grammarly’s parent company, Superhuman, has disabled the feature and launched an opt-out process for affected individuals.
– The company’s CEO apologized, acknowledged the misstep, and stated they will “rethink our approach going forward.”

A journalist has filed a class-action lawsuit against Grammarly, alleging the company used her identity and the identities of other professionals to power its AI “Expert Review” feature without obtaining their consent. The complaint, filed by Julia Angwin, accuses the company of violating privacy and publicity rights by leveraging personal identities for commercial gain without permission. The legal action follows an investigation revealing that numerous journalists and public figures, including staff from The Verge, were featured as simulated experts within the AI tool.
The issue came to light when a colleague, Casey Newton, who was also identified as one of the unauthorized experts, alerted Angwin that her name was being used. Testing of the Grammarly feature this week confirmed that several current Verge employees, including editor-in-chief Nilay Patel, appeared in the AI-generated suggestions. These digital profiles were presented to users as sources of authoritative writing advice, creating the impression of a personal endorsement or direct involvement that never occurred.
In response to the growing controversy, Grammarly announced it is disabling the Expert Review feature. The company had recently set up an email inbox where writers and academics could request removal, but this opt-out process began only after the feature had launched and used people’s identities. CEO Shishir Mehrotra issued a statement acknowledging the misstep. He explained that the agent was intended to help users discover influential perspectives and connect experts with their audiences, but conceded the execution was flawed. Mehrotra apologized and said the company would reconsider its strategy moving forward.
The lawsuit underscores significant legal and ethical questions surrounding the use of personal identity in training and deploying artificial intelligence systems. As AI tools become more integrated into consumer applications, the case highlights the urgent need for clear policies on consent and compensation. The outcome could set a precedent for how companies handle the digital likenesses of individuals, balancing innovation with the fundamental right to control one’s own name and reputation.
(Source: The Verge)