Hunger Strike Demands: End AI Development Now

Summary
– Guido Reichstadter is on a hunger strike outside Anthropic’s headquarters to demand the company halt its pursuit of artificial general intelligence (AGI), which he views as an existential risk.
– He and others in the AI safety community believe AGI development is reckless, citing Anthropic CEO Dario Amodei’s own estimate of a 10-25% chance of catastrophic outcomes.
– Reichstadter’s protest has inspired similar hunger strikes outside Google DeepMind’s office in London and in India, with participants sharing concerns about AI’s dangers and calling for regulation.
– Protesters have sent letters to AI company CEOs requesting a halt to AGI development and international coordination on pausing frontier AI models, but have not yet received responses.
– AI companies like Anthropic and Google DeepMind maintain that safety and responsible governance are priorities, while employees express mixed views on risks and corporate responsibility.
On day seventeen of his hunger strike, Guido Reichstadter reports feeling reasonably well, though his movements have noticeably slowed. Since early September, he has maintained a daily vigil outside the San Francisco headquarters of Anthropic, an AI research company, holding a sign that tallies each day of his hunger strike and demanding an immediate halt to the pursuit of artificial general intelligence (AGI). Reichstadter, who stopped eating on August 31st, believes AGI represents an unacceptable existential threat to humanity.
AGI refers to AI systems that could match or exceed human cognitive abilities, a goal enthusiastically pursued by many tech leaders. Reichstadter views this ambition as profoundly dangerous and irresponsible. In a recent interview, he stated, “Trying to build human-level or superintelligent systems is the explicit aim of these frontier companies. I consider it insane. The risks are enormous, and it has to stop now.” He sees his hunger strike as one of the few methods capable of capturing the attention of those driving AI development.
He points to a 2023 statement by Anthropic CEO Dario Amodei, who estimated a 10 to 25 percent chance of “something going catastrophically wrong on the scale of human civilization.” While Amodei and others argue that AGI development is inevitable and that their role is to steward it responsibly, Reichstadter dismisses this as a self-serving myth. He insists that corporations have a moral duty to avoid creating technologies that could cause widespread harm.
“I’m just an ordinary citizen trying to fulfill my responsibility,” he explained. “I have respect for the lives and wellbeing of my fellow citizens, and I’m a father of two.” Anthropic has not publicly responded to his protest or requests for comment.
Each day, Reichstadter greets security personnel as he sets up his demonstration. He observes Anthropic employees avoiding eye contact as they pass, though he says at least one has privately expressed shared concerns about potential catastrophe. He hopes to inspire workers within AI companies to act according to their conscience, reminding them that they are developing what he calls “the most dangerous technology on Earth.”
His concerns are far from isolated. The broader AI safety community, though diverse and often divided on specifics, largely agrees that current development trajectories pose serious risks to humanity. Reichstadter first became aware of AGI’s potential during his college years but says the 2022 release of ChatGPT made the threat feel immediate. He is particularly troubled by what he sees as AI’s role in amplifying authoritarian tendencies.
“I worry about my society, my family, and their future,” he said. “AI isn’t being used ethically, and it introduces catastrophic, even existential, risks.”
In recent months, Reichstadter has escalated his efforts to draw attention to these dangers. He previously collaborated with the group “Stop AI,” which advocates for a permanent ban on superintelligent systems to prevent human extinction and mass disruption. Earlier this year, he was among those arrested after chaining shut the doors to OpenAI’s San Francisco office.
On September 2nd, he delivered a handwritten letter to Amodei via Anthropic’s security desk, later publishing it online. In it, he urges the CEO to halt development of uncontrollable technology and use his influence to stop the global AI race. “For the sake of my children,” he wrote, “I have begun a hunger strike… while I await your response.”
He hopes for a direct, human reply. “It’s one thing to abstractly consider that your work might get people killed. It’s another to have one of those people standing in front of you, asking why.”
His action has inspired similar protests. In London, two supporters began a hunger strike outside Google DeepMind’s offices, while another joined via livestream from India. Michael Trazzi participated for seven days in London before stopping on medical advice, but continues to support Denys Sheremet, now on day ten. Like Reichstadter, Trazzi fears the unchecked advancement of AI and has written to DeepMind CEO Demis Hassabis calling for a coordinated halt to superintelligence development.
Trazzi believes that without external regulation, market incentives will push AI into dangerous territory. “If AI weren’t so dangerous, I wouldn’t be so pro-regulation,” he noted. “But some technologies naturally steer in the wrong direction.”
In response, Google DeepMind’s communications director emphasized the company’s commitment to safety and responsible governance, stating that AI has the potential to “advance science and improve billions of people’s lives.” Still, Trazzi reports that his protest has sparked candid conversations with tech employees, including one from Meta who questioned why only Google was being targeted, noting, “We’re also in the race.” Another DeepMind employee reportedly said they believed AI-related extinction was more likely than not, but chose to work there because it was among the “most safety-conscious” firms.
Thus far, neither Reichstadter nor Trazzi has received a formal response from the CEOs they addressed. Both remain hopeful that their actions will lead to dialogue, accountability, and ultimately, a shift in direction.
Reichstadter summarizes the situation starkly: “We are in an uncontrolled, global race to disaster. If there’s a way out, it will require honesty, admitting we’re not in control, and asking for help.”
(Source: The Verge)