
Silicon Valley’s AI Moves Alarm Safety Experts

Summary

– Silicon Valley leaders have accused AI safety advocates of acting out of self-interest or on behalf of hidden billionaire backers, sparking online controversy.
– AI safety groups claim these accusations are intimidation tactics, citing past misinformation campaigns against safety regulations like California’s SB 1047.
– OpenAI sent subpoenas to AI safety nonprofits, alleging coordination with Elon Musk, which critics view as an attempt to silence opposition.
– David Sacks singled out Anthropic, alleging that its public warnings amount to fearmongering designed to win regulations that would burden startups; Anthropic was the only major lab to endorse California’s SB 53.
– The conflict highlights a growing tension between rapid AI development for profit and responsible AI safety measures, with safety advocates gaining momentum.

Recent statements from prominent Silicon Valley figures have ignited a heated debate over artificial intelligence safety and regulation. White House AI advisor David Sacks and OpenAI’s Chief Strategy Officer Jason Kwon publicly suggested that certain organizations advocating for AI safety may have hidden agendas, acting either out of self-interest or at the behest of wealthy backers. The comments have drawn sharp criticism from the AI safety community, which views them as tactics designed to intimidate critics and silence legitimate oversight efforts.

Groups focused on AI safety told reporters this is not the first time the tech industry has pushed back against critics. Earlier this year, some venture capital firms circulated claims that a proposed California AI safety bill, SB 1047, could lead to criminal charges against startup founders. While the Brookings Institution later characterized these claims as misrepresentations, Governor Gavin Newsom ultimately vetoed the legislation.

Regardless of intent, the recent allegations have had a chilling effect. Several leaders of nonprofit safety organizations requested anonymity when speaking to the press, citing fears of retaliation against their groups. This situation highlights a deepening conflict within the tech world between the drive to develop AI responsibly and the pressure to rapidly scale it into a ubiquitous consumer product.

On Tuesday, David Sacks posted on the social platform X, accusing the AI lab Anthropic of fearmongering to advance its own regulatory interests. He claimed the company’s public warnings about AI’s potential to cause unemployment, cyberattacks, and societal harm are part of a calculated strategy to push for regulations that would burden smaller competitors with compliance costs. Anthropic was notably the sole major AI lab to support California’s Senate Bill 53, a new law signed last month that mandates safety reporting for large AI firms. Sacks was responding to a widely shared essay by Anthropic co-founder Jack Clark, which detailed his personal apprehensions about the technology’s trajectory.

In his post, Sacks described Anthropic’s actions as a “sophisticated regulatory capture strategy.” He later added that the company has consistently positioned itself in opposition to the Trump administration, a move he implied could backfire.

Separately, OpenAI’s Jason Kwon explained his company’s decision to issue subpoenas to several AI safety nonprofits, including Encode, an organization promoting responsible AI policy. Kwon stated that after Elon Musk filed a lawsuit alleging OpenAI had strayed from its original nonprofit mission, it seemed suspicious that multiple groups simultaneously voiced opposition to OpenAI’s corporate restructuring. Encode had filed a legal brief supporting Musk’s case, and other nonprofits publicly criticized OpenAI’s changes.

“This raised transparency questions about who was funding them and whether there was any coordination,” Kwon remarked.

According to recent news reports, OpenAI sent broad subpoenas to Encode and six other critical nonprofits, demanding their communications concerning Musk and Meta’s CEO Mark Zuckerberg, two of OpenAI’s most significant detractors. OpenAI also requested Encode’s communications regarding its support for SB 53.

Internally, there appear to be growing tensions within OpenAI. One well-placed source indicated a divide is emerging between the company’s government affairs team and its research division. While OpenAI’s safety researchers regularly publish studies on AI risks, its policy unit actively lobbied against SB 53, preferring federal regulations over state-level rules.

The subpoena decision even prompted concern from within OpenAI. Joshua Achiam, the company’s head of mission alignment, posted on X, “At what is possibly a risk to my whole career I will say: this doesn’t seem great.”

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI, sees the subpoenas as a deliberate intimidation tactic. “On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” he said. As for Sacks, Steinhauser believes the motivation is concern that the AI safety movement is gaining traction and building public support for holding companies accountable.

Adding another perspective, White House senior policy advisor for AI, Sriram Krishnan, suggested that AI safety advocates are out of touch with the general public. He urged these organizations to engage more with everyday people who are using and adopting AI in their homes and workplaces.

Public opinion research sheds some light on these tensions. A recent Pew study found nearly half of Americans are more concerned than excited about AI. Another detailed study revealed that voters are primarily worried about job losses and the proliferation of deepfakes, rather than the catastrophic risks that often dominate the AI safety conversation.

Addressing these widespread public safety concerns could potentially slow the breakneck speed of AI development, a trade-off that alarms many in the investment and tech communities. With AI investment becoming a significant pillar of the U.S. economy, the fear of heavy-handed regulation is palpable.

However, after years of relatively unchecked advancement, the movement for AI safety is gathering momentum as 2026 approaches. The fact that Silicon Valley powerhouses are now actively pushing back against these groups may be the clearest indicator yet that their efforts are starting to have a real impact.

(Source: TechCrunch)

Topics

AI safety, Silicon Valley, OpenAI controversy, industry intimidation, regulatory capture, industry tensions, nonprofit organizations, California legislation, safety movement, government regulation