Universities Helped Weaken New York’s Landmark AI Safety Bill

Summary
– A coalition of tech companies and academic institutions, called the AI Alliance, spent thousands on ads opposing New York’s RAISE Act, an AI safety bill.
– The final version of the RAISE Act signed by Governor Hochul was a rewrite that removed stricter safety clauses and made it more favorable to the tech industry.
– The AI Alliance includes major companies like Meta and IBM, as well as universities such as NYU and Cornell, which were listed in the opposition campaign.
– The ad campaign argued the legislation would stifle job growth and innovation, despite the bill’s intent to mandate safety plans and transparency from AI developers.
– The AI Alliance has a history of lobbying against AI safety policies, and some of its academic members have direct partnerships or receive funding from major AI companies.
A coalition of technology firms and prominent universities funded a targeted advertising campaign opposing New York’s pioneering artificial intelligence safety legislation. According to data from Meta’s Ad Library, the effort likely cost between $17,000 and $25,000 and potentially reached more than two million people. The campaign coincided with critical negotiations that ultimately produced a substantially weakened version of the bill, which Governor Kathy Hochul signed into law.
The legislation, known as the RAISE Act (Responsible AI Safety and Education Act), was originally designed to impose safety and transparency requirements on companies developing powerful AI models. The final version signed by the governor, however, was a rewrite that removed key provisions and softened penalties, making it far more favorable to the tech industry. This outcome followed intense lobbying from the AI Alliance, a group whose members include major corporations like Meta, IBM, and Intel, as well as a roster of academic institutions such as New York University, Cornell University, and Carnegie Mellon University.
The alliance’s ad campaign, which launched in late November, carried the message that “The RAISE Act will stifle job growth,” arguing the bill would harm New York’s technology ecosystem. When contacted about their indirect involvement in lobbying against this safety legislation, most of the named universities did not respond. This silence highlights the increasingly complex relationships between academia and the AI industry. In recent years, companies like OpenAI and Anthropic have actively courted universities through research partnerships and by providing free access to their technologies for students and faculty.
While not all academic members of the AI Alliance have direct industry partnerships, several do. Northeastern University, for instance, secured access to Anthropic’s Claude AI for tens of thousands of people across its global campuses. New York University received funding from OpenAI for a journalism ethics initiative, and a Carnegie Mellon professor sits on OpenAI’s board. These financial and strategic ties create potential conflicts of interest when institutions lend their names to political efforts that align with corporate agendas.
The original RAISE Act contained a crucial safety clause requiring developers to prevent the release of any frontier model posing an “unreasonable risk of critical harm,” defined as events causing mass casualties or extreme financial damage. Governor Hochul’s signed version eliminated this core provision entirely, while also extending disclosure deadlines and reducing potential fines for violations. The AI Alliance had previously expressed “deep concern” and labeled the initial bill “unworkable,” lobbying against it and similar policies in California and at the federal level.
Although the Alliance presents itself as a nonprofit focused on collaborative and ethical AI development, its lobbying efforts mirror those of more overt political groups. Another organization, a pro-AI super PAC called Leading the Future, also funded ads targeting the bill’s co-sponsor. The involvement of universities in such politically charged campaigns raises significant questions about institutional neutrality and the influence of corporate funding on academic positions in the critical debate over AI regulation and public safety.
(Source: The Verge)