DOT’s AI Safety Rules Spark “Wildly Irresponsible” Concerns

▼ Summary
– The US Department of Transportation is considering using AI, specifically Google Gemini, to draft safety regulations for the aviation, automotive, and pipeline sectors.
– Critics and some DOT staffers are concerned because AI can generate incorrect or fabricated information, potentially leading to flawed and dangerous rules.
– The DOT’s top lawyer argues the goal is speed rather than perfection, aiming to cut rule drafting from months to within 30 days by settling for a “good enough” rule rather than a perfect or even very good one.
– While some experts suggest AI could assist as a supervised research tool, DOT staffers remain deeply skeptical about its use for drafting critical regulations.

A recent investigation has uncovered that the U.S. Department of Transportation (DOT) is actively exploring the use of artificial intelligence to draft critical safety regulations for the nation’s aviation, automotive, and pipeline sectors. This move, aimed at dramatically accelerating a traditionally slow process, has ignited significant internal debate and external concern over the potential for AI-generated errors to become codified into law. Proponents within the agency argue that the technology can efficiently handle routine bureaucratic language, but critics warn that relying on systems prone to “hallucinations” for safety-critical documents is a dangerous gamble.
Internal meeting notes from December reveal a stark internal divide. The DOT’s chief legal counsel, Gregory Zerzan, advocated for the approach, reportedly telling staff that the goal is not to produce flawless regulations but to reach a “good enough” standard quickly. Zerzan highlighted the agency’s preferred tool, Google Gemini, noting its ability to draft a rule framework in under thirty minutes, a task that typically takes human staff weeks or even months. This push for efficiency, however, has left many career staffers deeply uneasy.
Several DOT employees, granted anonymity to speak freely, expressed profound skepticism about the reliability of AI for this sensitive task. Their primary fear is that an uncorrected error or fabricated citation from the AI could slip into a final rulemaking document. Such a flaw could render the regulation legally vulnerable, lead to costly litigation, and, most alarmingly, potentially result in real-world injuries or fatalities if a defective safety standard is implemented. Experts monitoring AI in government contexts have suggested that tools like Gemini could serve as useful research assistants, but only with rigorous human oversight and complete transparency throughout the drafting process.
The tension was further illustrated in agency presentations where staff were told that much of the explanatory text, or “preamble,” in regulatory documents is essentially “word salad.” The implication was that an AI language model is well suited to generating such boilerplate. This rationale has done little to assuage veteran regulators, who argue that every clause in a safety rule, even the explanatory sections, must be precise and factually airtight. The debate ultimately centers on a fundamental question: whether the pursuit of bureaucratic efficiency justifies accepting a lower standard of accuracy in rules designed to protect public safety on roads, in the skies, and along the nation’s energy infrastructure.
(Source: Ars Technica)