Deloitte to Repay Australia for AI-Flawed Report

▼ Summary
– Deloitte Australia will partially refund the Australian government for a report containing AI-generated false quotes and references to nonexistent research.
– The $440,000 AUD report assessed the technical framework for automating welfare penalty systems and was published by the Department of Employment and Workplace Relations.
– University of Sydney law professor Lisa Burton Crawford confirmed that the report falsely attributed nonexistent research to her and demanded an explanation from Deloitte.
– Deloitte and the DEWR released an updated version of the report to correct errors in references and footnotes without initially highlighting the changes.
– The updated report revealed on page 58 that Deloitte used a generative AI tool (Azure OpenAI GPT-4o) to help map system code to business requirements.

Deloitte Australia has agreed to provide a partial refund to the Australian government following the discovery of fabricated citations and references in a taxpayer-funded report. The consulting firm’s “Targeted Compliance Framework Assurance Review,” which cost approximately $440,000 AUD, was intended to evaluate the automated penalty system used within Australia’s welfare framework. However, the document included references to non-existent academic papers and reports, raising serious questions about the quality and reliability of the work delivered.
After the report was published by the Department of Employment and Workplace Relations (DEWR), experts quickly identified multiple citations that could not be verified. Among the errors were references to research attributed to Lisa Burton Crawford, a professor at the University of Sydney Law School, who confirmed she had never produced the cited work. Professor Crawford expressed concern over the misattribution and called for Deloitte to clarify how such inaccuracies made their way into the final document.
In response, both Deloitte and the DEWR issued a revised version of the report, describing the changes as addressing “a small number of corrections to references and footnotes.” The updated document, spanning 273 pages, includes a note on page 58 acknowledging the use of a generative AI large language model, specifically Azure OpenAI GPT-4o, as part of the technical analysis process. According to the note, the AI tool was employed to help assess whether system code aligned with business and compliance requirements. This admission highlights the risks associated with relying on artificial intelligence for producing authoritative or legally sensitive documentation.
The incident has drawn attention to the broader implications of integrating AI into government and compliance-related projects. While AI tools can enhance efficiency, their tendency to generate inaccurate or entirely fictional content, often referred to as “hallucinations”, poses a significant challenge. For public sector contracts, where accuracy and accountability are paramount, such errors can undermine trust and lead to financial and reputational repercussions.
Deloitte’s decision to refund a portion of the fee reflects both an acknowledgment of the flaws in the report and a commitment to addressing the government’s concerns. Moving forward, the situation may prompt stricter guidelines on the use of generative AI in official audits and reviews, ensuring that human oversight and validation remain central to the process.
(Source: Ars Technica)