UK police used AI “hallucination” to ban football fans

▼ Summary
– West Midlands Police admitted their controversial decision to ban Maccabi Tel Aviv fans from a UK football match was based on hallucinated information from Microsoft Copilot, after weeks of denial.
– The ban was recommended by police to a safety group, citing fears of violence in Birmingham partly due to heightened tensions from a recent synagogue terror attack in Manchester.
– Police specifically justified the ban by claiming Maccabi fans had been violent at a prior match in Amsterdam, including committing assaults and throwing people into a canal.
– The decision sparked political controversy, with critics arguing it unfairly targeted Jewish fans even though Islamist terrorism was the more immediate source of the threat.
– The police narrative collapsed as their claims about the Amsterdam incident proved inconsistent and vastly inflated, both in the number of officers supposedly deployed and in the scale of the fan violence.
The decision to bar Maccabi Tel Aviv supporters from a high-profile football match in Birmingham has ignited a prolonged controversy, centering on the revelation that police relied on demonstrably false information generated by an artificial intelligence tool. After weeks of denial, the chief constable of West Midlands Police conceded that the force had used Microsoft Copilot during safety planning, and that the tool produced a fabricated account of fan violence which significantly influenced the ban.
In October 2025, Birmingham’s Safety Advisory Group convened to assess security for an upcoming Europa League match between Aston Villa and Maccabi Tel Aviv. The meeting occurred in a tense climate following a deadly terrorist attack on a Manchester synagogue earlier that month. West Midlands Police, a key member of the advisory group, strongly advocated for prohibiting away fans, arguing the fixture could incite serious disorder in the city.
To support their position, officers presented a detailed and alarming narrative. They claimed that during a recent match in Amsterdam, between 500 and 600 Maccabi Tel Aviv fans had targeted Muslim communities, committing serious assaults and even throwing members of the public into a canal. The police also stated that an extraordinary force of 5,000 officers was required to manage the ensuing unrest, revising an earlier estimate of 1,200. This testimony was pivotal in the group’s decision to recommend a ban.
The move immediately sparked a fierce political and public backlash. Many Jewish groups and political commentators argued it amounted to an unfair collective punishment of Jewish fans, when the more immediate security threat stemmed from Islamist terrorism. The match proceeded on November 6 in an empty stadium, but the debate over the justification for the ban only intensified in the following months.
The police case unraveled when journalists investigated the claims. A BBC report found no evidence to support the alleged incidents in Amsterdam. Dutch authorities confirmed there had been no major disorder involving Maccabi fans, no reports of people being thrown into water, and no deployment of 5,000 officers. The detailed account was a complete fabrication. Faced with this evidence, the police admitted they had used an AI chatbot to research historical fan behavior. The tool had "hallucinated" the entire scenario, generating convincing but entirely fictitious details that were then presented as fact in a critical safety assessment.
This incident raises profound questions about the unverified use of AI in sensitive law enforcement and public safety decisions. It demonstrates how easily algorithmic errors can be integrated into official processes, leading to real-world consequences that affect civil liberties and public trust. The episode serves as a stark warning for institutions to implement rigorous verification protocols when utilizing emerging technologies, ensuring human oversight remains the final arbiter of truth.
(Source: Ars Technica)
