
Who Polices Police AI? Perplexity’s Deal Alarms Experts

Summary

– Perplexity AI has launched a free program for public safety organizations, offering its Enterprise Pro tier to help officers analyze data and automate tasks like report writing.
– Experts warn that using AI for seemingly mundane law enforcement tasks is risky, as minor errors or hallucinations in reports can lead to serious consequences like wrongful convictions.
– Perplexity claims its technology is designed for accuracy by post-training other companies’ models to minimize hallucinations, but independent studies have found it still has significant issues.
– There is debate over responsibility: some argue police departments must verify outputs and enforce safeguards themselves, while others believe policymakers must establish binding legal requirements for AI in law enforcement.
– Perplexity’s initiative is likely the first of many, as AI developers seek to expand into law enforcement, a sector with a history of adopting new technologies like predictive policing.

The recent launch of a specialized AI program for police departments has ignited a critical debate about accountability and safety in high-stakes environments. Perplexity for Public Safety Organizations, which offers its Enterprise Pro service free for a year to qualifying agencies, aims to help officers analyze crime scene photos, summarize body camera footage, and generate reports. While presented as a tool for efficiency, this move into law enforcement underscores a pressing dilemma: who ensures these powerful but imperfect systems are used responsibly when people’s liberty is on the line?

At first glance, the applications seem routine. Automating administrative tasks like drafting reports from investigators’ notes could save valuable time. However, experts warn that these “mundane” uses are precisely where hidden dangers lie. Katie Kinsey of the Policing Project points out that such tasks form the foundational work leading to charges and indictments. An error in a summarized transcript or a hallucinated detail in a report could have devastating downstream effects, potentially contributing to wrongful convictions. The legal profession has already seen cautionary tales, with attorneys sanctioned for using AI that fabricated case precedents in court filings.

Perplexity emphasizes its focus on accuracy, stating that it post-trains existing AI models to minimize false information. Yet, like all systems built on large language models, it is not infallible. Independent evaluations have found that Perplexity and other leading chatbots can still generate responses with significant problems in accuracy or sourcing. This inherent fallibility raises a central question: in the absence of flawless AI, where does the ultimate responsibility rest for its use in policing?

Legal scholar Andrew Ferguson argues the burden falls on the police departments themselves. When constitutional rights and personal liberty are involved, the obligation is on the users to implement strong safeguards and verify all outputs. Without specific laws governing AI in law enforcement, agencies must exercise extreme caution. Conversely, Kinsey believes policymakers must step in, noting the current lack of “hard law” setting necessary requirements and standards for these tools.

This initiative is likely a harbinger of trends to come. The AI industry is under immense pressure to grow, and police departments have a history as early adopters of new technology, from predictive policing algorithms to facial recognition. The relationship between private tech companies and law enforcement is well-established, and other AI developers may soon offer similar programs to secure a stable, influential client base. This competitive rush could outpace the development of crucial oversight, leaving critical gaps in accountability.

The core issue remains unresolved. Until AI systems can be guaranteed free from harmful inaccuracies, a prospect experts doubt is even possible, their integration into sensitive fields demands rigorous scrutiny. Whether through stringent internal police protocols, comprehensive new legislation, or a combination of both, establishing clear guardrails is not just a technical necessity but a fundamental requirement for justice.

(Source: ZDNET)
