
CBP Partners with Clearview AI to Expand Facial Recognition Use

Summary

– U.S. Customs and Border Protection is spending $225,000 for a year of access to Clearview AI’s facial recognition tool, which uses billions of images scraped from the internet.
– The contract extends access to intelligence units for “tactical targeting” and “strategic counter-network analysis,” embedding it into daily intelligence work.
– The agreement lacks specifics on what photos agents will upload, whether U.S. citizens will be searched, or how long data will be retained, raising transparency concerns.
– The technology faces scrutiny from lawmakers and civil liberties groups over its routine use and expansion without clear limits or public consent.
– Federal testing shows the systems have high error rates in real-world settings like border crossings and cannot reduce false matches without also missing correct identifications.

A new contract reveals that U.S. Customs and Border Protection is expanding its use of facial recognition technology, allocating $225,000 for a year of access to Clearview AI’s powerful search tools. This agreement provides the agency’s headquarters intelligence division and National Targeting Center with the ability to compare photos against a database of over 60 billion images collected from across the internet. The stated purpose is to support “tactical targeting” and “strategic counter-network analysis,” integrating the tool into the daily workflow of analysts working to identify security threats.

The contract extends Clearview’s capabilities to units focused on intelligence and data analysis, embedding the technology into routine operations rather than limiting it to specific, isolated cases. CBP’s intelligence units utilize a variety of sources, including commercial tools and public data, to map connections between individuals for national security and immigration enforcement. The agreement includes provisions for handling sensitive biometric data and requires nondisclosure agreements for contractors, but it leaves several critical questions unanswered. The document does not specify what types of photos agents will submit, whether searches will include U.S. citizens, or how long the agency will retain search results and uploaded images.

This expansion occurs amid growing scrutiny of facial recognition use within the Department of Homeland Security. Critics, including civil liberties groups and some lawmakers, argue that such tools are becoming standard intelligence infrastructure without adequate safeguards, transparency, or public consent. Their deployment has extended far beyond border areas into large-scale operations within U.S. cities. Senator Ed Markey recently introduced legislation seeking to ban Immigration and Customs Enforcement and CBP from using facial recognition technology entirely, citing concerns over unchecked biometric surveillance.

Clearview AI’s foundational practice of scraping billions of photos from public websites without consent has consistently drawn controversy. The company’s technology appears in a DHS inventory of artificial intelligence projects, linked to a CBP pilot program initiated last October. While this pilot is associated with the Traveler Verification System used at ports of entry, CBP’s own privacy documentation states that system does not use commercial or publicly sourced data. Analysts suggest the Clearview access is more likely integrated with CBP’s Automated Targeting System, a vast platform that links biometric databases, watchlists, and enforcement records, including those from domestic operations far from any physical border.

Independent testing highlights significant limitations in the real-world application of this technology. Recent evaluations by the National Institute of Standards and Technology found that while facial recognition systems can perform accurately with high-quality, visa-style photos, their reliability drops sharply in less controlled environments. For images not originally intended for automated recognition, such as those captured at border crossings, error rates frequently exceeded 20 percent, even with the most advanced algorithms. This testing underscores a fundamental trade-off: systems cannot reduce false matches without simultaneously increasing the risk of missing the correct individual.

Consequently, agencies often use the software in an investigative mode, where it generates a ranked list of potential matches for a human analyst to review, rather than delivering a single, definitive identification. A critical flaw emerges, however, when the person being searched for is not in the database at all. If the system is configured to always return candidate results, such searches still produce a ranked list of potential “matches” for review, and in those cases every candidate is necessarily a misidentification. CBP did not provide immediate comment on how it will integrate Clearview or address these operational and ethical questions.
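The mechanics described above can be illustrated with a minimal sketch. This is not CBP's or Clearview's actual pipeline; the feature vectors, names, and threshold values below are invented for illustration. It models face templates as toy vectors, ranks a small gallery by cosine similarity, and shows both behaviors the article describes: without a threshold (investigative mode), a probe for someone absent from the gallery still returns a full candidate list, all of it wrong; with a strict threshold, those false matches are filtered out, at the cost of also rejecting borderline true matches.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical enrolled gallery: name -> face template (toy 3-d vectors).
GALLERY = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.4],
    "person_c": [0.5, 0.5, 0.1],
}

def search(probe, threshold=None, top_k=3):
    """Rank gallery entries by similarity to the probe.

    threshold=None mimics investigative mode: the top_k candidates are
    always returned, even when the true identity is not enrolled -- in
    that case every "match" is a misidentification. A threshold trades
    false matches against misses: raising it filters wrong candidates
    but also risks rejecting a genuine, lower-scoring match.
    """
    scored = sorted(
        ((name, cosine(probe, vec)) for name, vec in GALLERY.items()),
        key=lambda item: item[1],
        reverse=True,
    )
    if threshold is not None:
        scored = [(n, s) for n, s in scored if s >= threshold]
    return scored[:top_k]

# Probe for someone NOT in the gallery:
unknown_probe = [0.1, 0.2, 0.9]
candidates = search(unknown_probe)         # ranked list, all incorrect
filtered = search(unknown_probe, 0.9)      # strict threshold: empty list
```

The single `threshold` knob is the trade-off NIST's testing points to: there is no setting that suppresses false matches without also increasing the chance of missing the correct individual, which is why a human analyst reviews the ranked list rather than trusting a single top hit.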

(Source: Wired)
