ICE and CBP Face-Recognition App Fails at Identity Verification

▼ Summary
– The Mobile Fortify facial recognition app was deployed by DHS for immigration enforcement without the standard privacy reviews and after internal limits on such technology were removed.
– The app is designed to generate investigative leads, not to reliably verify identities, a known limitation that contradicts DHS’s framing of the tool.
– DHS has used the app to scan the faces of U.S. citizens, protesters, and bystanders, not just targeted individuals, with limited transparency about its methods.
– Field use illustrates the tool’s unreliability, with agents basing stops on factors like accent or ethnicity and then using inconclusive face scan results as part of probable cause.
– The technology enables the nonconsensual capture of facial biometrics far from the border, with its operation mainly revealed through lawsuits and court testimony.

A facial recognition application deployed by U.S. immigration authorities for street-level identity checks suffers from significant reliability issues and was implemented without standard privacy oversight. The Mobile Fortify app, used by Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), is not designed to reliably identify individuals in the field, according to internal records. The Department of Homeland Security launched the tool to support a broader enforcement strategy targeting undocumented immigrants, yet the technology itself cannot definitively verify a person’s identity.
Experts point out that facial recognition systems are inherently limited. These technologies are intended only to generate investigative leads, not to provide positive identification, a critical distinction often overlooked in field operations. The rapid approval of Mobile Fortify was facilitated by internal policy changes that dismantled centralized privacy reviews and removed department-wide restrictions on facial recognition use. These changes were overseen by a senior DHS privacy official with ties to conservative policy groups.
In practice, agents have used the app to scan the faces of not only targeted individuals but also U.S. citizens and bystanders at protest events. Reports indicate that agents have informed people they are being recorded and that their biometric data will be stored regardless of whether they consent. Encounters have been escalated based on perceived ethnicity or accent, with facial scanning then used as a subsequent step, illustrating a shift toward biometric capture during routine street-level enforcement with minimal transparency.
The technology enables the creation of nonconsensual facial templates, or “face prints,” of individuals far from any border, including citizens and lawful permanent residents. Details of its functionality have emerged largely through legal proceedings and agent testimonies. In one federal lawsuit, attorneys revealed the app has been used in the field over 100,000 times since its launch.
A specific case in Oregon highlights the tool’s unreliability. An agent testified that two photos of a handcuffed woman, taken after he physically repositioned her, returned two different potential identities. The first was rated “a maybe.” When the woman did not respond to the name provided, a second photo was taken, yielding another “possible” match. The agent admitted he did not know the system’s confidence level for either result, stating that evaluating a match required manually comparing facial features in the images. The agent cited the woman speaking Spanish, her association with others presumed to be noncitizens, and a “possible match” from the app as the basis for probable cause, underscoring how unverified algorithmic suggestions can influence enforcement actions.
(Source: Wired)