Wrongly imprisoned – when facial recognition decides people’s fate

An AI error cost an innocent woman nearly six months in jail for a crime she did not commit. Facial recognition identified her as a fraudster, and investigators failed to look beyond the match.

AI facial recognition technology at work on a crowd. Photo: John Lund/Getty Images

Fargo. The case of Angela Lipps, a 50-year-old grandmother from Tennessee with no criminal record, reads like something out of a dystopian film. Yet it is a stark reality. Despite clear evidence that she had never set foot in North Dakota, she was arrested on the basis of an algorithmic assessment and subsequently spent more than three months behind bars. The system had linked her identity to that of a fraudster: an unknown woman had withdrawn tens of thousands of dollars from other people’s bank accounts using a forged US military ID. Investigators uploaded surveillance images to an AI facial recognition system.

The software produced a match: Angela Lipps. Shortly afterwards, US Marshals arrived at her home and arrested her. As she was from another state, bail was denied. She remained in custody in Tennessee for months before being transferred to North Dakota. Only then was she brought before a court. Her public defender obtained bank statements and transaction records clearly showing that she had been in Tennessee at the time of the offence. Several transactions were recorded there while the fraudster was active in Fargo. By that point, she had lost her flat, her car and her dog.

AI can fabricate

The case raises a fundamental question: how reliable is AI facial recognition in the first place? It also forces a broader debate about the role such technology should play in a state governed by the rule of law.

In Germany, the use of AI facial recognition in criminal investigations remains tightly restricted. The Federal Criminal Police Office (Bundeskriminalamt, BKA) has acknowledged, however, that its use is increasing. At present, automated mass comparisons with publicly available images on the internet, as reportedly used in the Lipps case in the United States, are not permitted in Germany. The federal government has nonetheless introduced draft legislation that would allow both the BKA and the Federal Police to carry out such searches in future.

The systems are far from infallible. In the United States, the Lipps case is not an isolated one. According to the technology outlet IBTimes, it marks the eighth documented wrongful arrest linked to an error in AI facial recognition. Critics therefore argue that an algorithmic match alone must never be sufficient grounds for an arrest warrant.

Facial recognition systems rely on machine learning. They analyse biometric features, including the distances between the eyes, nose and mouth, and compare them with images stored in databases. Under ideal conditions, modern systems can achieve strikingly high accuracy rates. Studies by the US National Institute of Standards and Technology (NIST) show that some algorithms can reach error rates of below 0.1 per cent when working with high-quality images. Even such a figure would imply that one person in 1,000 could be wrongly identified – an unacceptable outcome.
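The arithmetic behind that objection can be made concrete. The short Python sketch below illustrates the base-rate problem in a one-to-many database search; only the 0.1 per cent figure comes from the NIST studies cited above, while the database size of one million entries is a hypothetical assumption for illustration.

```python
def expected_false_matches(false_match_rate: float, database_size: int) -> float:
    """Expected number of innocent people flagged when one probe image
    is compared against every entry in a database (one-to-many search)."""
    return false_match_rate * database_size

if __name__ == "__main__":
    rate = 0.001       # 0.1 per cent false-match rate (best case, per NIST)
    db = 1_000_000     # assumed database size (illustrative only)
    print(expected_false_matches(rate, db))  # → 1000.0
```

Even under best-case laboratory accuracy, a search against a million stored faces can be expected to flag roughly a thousand innocent people, which is why a match alone cannot serve as proof of anything.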

Ideal conditions exist only in the laboratory

Such numbers tell only part of the story. Optimal conditions can be created in a laboratory with considerable effort. In real-world scenarios, however, surveillance cameras often produce blurred images of poor quality, taken in unfavourable lighting or from awkward angles. Error rates rise significantly as a result. It is well documented that the systems perform less reliably for certain population groups. Numerous studies, including research by the Massachusetts Institute of Technology (MIT) Media Lab, have shown that facial recognition is more prone to errors when identifying women and people with darker skin than when identifying white men.

Law enforcement faces an additional problem: the technology is often trusted blindly. Instead of serving as a lead or a trigger for further investigation, which would be appropriate, the results are treated as evidence in themselves. Investigators rely on algorithmic matches without gathering sufficient additional proof to establish guilt beyond doubt.

The danger lies in misinterpretation. Even the most advanced systems provide probabilities, not certainties. If that distinction is ignored, an overestimated technical tool can quickly lead to serious miscarriages of justice. Not only faces but even simple objects can be misinterpreted. In one case, a pupil’s snack was identified as a weapon by an AI system, and the child was placed in handcuffs. The school’s system had failed repeatedly before.

The boundary to transhumanism

Beyond technical shortcomings, there are legal and ethical concerns. Those affected often have little opportunity to challenge algorithmic decisions or even to become aware of them in time. By the time they do, the damage has already been done. In many countries, clear legal frameworks governing the use of AI in law enforcement are still lacking. The European Union is currently attempting to introduce stricter rules through the AI Act, particularly with regard to biometric surveillance in public spaces.

Under controlled conditions, AI can be highly reliable. Yet it is widely known that such systems can hallucinate, producing confident but false results, which makes them too error-prone to be left unchecked in everyday use. In sensitive areas such as criminal justice, allowing AI to have the final say is not only risky but irresponsible. Used as a tool and applied with care, the technology can undoubtedly be helpful. What it can never do is replace the thorough investigative work of experienced police officers. The case of Angela Lipps demonstrates with unsettling clarity what happens when an algorithmic assumption becomes the basis for a measure that threatens a person’s very existence.

In the end, it was not the police who had to prove their case, but the accused who was forced to prove her innocence. That represents a grave breach of fundamental principles in any state governed by the rule of law. It also highlights the greatest risk posed by such technologies. When their capabilities are overestimated and placed above human judgement, the line into transhumanism is crossed. Machines must never be allowed to decide over human beings. That path would lead to a dystopian threat to both individual freedom and human dignity.