Facial recognition to ‘predict criminals’ sparks row over AI bias

A US university’s claim it can use facial recognition to “predict criminality” has renewed debate over racial bias in technology.

Harrisburg University researchers said their software “can predict if someone is a criminal, based solely on a picture of their face”.

The software “is intended to help law enforcement prevent crime”, it said.

But 1,700 academics have signed an open letter demanding that the research remain unpublished.

One Harrisburg research member, a former police officer, wrote: “Identifying the criminality of [a] person from their facial image will enable a significant advantage for law-enforcement agencies and other intelligence agencies to prevent crime from occurring.”

The researchers claimed their software operates “with no racial bias”.

But the organisers of the open letter, the Coalition for Critical Technology, said: “Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.

“These discredited claims continue to resurface.”

The group points to “countless studies” suggesting people belonging to some ethnic minorities are treated more harshly in the criminal justice system, distorting the data on what a criminal supposedly “looks like”.

University of Cambridge computer-science researcher Krittika D’Silva, commenting on the controversy, said: “It is irresponsible for anyone to think they can predict criminality based solely on a picture of a person’s face.

“The implications of this are that crime ‘prediction’ software can do serious harm – and it is important that researchers and policymakers take these issues seriously.

“Numerous studies have shown that machine-learning algorithms, in particular face-recognition software, have racial, gendered, and age biases,” she said, citing a 2019 study indicating facial-recognition software works less accurately on women, older people, and black or Asian people.

In the past week, one example of such a flaw went viral online, when an AI upscaler that “depixels” faces turned former US President Barack Obama white in the process.