AI Ethics Coalition Condemns Criminality Prediction Algorithms

Posted on: Jun 27, 2020

On Tuesday, a number of AI researchers, ethicists, data scientists, and social scientists released a blog post arguing that academic researchers should stop pursuing research that attempts to predict the likelihood that an individual will commit a crime based on variables like crime statistics and facial scans.

The blog post was authored by the Coalition for Critical Technology, which argued that the use of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of the efficacy of facial recognition and predictive policing algorithms find that the algorithms tend to judge minorities more harshly, which the authors of the blog post attribute to inequities in the criminal justice system. The justice system produces biased data, and algorithms trained on that data therefore propagate those biases, the Coalition for Critical Technology argues. The coalition further argues that the very notion of “criminality” is often based on race, so research on these technologies assumes a neutrality in the algorithms that does not in fact exist.

As The Verge reports, Springer, a large publisher of academic works, was planning to publish a study entitled “A Deep Neural Network Model to Predict Criminality using Image Processing”. The authors of the study claimed to have engineered a facial recognition algorithm capable of predicting the chance that an individual would commit a crime with no bias and approximately 80% accuracy. The Coalition for Critical Technology penned an open letter to Springer urging the publisher to refrain from publishing the study or future studies involving similar research.

“The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world,” argues the coalition.

As reported by MIT Technology Review, Springer stated that it would not be publishing the paper; it had been submitted for an upcoming conference and was rejected during the peer review process.

The Coalition for Critical Technology argues that the criminality prediction paper is just one instance of a larger, harmful trend in which AI engineers and researchers try to predict behavior from data composed of sensitive, socially constructed variables. The coalition also argues that much of this research rests on scientifically dubious ideas and theories that are not supported by the available evidence in biology and psychology. As an example, researchers from Princeton and Google published an article warning that algorithms claiming to predict criminality from facial features rest on discredited and dangerous pseudosciences like physiognomy. The researchers warned against letting machine learning reinvigorate long-debunked theories used to support racist systems.

The recent momentum of the Black Lives Matter movement has prompted many companies utilizing facial recognition algorithms to re-evaluate their use of these systems. Research has found that these algorithms are frequently biased, owing to non-representative and biased training data.
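To make concrete what such bias findings typically measure, here is a minimal, hypothetical Python sketch. The predictions and group labels are invented for illustration and do not come from any study mentioned in this article; it simply shows the kind of per-group comparison of accuracy and false-positive rate that fairness audits rely on.

```python
# Hypothetical illustration only: invented predictions and group labels,
# not data from any of the studies discussed in the article.
from collections import defaultdict

def rates_by_group(y_true, y_pred, groups):
    """Return per-group accuracy and false-positive rate for a binary classifier."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "neg": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(pred == truth)
        if truth == 0:                  # actual negatives
            s["neg"] += 1
            s["fp"] += int(pred == 1)   # wrongly flagged as positive
    return {
        g: {
            "accuracy": s["correct"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else 0.0,
        }
        for g, s in stats.items()
    }

# Toy data: the same model flags members of group B far more often
# than members of group A, even though the true labels are identical.
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]
groups = ["A"] * 6 + ["B"] * 6

print(rates_by_group(y_true, y_pred, groups))
# Group A: accuracy ~0.83, false-positive rate 0.25
# Group B: accuracy 0.50, false-positive rate 0.75
```

Disaggregating results this way is what allows audits to show that a single overall accuracy figure can mask sharply different error rates for different groups.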

In addition to arguing that AI researchers should forgo research on criminality prediction algorithms, the signatories of the letter also recommend that researchers re-evaluate how the success of AI models is judged. The coalition members recommend that the societal impact of an algorithm be treated as a metric of success alongside metrics like precision, recall, and accuracy. As the authors write:

“If machine learning is to bring about the ‘social good’ touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and the attendant oppressions) that make their work possible.”