The self-perpetuating social feedback loops in AI-based predictive policing could stop social evolution


A group of well over 1,000 academics and researchers calling themselves the Coalition for Critical Technology has just published a public letter to academic publisher Springer, urging it not to publish a forthcoming article.

The article claims to be able to predict if someone is a criminal based on a picture of their face, with “80 percent accuracy and with no racial bias.”

The extensive, well-referenced response letter, titled “Abolish the #TechToPrisonPipeline”, states, in part:

This upcoming publication warrants a collective response because it is emblematic of a larger body of computational research that claims to identify or predict “criminality” using biometric and/or criminal legal data.

Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.

Nevertheless, these discredited claims continue to resurface, often under the veneer of new and purportedly neutral statistical methods such as machine learning, the primary method of the publication in question.

In the past decade, government officials have embraced machine learning and artificial intelligence (AI) as a means of depoliticizing state violence and reasserting the legitimacy of the carceral state, often amid significant social upheaval.

Community organizers and Black scholars have been at the forefront of the resistance against the use of AI technologies by law enforcement, with a particular focus on facial recognition.

Yet these voices continue to be marginalized, even as industry and the academy invests significant resources in building out “fair, accountable and transparent” practices for machine learning and AI.

Using historical data perpetuates the past

To be clear, all predictive algorithms and machine learning models are based on historical data.

It is nonsense to claim that there is “no bias” in an algorithm built on data about outcomes that may themselves have resulted from bias, in this case bias in the judicial system or in the different social systems in which individuals are raised and live their lives.

Essentially, all data-based social predictive models can ONLY reinforce and perpetuate existing social systems.
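To see why, here is a minimal, purely illustrative sketch in Python. All of the numbers, neighbourhood labels and rates below are invented assumptions, not data from the article or the letter: two groups behave identically, but because the historical record of arrests is skewed, any model trained on that record reproduces the skew while still being able to claim it uses “no racial data”.

```python
# Illustrative sketch only: synthetic data, invented rates.
# A model trained on historical arrest records reproduces the bias in those
# records, even though the two groups behave identically.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two neighbourhoods with IDENTICAL underlying rates of offending.
neighbourhood = rng.integers(0, 2, size=n)      # 0 = A, 1 = B
offended = rng.random(n) < 0.05                 # same 5% everywhere

# Historical enforcement, however, was 3x heavier in neighbourhood B,
# so the *recorded* outcome (arrest) is biased even though behaviour is not.
detection_rate = np.where(neighbourhood == 1, 0.60, 0.20)
arrested = offended & (rng.random(n) < detection_rate)

# "Train" the simplest possible predictor: the historical arrest rate per
# neighbourhood. Any model fitted to these labels learns the same pattern.
predicted_risk = {g: arrested[neighbourhood == g].mean() for g in (0, 1)}

print(f"True offending rate, A vs B: "
      f"{offended[neighbourhood == 0].mean():.3f} vs "
      f"{offended[neighbourhood == 1].mean():.3f}")
print(f"Model's risk score,  A vs B: "
      f"{predicted_risk[0]:.3f} vs {predicted_risk[1]:.3f}")
# The risk scores differ roughly 3x purely because the historical record does,
# even though no racial or demographic attribute was given to the "model".
```

The point of the sketch is not the particular numbers but the structure: the bias lives in how the outcome labels were generated, so excluding sensitive attributes from the inputs does nothing to remove it.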

The false promise of objectivity generates vicious feedback loops

AI and data-based algorithms entice politicians and bureaucrats with the promise of “objective” decisions.

Yet if they are broadly implemented, they will act as vicious feedback loops, perpetuating every negative aspect of our existing systems and blocking our ability to evolve as a society.
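Here is an equally rough sketch of that loop, again with invented numbers and a deliberately simplistic “model”: patrols are sent where past arrests were recorded, new arrests are recorded where the patrols are, and the retrained model reads its own output as confirmation.

```python
# Illustrative sketch only: synthetic numbers, simplistic "model".
# Each round: allocate patrols where the model predicts risk, record arrests
# where the patrols are, then retrain the model on the new arrest record.
import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([0.05, 0.05])   # two districts, identical behaviour
arrests = np.array([20.0, 60.0])           # biased starting record: B over-policed
total_patrols = 100

for round_ in range(10):
    # "Model": predicted risk is each district's share of recorded arrests.
    risk = arrests / arrests.sum()
    # Deploy patrols in proportion to predicted risk.
    patrols = total_patrols * risk
    # New recorded arrests depend on true crime AND on patrol presence.
    new_arrests = rng.poisson(true_crime_rate * patrols * 10)
    arrests = arrests + new_arrests
    print(f"round {round_ + 1}: risk A/B = {risk[0]:.2f}/{risk[1]:.2f}, "
          f"patrols A/B = {patrols[0]:.0f}/{patrols[1]:.0f}")

# Despite identical true crime rates, the initial disparity is never corrected:
# the extra patrols in district B produce extra recorded arrests there, which
# the next round's model treats as evidence of higher risk.
```

Run it and the risk scores stay stuck near the initial 25/75 split instead of converging to the 50/50 reality, which is exactly the sense in which such a system perpetuates the past rather than correcting it.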

We are far more than the way we look

The other key point about the problematic article is that it claims to predict criminality based solely on a picture of the person’s face.

There is a welter of profound problems with this, not least that it implies people have no free will and no ability to transcend how they look or how they were born.

Becoming the society we can be

Fortunately, this letter reflects a broad coalition of AI academics and researchers who are actively engaging with the many deeply dysfunctional ways that AI can be misapplied.

It is critical that we do not build a society so grounded in historical data that it traps us in who we have been and stops us from becoming who we could be.

Image: Cole Henley