How Uncontrolled AI Can Make a Police State
Imagine a world where machines make all the rules and humans have little say. That's the danger we face if we don't keep a close eye on AI development. Let's explore how unchecked AI could lead us down this path.
Despite widespread public opposition, European governments want to weaken the rules on using AI in police work. The European Union is currently deciding how police technology will be governed under its AI Act, one of the most important pieces of artificial intelligence legislation in the world.
AI in Police and Surveillance Across Europe
In Europe, police, migration, and security authorities are increasingly turning to AI for a range of purposes. French authorities plan to deploy AI-based video surveillance during the 2024 Paris Olympics, for instance, and millions of euros in EU funds are being invested in AI-powered surveillance at Europe's borders. These systems are becoming integral parts of governments' surveillance infrastructure.
Moreover, AI is being used to target specific communities. Technologies like predictive policing, while portrayed as tools to combat crime, are built on the assumption that certain groups, especially racialised, migrant, and working-class individuals, are more likely to engage in criminal activities.
In the Netherlands, we've seen how profoundly predictive policing systems can affect Black and Brown youth. The Top-600 system, designed to preventively identify people deemed likely to commit violent crime, was found by investigations to disproportionately target individuals of Moroccan and Surinamese descent.
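Researchers often describe this dynamic as a data feedback loop: patrols are sent where past recorded crime is highest, which generates more records there, which in turn attracts more patrols. The following minimal Python sketch is entirely hypothetical; the districts, crime rates, and allocation rule are invented for illustration and do not model any real system such as Top-600. It shows how even a small initial skew in the data can become self-reinforcing:

```python
# Hypothetical sketch of the feedback loop critics describe in
# predictive policing: patrols go where past *recorded* crime is
# highest, which produces more records there, attracting more patrols.
import random

random.seed(0)

DISTRICTS = ["A", "B"]
true_crime_rate = {"A": 0.10, "B": 0.10}   # identical underlying crime
recorded_crime = {"A": 12, "B": 10}        # district A starts slightly over-policed

for _ in range(20):
    # "Predictive" allocation: all patrols go to the district with the
    # most recorded crime (a caricature of a risk-score ranking).
    target = max(DISTRICTS, key=lambda d: recorded_crime[d])
    # Patrols only observe crime where they are actually deployed.
    observed = sum(random.random() < true_crime_rate[target] for _ in range(100))
    recorded_crime[target] += observed

print(recorded_crime)
# Typical output: district A's record dwarfs district B's, even though
# the true crime rates are identical.
```

Both districts have the same underlying crime, yet by the end district A's record has grown many times over while B's stays frozen, simply because A started with two extra entries in the database. The model never "learns" it is wrong, because it only collects data where it already looks.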
In the realm of migration, there is rising investment in AI tools to predict migration patterns and assess asylum claims in unconventional ways. European Union agencies such as Frontex, which has faced accusations of facilitating the forced return of asylum-seekers from Europe, are exploring AI's potential to address the “challenge” of increasing migration. This raises concerns that these technologies will be used to predict and obstruct movement to Europe, in violation of the legally protected right to seek asylum.
The increasing use of AI in policing and migration contexts has significant implications for racial discrimination and violence. AI technologies could exacerbate structural racism by providing law enforcement with more tools, expanded legal powers, and reduced accountability.
Regulate Police AI
There is a growing movement advocating for restrictions on how the government employs technology to surveil, identify, and make decisions about its citizens.
While governments argue that the police need more tools to combat crime and maintain order, important questions arise: Who safeguards us from potential police overreach? Who determines the boundaries of mass surveillance?
And where do we draw the line, especially for migrants and racialised individuals, when increased AI usage translates to more police stops, a higher risk of arrest, and an escalating potential for violence during encounters with law enforcement and border authorities?
Safeguards on state and police authority are crucial to a secure and well-functioning democracy. No institution should be granted unchecked power or unconditional trust, particularly when it has the means to monitor our actions closely.
Additionally, with the introduction of AI technologies, we are witnessing the infiltration of the private sector into government functions, introducing profit motives into discussions about public safety.
The call to regulate police AI has resonated within the European Parliament. In June 2023, the EU's directly elected branch affirmed the necessity of legal limits on AI use by law enforcement and migration control authorities.
The European Parliament's position called for a complete ban on facial recognition in public spaces and on predictive policing, as well as an expansion of the list of ‘high-risk’ AI applications in migration control.
However, in the final stage of negotiations (the “trilogues”) on the EU AI Act, European governments are pushing to significantly weaken the restrictions on law enforcement's use of AI.
This week, 115 civil society organisations urged the EU to prioritise safety and human rights over unchecked police power. They called for legal limits on how police and migration authorities employ AI, including bans on the most harmful systems: facial recognition in public spaces, predictive policing, and AI for forecasting and preventing migration.
It is imperative that we know when and where the government uses AI to observe, evaluate, and differentiate among its citizens. The public should set the limits on how the police employ technology. Without those boundaries, unrestrained AI adoption may pave the way to a police state.
Conclusion
The growing demand for limits on the use of AI in law enforcement and surveillance is a critical conversation for the future of our societies. While governments argue for more technological tools to enhance security, it is equally essential to safeguard individual rights and maintain accountability.
The European Parliament’s recognition of the need for legal constraints on AI use in policing and migration control is a positive step towards ensuring the responsible and ethical deployment of these technologies. However, recent efforts by European governments to scale back these limitations raise concerns about unchecked police power and its potential consequences.
The call from 115 civil society organisations to prioritise safety and human rights over unchecked police authority reflects a broader sentiment: we must strike a balance between security and civil liberties. We need transparency and public involvement to determine the appropriate boundaries for AI use by law enforcement.
Without these essential safeguards, and without a clear account of when and how the state employs AI, there is a genuine risk that unchecked AI will erode individual freedoms and lead us toward a police state. It is crucial that we, as a society, actively engage in this conversation, set meaningful limits, and ensure that AI technology is used in ways that align with our democratic values and respect for human rights.