#predictivepolicing

Replied in thread

@tg9541 @mattotcha

#UKpol #UKpolitics
#Precrime #ThoughtCrime #FreeSpeech #PeacefulProtest
#CivilRights #Legal

👉A friendly warning to the #Starmer Government👈

(3/n)

... advent of #PredictivePolicing and the continuing crackdown on the right to #PeacefulProtest in the #UK, the #Starmer government seems to be heading down that road.

👉The despicable use of anti-terror force by 30 #policemen in a place of #worship in #London against six young women👈 discussing...

‘Predictive’ policing tools in France are flawed, opaque, and dangerous.

A new report from @LaQuadrature, now available in English as part of a Statewatch-coordinated project, lays out the risks in detail.

The report finds that these systems reinforce discrimination, evade accountability, and threaten fundamental rights. La Quadrature is calling for a full ban—and we support them.

📄 Read more and access the full report: statewatch.org/news/2025/may/f

Continued thread

How algorithms in #Deutschland are supposed to “foresee” crimes #PredictivePolicing

"In dem Bericht „Automating Injustice“ werden ausgewählte Systeme untersucht, die in Deutschland von der Polizei, Strafverfolgungsbehörden und Gefängnissen entwickelt oder eingesetzt werden. Außerdem werden öffentlich zugängliche Informationen über solche Praktiken analysiert, um zu erklären, wie die Systeme funktionieren, welche Daten sie verwenden, weshalb sie zu einer stärkeren Diskriminierung führen können und generell eine Gefahr für die Grundrechte sind......."

algorithmwatch.org/de/predicti via @algorithmwatch

AlgorithmWatch · Automated policing: How algorithms in Germany are supposed to “foresee” crimes. The police, law enforcement agencies and prisons in Germany are increasingly trying to digitally “predict” and “prevent” crimes. The report “Automating Injustice” gives an overview of such algorithmic systems being developed and deployed in Germany.

"Alexander, more than midway through a 20-year prison sentence on drug charges, was making preparations for what he hoped would be his new life. His daughter, with whom he had only recently become acquainted, had even made up a room for him in her New Orleans home.

Then, two months before the hearing date, prison officials sent Alexander a letter informing him he was no longer eligible for parole.

A computerized scoring system adopted by the state Department of Public Safety and Corrections had deemed the nearly blind 70-year-old, who uses a wheelchair, a moderate risk of reoffending, should he be released. And under a new law, that meant he and thousands of other prisoners with moderate or high risk ratings cannot plead their cases before the board. According to the department of corrections, about 13,000 people — nearly half the state’s prison population — have such risk ratings, although not all of them are eligible for parole.

Alexander said he felt “betrayed” upon learning his hearing had been canceled. “People in jail have … lost hope in being able to do anything to reduce their time,” he said.

The law that changed Alexander’s prospects is part of a series of legislation passed by Louisiana Republicans last year reflecting Gov. Jeff Landry’s tough-on-crime agenda to make it more difficult for prisoners to be released."

propublica.org/article/tiger-a

ProPublica · An Algorithm Deemed This Nearly Blind 70-Year-Old Prisoner a “Moderate Risk.” Now He’s No Longer Eligible for Parole.

"The UK government is developing a “murder prediction” programme which it hopes can use personal data of those known to the authorities to identify the people most likely to become killers.

Researchers are alleged to be using algorithms to analyse the information of thousands of people, including victims of crime, as they try to identify those at greatest risk of committing serious violent offences.

The scheme was originally called the “homicide prediction project”, but its name has been changed to “sharing data to improve risk assessment”. The Ministry of Justice hopes the project will help boost public safety but campaigners have called it “chilling and dystopian”."

theguardian.com/uk-news/2025/a

The Guardian · UK creating ‘murder prediction’ tool to identify people most likely to kill · By Vikram Dodd

Very proud to have a chapter in this handbook! Many thanks to Nathalie Smuha for the invitation!

One of the pertinent questions I ask in the conclusion is: should the money invested in predictive policing applications not be invested instead in tackling the causes of crime and in problem-oriented responses, such as mentor programs, youth sports programs and community policing, which may be more effective ways to prevent crime? #AI #Policing #Predictivepolicing cambridge.org/core/books/cambr

Cambridge Core · Legal, Ethical, and Social Issues of AI and Law Enforcement in Europe (Chapter 18), in The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, February 2025

Perils of predictive policing

Amnesty publishes a report warning of the perils of predictive policing

February 2025

Many TV detective series have technology at their core as our heroes vigorously pursue the wrongdoers. CCTV footage is scrutinised for the criminals' movements, DNA evidence is obtained and, of course, fingerprints are taken. The storylines of countless detective series feature forensic evidence as a key component of police detection. These stories are reassuring, showing law enforcement officers using every technique, scientific and technological, to keep us all safe and lock up the bad guys. Surely, then, using science and algorithms to enable police forces to predict crime must be a good idea?

It is not. The Amnesty report, and other research, explain the problems and the risks in detail. One of the persistent biases in the justice system is racism, and it is worth reading The Science of Racism by Keon West (Picador, 2025) on this point. The author takes the reader through copious peer-reviewed research, conducted over many years in different countries, explaining the extent of racism. Examples include the many CV studies (US: résumé) in which identical CVs sent out under different names indicating the candidates' ethnicity produce markedly different results. There are similar examples from the worlds of medicine and academia. Racism is endemic and persists. As Keon West acknowledges, a similar book could be written about how women are treated differently.

The Amnesty report notes that Black people are twice as likely to be arrested, three times as likely to be subjected to force and four times as likely to be subjected to stop and search as white people. With such bias already in place, the risk is that predictive policing simply perpetuates existing prejudice. The concern partly centres on skin colour, where people live and socio-economic background all being used as predictive inputs.
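To see why "following the data" can lock bias in, consider a deliberately simplified simulation (all numbers hypothetical, not drawn from the Amnesty report): two areas with identical underlying offence rates, one of which starts out more heavily policed. If each year's patrols are allocated in proportion to the previous year's recorded arrests, the recorded figures track policing intensity rather than actual crime, and the initial disparity never corrects itself.

```python
import random

random.seed(0)

TRUE_RATE = 0.05          # identical underlying offence rate in both areas
TOTAL_PATROLS = 100
STOPS_PER_PATROL = 50
patrols = {"area_a": 60, "area_b": 40}   # area_a starts out over-policed

for year in range(10):
    # Recorded arrests scale with how heavily an area is policed,
    # not only with how much crime actually occurs there.
    arrests = {
        area: sum(random.random() < TRUE_RATE
                  for _ in range(n * STOPS_PER_PATROL))
        for area, n in patrols.items()
    }
    # The "predictive" step: next year's patrols follow this year's arrests.
    total = sum(arrests.values()) or 1
    patrols = {area: round(TOTAL_PATROLS * a / total)
               for area, a in arrests.items()}
    print(f"year {year}: arrests={arrests} -> patrols={patrols}")
```

Even though both areas are identical by construction, the model keeps directing a larger share of patrols to area_a, and those extra patrols keep generating the arrest statistics that justify them. Real deployments are far more complex, but this self-confirming loop is the dynamic the report describes.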

People have a deep faith in technology. On a recent Any Answers? programme (BBC Radio 4) debating the death penalty and the problem of mistakes, several callers showed a touching faith in DNA in particular, implying that mistakes cannot happen. People are mesmerised by the white-suited forensic officers on television, who convey a sense of science and certainty. Technology, however, is only as good as the human systems that use it. There have been many wrongful arrests and prison sentences of innocent people despite DNA, fingerprints, CCTV and all the rest. Mistakes are made. The worry is that predictive policing could amplify discrimination.

People who are profiled have no way of knowing that they have been. Details of the systems the police and others are using need to be published, yet the report notes that the police are reluctant to do so. What is the legal basis for effectively labelling people by their skin colour, where they live and their socio-economic status?

The police are keen on the idea, and around 45 forces use such tools. The evidence for their effectiveness is doubtful. The risks are considerable.


Amnesty's new report shows that the police are supercharging racism through predictive policing.

At least 33 UK police forces have used prediction or profiling tools.

“These systems are developed and operated using data from policing and the criminal legal system. That data reflects the structural and institutional racism and discrimination in policing and the criminal legal system.”

#policing #police #precrime #predictivepolicing #codedbias #ukpolitics #ukpol

theguardian.com/uk-news/2025/f

The Guardian · UK use of predictive policing is racist and should be banned, says Amnesty · By Vikram Dodd

"The main goal of this chapter is to introduce one type of AI used for law enforcement, namely predictive policing, and to discuss the main legal, ethical, and social concerns this raises. In the last two decades, police forces in Europe and in North America have increasingly invested in predictive policing applications. Two types of predictive policing will be discussed: predictive mapping and predictive identification. After discussing these two practices and what is known about their effectiveness, I discuss the legal, ethical, and social issues they raise, covering aspects relating to their efficacy, governance, and organizational use, as well as the impact they have on citizens and society."

cambridge.org/core/books/cambr

Cambridge Core · Legal, Ethical, and Social Issues of AI and Law Enforcement in Europe (Chapter 18), in The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, February 2025
#EU #Europe #AI

"For years, the Pasco Sheriff ran an unconstitutional program, harassing kids and their parents because a glorified Excel spreadsheet predicted they would commit future crimes,” said IJ Senior Attorney Rob Johnson, “Today the Sheriff acknowledged that dystopian program violated the #Constitution and agreed never to bring it back.” #predictivepolicing ij.org/press-release/case-clos

"For years, the Pasco Sheriff ran an unconstitutional program, harassing kids and their parents because a glorified Excel spreadsheet predicted they would commit future crimes,” said IJ Senior Attorney Rob Johnson, “Today the Sheriff acknowledged that dystopian program violated the #Constitution and agreed never to bring it back.” #predictivepolicing ij.org/press-release/case-clos