
The problems with artificial intelligence predictive policing

On October 14th, from 6-8 p.m., the Peel Regional Police held the second of three community consultations on the use of artificial intelligence (AI) in policing. The topic of the second session was “predictive policing and community perspectives on machine learning.” The Peel Regional Police have not yet shared specifics on how they hope to use AI.

“It’s going to be increasingly adopted in a multitude of sectors, including policing. Key is ensuring that we do it right, or do it as right as we can,” said Deputy Chief Anthony Odoardi during the first public consultation. He also stated during both sessions that the consultations are meant to provide information, not to argue for or against the use of the technology.

 Both AI and predictive policing are complicated terms with baggage attached. Predictive policing is, essentially, processing and analyzing large amounts of data to determine where crime is most likely to occur and who will either commit or be affected by it. 

One of the session panelists was Dr. Kanika Samuels-Wortley, an expert in race and racism in the criminal justice system. She stated that a nationwide survey about AI in policing found that predictive systems were Canadians’ greatest concern.

“Without confidence in the transparency and ethics, even well-intentioned innovations risk being met with resistance or doubt,” Samuels-Wortley said. 

As a whole, the problems with predictive policing fall into three main categories: algorithmic bias, the “black box” problem and a lack of regulation. AI looks through past data and statistics and uses them to reach a decision, often without considering the other factors behind those numbers.

Data is not neutral. Santa Cruz, Chicago and Los Angeles have all adopted and then dropped predictive policing programs because of issues with discrimination and transparency.

“If algorithms are fed historical data that is replete with individual or institutional biases, these predictive models risk repeating and reinforcing and further entrenching those biases,” said Odoardi. “We risk hardwiring discrimination, quite literally, into future policing practices.”

Because certain communities have been over-policed or discriminated against in the past or present, they will be overrepresented in the data, which then feeds back into the system and reinforces the pattern. When combined with the lingering idea that AI and data are more reliable than humans, these tools can be used to brush off, intentionally or unintentionally, accusations of racism.

 “Artificial intelligence predictive policing does not increase safety, and rather has far-reaching implications, as what it does is replicate, reinforce and exacerbate the ongoing systemic harms of policing, including systemic discriminatory practices such as racial profiling,” says Andrea Vásquez Jiménez, the director and principal consultant of Policing-Free Schools. “The reality is that the institution of policing, including when it comes to artificial intelligence predictive policing, can not community consult, reform, or guardrail their way out of these systemic harms.” 

Katrina Ingram, founder of Ethically Aligned AI, described the issue of algorithmic bias through the image of someone looking for their keys in the dark, searching the area lit by the streetlight even though that isn’t where the keys were dropped. It isn’t a direct one-to-one comparison, but the idea holds: one area is looked at more closely, more is found there, and those findings justify looking at it even more closely, creating a feedback loop. These concerns were raised in the second session.
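The dynamic Ingram describes can be made concrete with a toy simulation. The sketch below is purely illustrative and is not based on any tool Peel is considering: the neighbourhood names, incident counts and detection rates are invented. It only shows how a system that keeps sending patrols to the area with the largest recorded history can keep inflating that area’s record even when both areas generate the same amount of real incidents.

```python
import random

# A minimal, invented sketch of the feedback loop described above: two
# hypothetical neighbourhoods with the SAME true number of incidents per
# period, where patrols keep being sent to whichever area has the larger
# recorded history.
TRUE_INCIDENTS = 50          # identical real incidents per period in both areas
DETECTION_PER_PATROL = 0.02  # chance that one patrol unit records a given incident
TOTAL_PATROLS = 40

# Historical records: Area A is over-represented, e.g. from past over-policing.
recorded = {"Area A": 120, "Area B": 100}

random.seed(1)
for period in range(1, 11):
    # "Predictive" step: flag the area with more past records as the hotspot
    # and send it the bulk of the patrols.
    hotspot = max(recorded, key=recorded.get)
    for area in recorded:
        patrols = 0.8 * TOTAL_PATROLS if area == hotspot else 0.2 * TOTAL_PATROLS
        detection_prob = min(1.0, patrols * DETECTION_PER_PATROL)
        # More patrols -> more of that area's incidents enter the database,
        # even though both areas generate the same number of real incidents.
        recorded[area] += sum(
            random.random() < detection_prob for _ in range(TRUE_INCIDENTS)
        )
    share = recorded["Area A"] / sum(recorded.values())
    print(f"period {period}: Area A holds {share:.0%} of all recorded incidents")
```

In this toy setup, Area A’s share of recorded incidents climbs period after period even though nothing about the underlying behaviour of either area ever changes; the growing record is produced entirely by where the system chooses to look.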

“A key guardrail is going to be audits for bias and discrimination, testing and monitoring to work to identify and mitigate biases where those may exist,” said Odoardi. “Accountability must not vanish into the algorithm.” 

However, cleaning up the data isn’t a straightforward task. This is where both the creator of the AI tool and the issue of the black box become important. Odoardi stated that “the manufacturers of such technologies want to protect their own corporate interests, and so we oftentimes don’t get to see how these algorithms and technologies actually function.”

Put simply, the black box problem means that the people using the AI don’t know why or how it does what it does, and that there is no way for them, or the general public, to check its workings.

Without being able to reliably understand how the recommendations are made, and how they’re acted on, the public may further lose trust in the police. This is particularly relevant in Peel, where the Peel Regional Police do not have a good track record in their interactions with racialized communities. 

The force’s 2024 Use of Force report was recently released. It states that Black individuals were on the receiving end of 28 per cent of use-of-force incidents in 2024, roughly three times their share of the population. This is down from 34 per cent in 2023.

In February 2024, after investigating at the request of the Peel Police Services Board, the Canadian Race Relations Foundation released a report stating that “racialized communities in Peel continue to face human rights issues at the hands of Peel Regional Police.” The report addresses both the disproportionate use of force and the serious issues Peel has with racial profiling. On multiple occasions, the Human Rights Tribunal of Ontario has found that race was a factor in interactions with the Peel Regional Police that caused harm.

This includes an incident in 2016, when police handcuffed the wrists and ankles of a six-year-old Black girl and held her down on her stomach for 28 minutes. Combining a police service and a tool that are both known to have issues with discrimination risks losing whatever trust has been built and may worsen the problem in new ways, particularly when there are so few regulations currently in place governing artificial intelligence tools.

“What this means locally, is that despite the amount of community consultations in attempts to showcase the misleading narrative that ‘community’s voice is at the center of decision-making’ by local police boards as a means to rationalize the use of artificial intelligence technologies in policing,” says Jiménez.  “Systemic harms will continue to remain intact, which is compounded by the reality that police are not bound to provincial artificial intelligence accountability measures.” 

Jiménez is referring to Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024. Bill 194 placed requirements on public sector institutions using AI regarding privacy, use of personal information and cybersecurity. However, it makes no mention of human rights, transparency of decision-making or safety. 

The bill applies to public sector entities, but the way it defines the term means that it does not apply to tribunals, courts, or police services. Both the Ontario Human Rights Commission and the Law Commission of Ontario have raised concerns about gaps in the legislation, including specifically as it applies to police use and predictive policing.

While community consultations are important, they mean nothing if the recommendations and voices of the community aren’t truly put into practice.

“I think that we have to really pay attention to lived experience. It doesn’t always get captured well in an algorithmic system. They tend to really want to quantify everything,” says Ingram. “People’s lives don’t neatly fit into one singular data point.” 

Artificial intelligence-driven predictive policing is unregulated, often distrusted by Canadians, runs on hidden algorithms, is known to have bias and discrimination issues, and would be used within a system that, while working to address them, has severe institutional flaws. These risks have been clearly stated at both community consultations, which have also signalled that the people of Peel are being heard. However, there is still no clarity on how Peel hopes to implement the technology or how the safeguards described as necessary will actually be put in place.
