Cops Using AI That Automatically Generates Police Reports From Body Cam Footage
Taser maker and police contractor Axon has announced a new product called “Draft One,” an AI that can generate police reports from body cam audio.
As Forbes reports, it’s a brazen and worrying use of the tech that, in the hands of police departments, could entrench institutional ills like racial bias. That’s to say nothing of AI models’ well-documented tendency to “hallucinate” facts, which could easily result in chaos and baseless accusations.
“It’s kind of a nightmare,” Electronic Frontier Foundation surveillance technologies investigations director Dave Maass told Forbes. “Police, who aren’t specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system.”
“What could go wrong?” he pondered.
Axon claims its new AI, which is based on OpenAI’s GPT-4 large language model, can help cops spend less time writing up reports.
“If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25 percent of an officer’s time to be back out policing,” Axon CEO Rick Smith told Forbes.
But given how prone OpenAI’s models are to “hallucinating” facts, summarizing information incorrectly, and reproducing the racial biases in their training data, it’s an eyebrow-raising application.
Axon, however, maintains that it’s adjusted the AI model to ensure it can’t go off the rails.
“The simplest way to think about it is that we have turned off the creativity,” principal AI product manager Noah Spitzer-Williams told Forbes. “That dramatically reduces the number of hallucinations and mistakes… Everything that it’s produced is just based on that transcript and that transcript alone.”
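Axon hasn’t published its actual configuration, but in large language model terms, “turning off the creativity” typically means setting the sampling temperature to zero, so the model always picks its single most probable next word instead of sampling from a range of options. Below is a minimal sketch of what that might look like against OpenAI’s API; the model name, prompt, and placeholder transcript are illustrative assumptions, not Axon’s implementation.

```python
# Hypothetical sketch: drafting a report from a body cam transcript
# with "creativity turned off" (temperature = 0).
# This is NOT Axon's code; the prompt and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder standing in for a body cam audio transcript
transcript = "Officer: Sir, please step out of the vehicle..."

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # greedy decoding: always take the most likely token,
                    # which reduces (but does not eliminate) hallucinations
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a police incident report using only facts stated "
                "in the transcript. Do not infer or invent details."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

Even at temperature zero, a model can still misattribute speakers or paraphrase events inaccurately, which is why “reduces” is doing a lot of work in that claim.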
Emphasis on “reduces” — should we really accept the existence of a tool that’s still capable of making “mistakes” when it comes to policing?
As for racial bias, Spitzer-Williams told Forbes that testing showed no “statistically significant differences across races.”
The company is also hoping to ensure, through detailed documentation requirements, that police agencies have the AI-generated reports reviewed by human officers.
Even with those assurances, however, Axon’s collaboration with OpenAI isn’t sitting well with everybody. News of the new product was met with outrage on social media.
“We are just at the beginning of the tech takeover of policing, and we will all suffer for it eventually, but folks of color will suffer first and worst — like always,” artist Reva Russell English tweeted.
“This is going to seriously mess up people’s lives — AI is notoriously error-prone and police reports are official records,” another user wrote. “Can we please get some oversight of these police tech corporations??”
And that’s not to mention Axon’s troubling reputation. In 2022, nine of the 12 members of the company’s ethics board resigned after Axon announced, and promptly walked back, a plan to have Taser-wielding drones patrol US schools.
Former employees also told Reuters last year that they were pressured into getting company tattoos, highlighting a bizarre and troubling company culture.
In short, there are numerous reasons to be skeptical of Axon’s latest AI endeavor. Given the stakes involved, relying on unproven and highly untrustworthy tech should give anybody pause — especially when it comes to the already ethically murky police tech industry.
More on Axon: Man Bursts Into Flames After Being Tased by Police