AI chatbots are writing police reports, raising concerns about accuracy and bias
Police departments in the United States are exploring the use of AI chatbots to generate incident reports, according to ABC News. The technology is designed to save officers time, but it is raising questions about reliability and whether AI-drafted reports can be used in legal proceedings.
The Oklahoma City Police Department is testing “Draft One,” an AI system developed by Axon based on ChatGPT technology. The chatbot can create a report within seconds using audio recordings from body cameras. Police officer Matt Gilmore was impressed by the results, stating, “It was a better report than I could have ever written, and it was 100% accurate. It flowed better.”
While advocates praise the potential time savings, critics express concerns about possible errors and biases in the AI. Local activist Aurelius Francisco warns that the fact that the technology comes from the same company that supplies Tasers to the department is “alarming enough.” He fears that automation will “ease the police’s ability to harass, surveil and inflict violence on community members.”
No nationwide guidelines for AI use yet
Currently, Oklahoma City is only using the technology for minor incidents without arrests. However, in other cities like Lafayette, Indiana, its use is less restricted. Police expert Andrew Ferguson urges caution, stating, “I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing.” He emphasizes the importance of accurate police reports for legal decisions.
Axon CEO Rick Smith sees great potential in the technology, saying that officers “become police officers because they want to do police work, and spending half their day doing data entry is just a tedious part of the job that they hate.” According to Smith, Draft One has drawn the most positive reaction of any Axon product. However, he notes that officers would still be responsible for the content of their reports.