
"Many things can go wrong when genAI is used to write narrative police reports. First, because the product relies on body-worn camera audio, there's a big chance the AI draft will miss context like sarcasm, culturally specific or contextual vocabulary and slang, and languages other than English. While police are expected to edit the AI's version of events to make up for these flaws, many officers will defer to the AI."
"Police are also supposed to make an independent decision before arresting a person who was identified by face recognition, and police mess that up all the time. The prosecutor of King County, Washington, has forbidden local officers from using Draft One. Then, of course, there's the matter of dishonesty. Many public defenders and criminal justice practitioners have voiced concerns about what this technology would do to cross-examination."
California passed S.B. 524 to begin regulating police reports generated by AI, and the bill awaits the governor's signature. Vendors have heavily invested in selling generative-AI tools to police departments, with Axon's Draft One already bundled with other widely used products. AI-generated drafts can misinterpret audio, miss sarcasm, slang, cultural context, and non-English language, and officers may defer to imperfect AI outputs rather than fully edit them. Face recognition errors compound risks. Prosecutors and public defenders have raised concerns, and some jurisdictions have prohibited use of specific AI drafting tools.
Read at Electronic Frontier Foundation