By Erica C. Barnett
A recent complaint alleging that the Seattle Police Department used generative AI without attribution, in violation of the city’s AI policy, has been referred by the Office of Police Accountability as a supervisor action—“a minor policy violation or performance issue that is best addressed through training, communication, or coaching by the employee’s supervisor.”
The complaint, which is anonymous, alleged that a number of public statements from SPD—including an August blog post about recent shootings, an April statement from SPD Chief Shon Barnes about a new “Immediate Violent Crime Prevention & Enforcement Plan,” and a blog post about Barnes’ confirmation in July—were created with a generative AI tool such as ChatGPT.
According to the widely used GPTZero AI detector, the August blog post is likely “100 percent AI”-generated, as is the April statement from Barnes; the July blog post appeared to be a mix of AI and human inputs, according to GPTZero, with 29 of its 42 sentences “likely AI generated.” ZeroGPT, another AI detector, found similar results, except that it was more confident that most of the July post was AI-generated.
As a baseline, I checked PubliCola’s last several posts using both AI detectors; both found them to be 100 percent human-generated.
Since 2023, the city has had a policy on generative AI that requires city departments to label AI-generated text. “If text generated by an AI system is used substantively in a final product, attribution to the relevant AI system is required,” the policy says. According to IT Department spokeswoman Megan Erb, city departments are supposed to “determine their standard for substantive use in line with the AI policy principles and relevant intellectual property laws.”
None of SPD’s communications have been labeled to indicate they were produced with AI.
In April, DivestSPD and other outlets reported that OPA recommended SPD come up with its own AI policy after discovering that a sergeant was using ChatGPT to generate reports. OPA said it could not comment on the complaint alleging AI use by the communications team, and SPD did not respond to questions about that recommendation. Currently, SPD does not have its own AI policy.
Last week, Mayor Bruce Harrell and the city’s IT Department director Rob Lloyd announced a new citywide AI policy aimed chiefly at allowing AI pilots to help automate city functions like permitting (a prospect that raises unrelated, but serious, questions about the human workers now doing many jobs that the city may eventually replace with AI). When it comes specifically to using generative AI to produce text-based documents, however, the new policy is identical to the old one.
Lloyd said there “aren’t any penalties, per se,” for departments that misuse AI tools, “but you do have to go through a rigorous process.”
City departments are required to get permission to use AI systems, including free software such as ChatGPT that poses potential privacy risks. Erb told PubliCola that “SPD was authorized to use specific generative AI applications under City policy following a standard security and privacy review.” (We’ve followed up for more details on which applications SPD is authorized to use.)
The city’s generative AI policy does not set specific thresholds for what constitutes “substantive” use of AI-generated text, leaving the term open to interpretation. According to a spokesperson for SPD, the department “has not used generative AI in any substantive way as part of its communications.”
However, the spokesperson continued, “We are testing use cases, always with a human in the loop. To the limited extent it has been tried, we have explored using it to improve the clarity of existing writing for the public, find ways to get closer to presenting information in plain language, and brainstorm ideas. It is not used as a primary author of content.”
SPD’s legal counsel, Becca Boatright, said that “tools that use AI for grammar, suggested wording changes, suggested brevity/clarity, etc. are not considered ‘generative’ AI for purposes of this policy.”
“Technology is always evolving, and like laptops, social media, and spellchecking tools, AI is another tool in our toolbox to evolve communications, especially given staffing levels and our commitment to share information that educates residents,” the SPD spokesperson said. “It can help do tasks for experienced individuals, allowing them to dedicate more of their time to other responsibilities that align with SPD’s mission and values.”
Because the OPA complaint has been referred as a “supervisor action,” it’s likely that SPD’s Chief Communications Officer Barbara DeLollis will decide whether and how to respond to the issues it raises about the use of AI by her own office. SPD did not respond to PubliCola’s question about whether the department will take any action to address the issues raised in the complaint.
AI detectors like GPTZero are not infallible. They use large datasets, including both AI- and human-generated text, to analyze patterns that indicate the likelihood that a text was AI-generated. Signs that a document was generated by, or with the help of, AI include buzzwords or repetitive phrases, uniform sentence structure and length, predictable formatting (such as bullet-pointed lists and frequent use of em-dashes), frequent use of passive voice, and an excessively formal or robotic tone.
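Real detectors like GPTZero train statistical models on large corpora of AI- and human-written text; purely as an illustration (this is not GPTZero’s actual method), the short Python sketch below scores a passage on three of the surface signals listed above. The buzzword list and the features chosen are hypothetical, and the script reports raw measurements rather than a verdict.

    # Illustrative only: crude surface signals, NOT how commercial detectors work.
    import re
    import statistics

    # Hypothetical stock-phrase vocabulary; real detectors learn patterns from data.
    BUZZWORDS = {"collaboration", "sustainable", "stakeholders", "robust", "leverage"}

    def ai_style_signals(text: str) -> dict:
        # Split into sentences at terminal punctuation.
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
        lengths = [len(s.split()) for s in sentences]
        words = re.findall(r"[a-z']+", text.lower())
        return {
            # Low variation in sentence length is one pattern detectors flag.
            "sentence_length_stdev": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
            # Em-dashes per sentence (the \u2014 character).
            "em_dashes_per_sentence": text.count("\u2014") / max(len(sentences), 1),
            # Share of words drawn from the (hypothetical) buzzword list.
            "buzzword_rate": sum(w in BUZZWORDS for w in words) / max(len(words), 1),
        }

    print(ai_style_signals(
        "Public safety is not just about enforcement\u2014it's about collaboration. "
        "I appreciate our ongoing partnerships and look forward to a safer Seattle."
    ))

None of these measurements is conclusive on its own, which is one reason detector verdicts should be read as probabilities rather than proof.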
Here, for example, is the conclusion of the statement from Barnes that GPTZero determined was 100 percent AI-generated, which featured a bullet-pointed list: “Public safety is not just about enforcement—it’s about collaboration. The support of our city officials, and our community is vital in ensuring we create long-term, sustainable solutions. I appreciate our ongoing partnerships and look forward to working together to build a safer Seattle.”
And here are the first two paragraphs of the August post about gun violence, which the two AI detectors also suggested was completely AI-generated:
“Over the past four days, the Seattle community tragically experienced three separate incidents of gun violence, resulting in the loss of lives. On Thursday, we were confronted with a targeted homicide occurring in front of a place of worship. While the motive for this premeditated act is still under investigation, we recognize the profound impact it has had on those who witnessed this traumatic event, as well as the broader community.

In the early morning hours of Sunday, two additional homicides occurred. The first stemmed from an unauthorized and unregulated gathering, which culminated in the loss of another community member. Shortly thereafter, a third homicide was reported, involving an individual discovered deceased in a parking lot, potentially linked to a vehicle collision or altercation.”
Of course, humans can also write robotically or use AI-style formatting.
To find out more about SPD’s use of AI, PubliCola has filed a records request seeking all AI inputs and outputs, among other information, produced by department communications staff.
