Council Takes Up Harrell’s “Inherently Unsustainable” Budget; New Spending Includes $800,000 in Speculative AI Spending

Mayor Bruce Harrell, speaking at AI House in September

1. Your sales taxes are going up next year, thanks to a vote by the City Council Tuesday that approved a 0.1 percent increase that can, in the future, be used for any “public safety” purpose, including programs the city is already funding through its general fund.

The new tax, authorized earlier this year by the state legislature, will add $23.7 million in new funding to the budget to pay for 24 new CARE Team first responders, keep the Law Enforcement Assisted Diversion program going, and fund treatment, firefighters, and other non-police public safety programs. It also includes $15 million to supplant general fund spending on CARE, giving the city $15 million more to use on any purpose.

But, as a City Council central staff memo on the budget notes, there’s nothing in the state authorizing legislation that requires the city to use the new sales tax on new programs. (The original idea behind the legislation was that cities would use the tax increase to pay for police.)

According to the central staff analysis, Mayor Bruce Harrell’s proposed budget is unsustainable, relying heavily on fiscal sleight-of-hand to produce a balanced budget in 2026 before tumbling into massive deficits in 2027 and beyond. These tricks include relying on a one-time $141 million fund balance left over from 2025, which won’t be there to balance the budget next year; funding programs that will be necessary long-term, like food assistance for people losing federal benefits, with one-time resources, so that they don’t count toward future deficits; and assuming a $10 million “underspend” every year in the future, allowing the mayor’s budget team to chop $10 million off each year’s expenditures automatically without actually making cuts.

Referring to the fund balance, the memo notes, “The Mayor’s reliance on this $141 million one-time resource to balance his proposed spending for 2026 reflects the inherent unsustainability of the 2026 Proposed Budget, and demonstrates the basic magnitude of the mismatch between the City’s expenditures and its reliable, on-going revenues.”

This damning assessment by the council’s own central staff could have implications throughout the budget, which the city council will begin discussing in detail today. What it could mean for the public safety sales tax, specifically, is that if the council passes Harrell’s unsustainable budget mostly as-is, future councils (and a potential future mayor Katie Wilson) could choose to use the money not to fund CARE and LEAD and treatment, but to pay for police, fire, and other basics that would ordinarily be paid for by the general fund.

In other words: Like the JumpStart payroll tax fund, which was supposed to pay for specific program areas (housing, small businesses, Green New Deal, and equitable development), the public safety tax could be used in the future as a slush fund to pay for programs that have historically been funded out of the city’s general budget.

The proposed budget adds about $53 million in new spending compared to the endorsed 2026 budget.

PubliCola is supported entirely by readers like you.
CLICK BELOW to become a one-time or monthly contributor.

Support PubliCola

 

2. One of the new initiatives Harrell’s proposed 2026 budget would fund is the Permitting Accountability and Customer Trust (PACT) program—an $800,000 proposal that will purportedly “streamline the permitting application process and improve customer services using Artificial Intelligence and data integration.”

Callie Craighead, a spokeswoman for the mayor, told PubliCola the city hasn’t picked a vendor for the PACT funding yet. “The integration of AI tools is part of the City’s most concerted effort to date to reduce permitting time, making it faster and easier to build housing across Seattle,” she said.

Harrell is all-in on AI; at an event at the startup incubator AI House last month, he told the crowd, “If you’re thinking, ‘Maybe there’s an opportunity to monetize these things the city’s working on,’ that’s fair game, by the way. Faster permits—we know that AI can play an incredible role there. …  Time is money, and to the extent we can reduce permit processing times, this would be an added benefit for everyone involved in that process.”

Craighead said the new “AI tools” will help permit applicants catch errors before they submit applications; help “staff apply City code more consistently and efficiently, [and help] the City find opportunities to simplify and streamline policies.”

Some companies claim to reduce permitting times using AI chatbots and near-instant plan reviews, but it’s unclear to what extent these tools can actually supplant the human workers who currently work with developers and homeowners on permits and ensure compliance with the city’s complex codes by, for instance, talking to people, answering questions directly, and inspecting conditions on the ground.

Moving away from actual employees to tools created by AI startups—a change the city’s new AI plan refers to delicately as “workforce transition”—will face strong opposition from the city’s unions (the largest of which, PROTEC17, has thrown its weight behind Harrell’s opponent Wilson), and potential opposition from the public as well. Replacing public workers with software could also have implications for the local economy, which is increasingly tilted in favor of wealthy tech-sector workers. And, of course, the current frenzy of AI hype could turn out to be just that—hype.

The city’s new AI plan says the “City’s AI Proof of Value framework ensures pilots are judged on clear objectives, business value, responsible use, and long-term supportability, not hype-fueled adoption we hear from sales staff.” Which seems, I don’t know… a little doth-protest-too-much?

Police Department Acknowledges Using AI, But Says It Isn’t “Substantive” Enough to Label

By Erica C. Barnett

A recent complaint alleging that the Seattle Police Department used generative AI without attribution, in violation of the city’s AI policy, has been referred by the Office of Police Accountability as a supervisor action—“a minor policy violation or performance issue that is best addressed through training, communication, or coaching by the employee’s supervisor.”

The complaint, which is anonymous, alleged that a number of public statements from SPD—including an August blog post about recent shootings, an April statement from SPD Chief Shon Barnes about a new “Immediate Violent Crime Prevention & Enforcement Plan,” and a blog post about Barnes’ confirmation in July—were created with a generative AI tool such as ChatGPT.

According to the widely used GPTZero AI detector, the August blog post is likely “100 percent AI”-generated, as is the April statement from Barnes; the July blog post appeared to be a mix of AI and human inputs, according to GPTZero, with 29 of its 42 sentences “likely AI generated.” ZeroGPT, another AI detector, found similar results, except that it was more confident that most of the July post was AI-generated.

As a baseline, I checked PubliCola’s last several posts using both AI detectors; both found them to be 100 percent human-generated.

Since 2023, the city has had a policy on generative AI that requires city departments to label AI-generated text. “If text generated by an AI system is used substantively in a final product, attribution to the relevant AI system is required,” the policy says. According to IT Department spokeswoman Megan Erb, city departments are supposed to “determine their standard for substantive use in line with the AI policy principles and relevant intellectual property laws.”

None of SPD’s communications have been labeled to indicate they were produced with AI.

In April, DivestSPD and other outlets reported that OPA recommended SPD come up with its own AI policy after discovering that a sergeant was using ChatGPT to generate reports. OPA said it could not comment on the complaint alleging AI use by the communications team, and SPD did not respond to questions about that recommendation. Currently, SPD does not have its own AI policy.

Last week, Mayor Bruce Harrell and the city’s IT Department director Rob Lloyd announced a new citywide AI policy aimed chiefly at allowing AI pilots to help automate city functions like permitting (a prospect that raises unrelated, but serious, questions about the human labor force doing many jobs that the city may eventually replace with AI). When it comes specifically to using generative AI to produce text-based documents, however, the new policy is identical to the old one.

Lloyd said there “aren’t any penalties, per se,” for departments that misuse AI tools, “but you do have to go through a rigorous process.”

City departments are required to get permission to use AI systems, including free software such as ChatGPT that poses potential privacy risks. Erb told PubliCola that “SPD was authorized to use specific generative AI applications under City policy following a standard security and privacy review.” (We’ve followed up for more details on which applications SPD is authorized to use.)

The city’s generative AI policy does not set specific thresholds for what constitutes “substantive” use of AI-generated text, leaving the term open to interpretation. According to a spokesperson for SPD, the department “has not used generative AI in any substantive way as part of its communications.”


However, the spokesperson continued, “We are testing use cases, always with a human in the loop. To the limited extent it has been tried, we have explored using it to improve the clarity of existing writing for the public, find ways to get closer to presenting information in plain language, and brainstorm ideas. It is not used as a primary author of content.”

SPD’s legal counsel, Becca Boatright, said that “tools that use AI for grammar, suggested wording changes, suggested brevity/clarity, etc. are not considered ‘generative’ AI for purposes of this policy.”

“Technology is always evolving, and like laptops, social media, and spellchecking tools, AI is another tool in our toolbox to evolve communications, especially given staffing levels and our commitment to share information that educates residents,” the SPD spokesperson said. “It can help do tasks for experienced individuals, allowing them to dedicate more of their time to other responsibilities that align with SPD’s mission and values.”

Because the OPA complaint has been referred as a “supervisor action,” it’s likely that SPD’s chief communications officer, Barbara DeLollis, will decide whether and how to respond to the issues it raises about the use of AI by her own office. SPD did not respond to PubliCola’s question about whether the department will take any action to address the issues raised in the recent OPA complaint.

AI detectors like GPTZero are not infallible. They use large datasets, including both AI- and human-generated text, to analyze patterns that indicate the likelihood that a text was AI-generated. Signs that a document was generated by, or with the help of, AI, include buzzwords or repetitive phrases, uniform sentence structure and length, predictable formatting (such as bullet-pointed lists and frequent use of em-dashes), frequent use of passive voice, and an excessively formal or robotic tone.
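To make the idea concrete: some of the surface-level signals described above can be measured mechanically. The sketch below is a purely illustrative example, not how GPTZero or ZeroGPT actually work (commercial detectors rely on trained statistical models, not hand-written rules); the function name and thresholds are hypothetical.

```python
import re

def stylometric_signals(text):
    """Compute a few crude stylistic signals of the kind described above:
    uniform sentence length, em-dash frequency, and bullet-point formatting.
    Illustrative only; real AI detectors use trained models, not these rules."""
    sentences = [s for s in re.split(r'[.!?]+\s+', text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    # Low variance in sentence length suggests the uniform rhythm
    # often associated with machine-generated prose.
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return {
        "sentence_count": len(sentences),
        "mean_sentence_length": mean,
        "length_variance": variance,
        "em_dashes_per_sentence": text.count("\u2014") / len(sentences),
        "bullet_lines": sum(1 for line in text.splitlines()
                            if line.lstrip().startswith(("-", "*", "\u2022"))),
    }
```

Feeding a passage of identical, evenly measured sentences into this function would yield a length variance of zero, the kind of uniformity a detector might flag; human prose usually varies far more.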

Here, for example, is the conclusion of the statement from Barnes the AI detector determined was 100 percent AI-generated, which featured a bullet-pointed list: “Public safety is not just about enforcement—it’s about collaboration. The support of our city officials, and our community is vital in ensuring we create long-term, sustainable solutions. I appreciate our ongoing partnerships and look forward to working together to build a safer Seattle.”

And here are the first two paragraphs of the August post about gun violence, which the two AI detectors also suggested was completely AI-generated:

Over the past four days, the Seattle community tragically experienced three separate incidents of gun violence, resulting in the loss of lives. On Thursday, we were confronted with a targeted homicide occurring in front of a place of worship. While the motive for this premeditated act is still under investigation, we recognize the profound impact it has had on those who witnessed this traumatic event, as well as the broader community.

In the early morning hours of Sunday, two additional homicides occurred. The first stemmed from an unauthorized and unregulated gathering, which culminated in the loss of another community member. Shortly thereafter, a third homicide was reported, involving an individual discovered deceased in a parking lot, potentially linked to a vehicle collision or altercation.

Of course, humans can also write robotically or use AI-style formatting.

To find out more about SPD’s use of AI, PubliCola has filed a records request seeking all AI inputs and outputs, among other information, produced by department communications staff.