Originally published at: Pluralistic: AI software assistants make the hardest kinds of bugs to spot (04 Aug 2025) – Pluralistic: Daily links from Cory Doctorow
Today's links
- AI software assistants make the hardest kinds of bugs to spot: Errors that are tuned to be statistically indistinguishable from correct code.
- Hey look at this: Delights to delectate.
- Object permanence: A universal remote for killing people; Win10 spies out of the box; MacOS firmware worm; Tor exit-node subpoena, 3D tube-station maps, Papercraft Addams Family house.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
AI software assistants make the hardest kinds of bugs to spot (permalink)
It's easy to understand why some programmers love their AI assistants and others loathe them: the former get to decide how and when they use AI tools, while the latter have AI forced upon them by bosses who hope to fire their colleagues and increase their workload.
Formally, the first group are "centaurs" (people assisted by machines) and the latter are "reverse-centaurs" (people conscripted into assisting machines):
https://pluralistic.net/2025/05/27/rancid-vibe-coding/#class-war
Most workers have parts of their jobs they would happily automate away. I know of a programmer who uses AI to take a first pass at CSS code for formatted output. This is a notoriously tedious chore, and it's not hard to determine whether the AI got it right – just eyeball the output in a variety of browsers. If this was a chore you hated doing and someone gave you an effective tool to automate it, that would be cause for celebration. What's more, if you learned that this was only reliable for a subset of cases, you could confine your use of the AI to those cases.
Likewise, many workers dream of doing something through automation that is so expensive or labor-intensive that they can't possibly do it. I'm thinking here of the film editor who extolled the virtues to me of deepfaking the eyelines of every extra in a crowd scene, which lets them change the focus of the whole scene without reassembling a couple hundred extras, rebuilding the set, etc. This is a brand new capability that increases the creative flexibility of that worker, and no wonder they love it. It's good to be a centaur!
Then there's the poor reverse-centaurs. These are workers whose bosses have saddled them with a literally impossible workload and handed them an AI tool. Maybe they've been ordered to use the tool, or maybe they've been ordered to complete the job (or else) by a boss who was suggestively waggling their eyebrows at the AI tool while giving the order. Think of the freelance writer whom Hearst tasked with singlehandedly producing an entire, 64-page "best-of" summer supplement, including multiple best-of lists, who was globally humiliated when his "best books of the summer" list was chock full of imaginary books that the AI "hallucinated":
No one seriously believes that this guy could have written and fact-checked all that material by himself. Nominally, he was tasked with serving as the "human in the loop" who validated the AI's output. In reality, he was the AI's fall-guy, what Dan Davies calls an "accountability sink," who absorbed the blame for the inevitable errors that arise when an employer demands that a single human sign off on the products of an error-prone automated system that operates at machine speeds.
It's never fun to be a reverse centaur, but it's especially taxing to be a reverse centaur for an AI. AIs, after all, are statistical guessing programs that infer the most plausible next word based on the words that came before. Sometimes this goes badly and obviously awry, like when the AI tells you to put glue or gravel on your pizza. But more often, an AI's errors are precisely, expensively calculated to blend in perfectly with the scenery.
AIs are conservative. They can only output a version of the future that is predicted by the past, proceeding on a smooth, unbroken line from the way things were to the way they are presumed to be. But reality isn't smooth, it's lumpy and discontinuous.
Take the names of common code libraries: these follow naming conventions that make it easy to predict what a library for a given function will be, and to guess what a given library does based on its name. But humans are messy and reality is lumpy, so these conventions are imperfectly followed. All the text-parsing libraries for a programming language may look like this: docx.text.parsing, txt.text.parsing, md.text.parsing – except for one, which defies convention by being named text.odt.parsing. Maybe someone had a brainfart and misnamed the library. Maybe the library was developed independently of everyone else's libraries and later merged. Maybe Mercury is in retrograde. Whatever the reason, the world contains many of these imperfections.
Ask an LLM to write you some software and it will "hallucinate" (that is, extrapolate) libraries that don't exist, because it will assume that all text-parsing libraries follow the convention. It will assume that the library for parsing odt files is called "odt.text.parsing," and it will put a link to that nonexistent library in your code.
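The extrapolation failure above can be sketched in a few lines. All the library names here are hypothetical, carried over from the toy example in the previous paragraph – the point is just that pattern-matching on a naming convention produces a plausible-looking name for the one library that breaks the pattern:

```python
# Hypothetical text-parsing libraries from the example above.
# Three follow the "<format>.text.parsing" convention; one defies it.
REAL_LIBRARIES = {
    "docx.text.parsing",
    "txt.text.parsing",
    "md.text.parsing",
    "text.odt.parsing",  # the convention-breaking outlier
}

def predicted_name(fmt: str) -> str:
    """Guess a library name the way a pattern-matcher would:
    extrapolate from the convention, ignoring real-world exceptions."""
    return f"{fmt}.text.parsing"

# The guess is right for every format that follows convention...
assert predicted_name("docx") in REAL_LIBRARIES
assert predicted_name("md") in REAL_LIBRARIES

# ...but for odt it yields "odt.text.parsing" -- a perfectly
# plausible name for a library that does not exist.
assert predicted_name("odt") not in REAL_LIBRARIES
```

The hallucinated name is statistically *more* conventional than the real one, which is exactly what makes it so hard for a reviewer to flag.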
This creates a vulnerability for AI-assisted code, called "slopsquatting," whereby an attacker predicts the names of libraries AIs are apt to hallucinate and creates libraries with those names, libraries that do what you would expect they'd do, but also inject malicious code into every program that incorporates them:
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
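One partial defense against slopsquatting is mechanical rather than perceptual: never let AI-suggested dependencies reach the installer without being checked against a human-vetted allowlist. A minimal sketch (the package names are invented for illustration):

```python
def vet_dependencies(suggested: list[str], allowlist: list[str]) -> list[str]:
    """Return any suggested packages that aren't on the vetted allowlist.

    A nonempty result flags names that may be hallucinated (and therefore
    slopsquattable) and need human review before any install command runs.
    """
    return sorted(set(suggested) - set(allowlist))

# Hypothetical AI-suggested requirements, one of them hallucinated:
suggested = ["requests", "numpy", "odt.text.parsing"]
allowlist = ["requests", "numpy", "pandas"]

flagged = vet_dependencies(suggested, allowlist)
assert flagged == ["odt.text.parsing"]  # stop and investigate before installing
```

This doesn't make the human in the loop any better at spotting subtle bugs – it just moves one class of error from "vigilance" to "mechanical check," which is where machines beat tired humans.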
This is the hardest type of error to spot, because the AI is guessing the statistically most plausible name for the imaginary library. It's like the AI is constructing one of those spot-the-difference image puzzles on super-hard mode, swapping the fork and knife in a diner's hands from left to right and vice-versa. You couldn't generate a harder-to-spot bug if you tried.
It's not like people are very good at supervising machines to begin with. "Automation blindness" is what happens when you're asked to repeatedly examine the output of a generally correct machine for a long time, and somehow remain vigilant for its errors. Humans aren't really capable of remaining vigilant for things that don't ever happen – whatever attention and neuronal capacity you initially devote to this never-come eventuality is hijacked by the things that happen all the time. This is why the TSA is so fucking amazing at spotting water-bottles on X-rays, but consistently fails to spot the bombs and guns that red team testers smuggle into checkpoints. The median TSA screener spots a hundred water bottles a day, and is (statistically) never called upon to spot something genuinely dangerous to a flight. They have put in their 10,000 hours, and then some, on spotting water bottles, and approximately zero hours on spotting stuff that we really, really don't want to see on planes.
So automation blindness is already going to be a problem for any "human in the loop," from a radiologist asked to sign off on an AI's interpretation of your chest X-ray to a low-paid overseas worker remote-monitoring your Waymo…to a programmer doing endless, high-speed code-review for a chatbot.
But that coder has it worse than all the other in-looped humans. That coder doesn't just have to fight automation blindness – they have to fight automation blindness and spot the subtlest of errors in this statistically indistinguishable-from-correct code. AIs are basically doing bug steganography, smuggling code defects in by carefully blending them in with correct code.
At code shops around the world, the reverse centaurs are suffering. A survey of Stack Overflow users found that AI coding tools are creating history's most difficult-to-discharge technology debt in the form of "almost right" code full of these fiendishly subtle bugs:
As Venturebeat reports, while usage of AI coding assistants is up (from 76% last year to 84% this year), trust in these tools is plummeting – down to 33%, with no bottom in sight. 45% of coders say that debugging AI code takes longer than writing the code without AI at all. Only 29% of coders believe that AI tools can solve complex code problems.
Venturebeat concludes that there are code shops that "solve the 'almost right' problem" and see real dividends from AI tools. What they don't say is that the coders for whom "almost right" isn't a problem are centaurs, not reverse centaurs. They are in charge of their own production and tooling, and no one is using AI tools as a pretext for a relentless hurry-up amidst swingeing cuts to headcount.
The AI bubble is driven by the promise of firing workers and replacing them with automation. Investors and AI companies are tacitly (and sometimes explicitly) betting that bosses who can fire a worker and replace them with a chatbot will pay the chatbot's maker an appreciable slice of that former worker's salary for an AI that takes them off the payroll.
The people who find AI fun or useful or surprising are centaurs. They're making automation choices based on their own assessment of their needs and the AIs' capabilities.
They are not the customers for AI. AI exists to replace workers, not empower them. Even if AI can make you more productive, there is no business model in increasing your pay and decreasing your hours.
AI is about disciplining labor to decrease its share of an AI-using company's profits. AI exists to lower a company's wage-bill, at your expense, with the savings split between your boss and an AI company. When Getty or the NYT or another media company sues an AI company for copyright infringement, that doesn't mean they are opposed to using AI to replace creative workers – they just want a larger slice of the creative workers' salaries in the form of a copyright license from the AI company that sells them the worker-displacing tool.
They'll even tell you so. When the movie studios sued Midjourney, the RIAA (whose most powerful members are subsidiaries of the same companies that own the studios) sent out this press statement, attributed to RIAA CEO Mitch Glazier:
There is a clear path forward through partnerships that both further AI innovation and foster human artistry. Unfortunately, some bad actors – like Midjourney – see only a zero-sum, winner-take-all game.
Get that? The problem isn't that Midjourney wants to replace all the animation artists – it's that they didn't pay the movie studios license fees for the training data. They didn't create "partnerships."
Incidentally: Mitch Glazier's last job was as a Congressional staffer who slipped an amendment into a must-pass bill that killed musicians' ability to claim the rights to their work back after 35 years through "termination of transfer." This was so outrageous that Congress held a special session to reverse it and Glazier lost his job.
Whereupon the RIAA hired him to run the show.
AI companies are not pitching a future of AI-enabled centaurs. They're colluding with bosses to build a world of AI-shackled reverse centaurs. Some people are using AI tools (often standalone tools derived from open models, running on their own computers) to do some fun and exciting centaur stuff. But for the AI companies, these centaurs are a bug, not a feature – and they're the kind of bug that's far easier to spot and crush than the bugs that AI code-bots churn out in volumes no human can catalog, let alone understand.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)
- How 'paper terrorism' hijacked a state system with bogus claims https://www.latimes.com/california/story/2025-07-30/fake-filings-real-consequences-how-paper-terrorism-is-burying-a-state-system-with-bogus-claims
- The Right of Return, from Arkansas to Israel https://coreyrobin.com/2025/08/02/the-right-of-return-from-arkansas-to-israel/
- We Need a Strategy to Win Zohran's Agenda. Call It Plan Z. https://jacobin.com/2025/08/zohran-bernie-party-power-strategy/
- Bottom of the Rabbit Hole https://welovetheswitch.com/theswitch/episodes/
- The Ancient Order of Bali https://www.damninteresting.com/the-ancient-order-of-bali/
Object permanence (permalink)
#20yrsago Napster loses $20MM on $21MM revenue in Q1 05 https://web.archive.org/web/20051112133158/http://www.macworld.co.uk/news/index.cfm?RSS&NewsID=12264
#10yrsago NSA conducted commercial espionage against Japanese government and businesses https://www.washingtonpost.com/world/asia_pacific/wikileaks-says-us-spied-on-another-ally—-this-time-japan/2015/07/31/893e1207-9b4e-4c88-80ad-c8eb79f8df2e_story.html
#10yrsago Windows 10 defaults to keylogging, harvesting browser history, purchases, and covert listening https://www.bgr.com/general/windows-10-upgrade-spying-how-to-opt-out/
#10yrsago UK ECHELON journalist: “Snowden proved spies need accountability” https://theintercept.com/2015/08/03/life-unmasking-british-eavesdroppers/
#10yrsago David Cameron will publish the financial details and viewing habits of all UK porn-watchers https://www.theguardian.com/culture/2015/jul/30/cameron-promises-action-to-restrict-under18s-accessing-pornography
#10yrsago Proof-of-concept firmware worm targets Apple computers https://www.wired.com/2015/08/researchers-create-first-firmware-worm-attacks-macs/
#10yrsago What happened when we got subpoenaed over our Tor exit node https://memex.craphound.com/2015/08/04/what-happened-when-we-got-subpoenaed-over-our-tor-exit-node/
#10yrsago EFF and coalition announce new Do Not Track standard for the Web https://www.eff.org/press/releases/coalition-announces-new-do-not-track-standard-web-browsing
#10yrsago Trophy hunting is “hunting” the way that Big Thunder Mountain is a “train ride” https://www.flickr.com/photos/bar-art/20287603731/
#10yrsago 3D maps of London Underground stations https://www.ianvisits.co.uk/articles/3d-maps-of-every-underground-station-hijklm-14683/
#10yrsago Open “Chromecast killer” committed suicide-by-DRM https://www.techdirt.com/2015/08/04/matchstick-more-open-chromecast-destroyed-drm-announces-plans-to-return-all-funds/
#10yrsago Privatized, for-profit immigration detention centers force detainees to work for $1-3/day https://www.latimes.com/nation/immigration/la-na-detention-immigration-workers-20150803-story.html
#5yrsago Papercraft Addams Family house https://pluralistic.net/2020/08/04/attack-surface-preview/#neat-sweet-petite
#5yrsago Google acquires major stake in ADT https://pluralistic.net/2020/08/04/attack-surface-preview/#vertical-integration
#5yrsago Jack Dorsey's interop plan https://pluralistic.net/2020/08/04/attack-surface-preview/#jack-giant-killer
#5yrsago Collective Action In Tech For Black Lives Matter https://pluralistic.net/2020/08/04/attack-surface-preview/#anti-racist-tech
#5yrsago NSO Group cyberweapons targeted Togo's opposition https://pluralistic.net/2020/08/03/turnkey-authoritarianism/#nso-togo
#5yrsago The sordid tale of We Charity https://pluralistic.net/2020/08/03/turnkey-authoritarianism/#we-charity
#5yrsago A universal remote for killing people https://pluralistic.net/2020/08/03/turnkey-authoritarianism/#minimed
Upcoming appearances (permalink)
- San Diego: ACM Collective Intelligence keynote, Aug 5 https://ci.acm.org/2025/speakers/cory-doctorow/
- Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/
- DC: Enshittification at Politics and Prose, Oct 8 https://politics-prose.com/cory-doctorow-10825
- New Orleans: DeepSouthCon63, Oct 10-12, 2025 http://www.contraflowscifi.org/
- San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25
- Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
Recent appearances (permalink)
- ORG at 20: In conversation with Maria Farrell https://www.youtube.com/watch?v=M9H2An_D6io
- Why aren't we controlling our own tech? (Co-Op Congress) https://www.youtube.com/live/GLrDwHgeCy4?si=NUWxPphk0FS_3g9J&t=4409
- If We Had a Choice, Would We Invent Social Media Again? (The Agenda/TVO) https://www.youtube.com/watch?v=KJw38uIcmEw
Latest books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
- The Lost Cause: a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
- The Internet Con: a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- Red Team Blues: "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, http://redteamblues.com.
- Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022, https://chokepointcapitalism.com
Upcoming books (permalink)
- Canny Valley: A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus and Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (the graphic novel), FirstSecond, 2026
- The Memex Method, Farrar, Straus and Giroux, 2026
- The Reverse-Centaur's Guide to AI, a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (1079 words yesterday, 20530 words total).
-
A Little Brother short story about DIY insulin PLANNING
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X