Pluralistic: Conspiratorialism as a material phenomenon (29 Oct 2024)




Today's links



A man lying in a hospital bed, wearing a sinister mind-control helmet. His hands are clenched into fists and he is grimacing. Through a hole in the wall we see a prancing vaudevillian, whose head has been replaced with the head of Mark Zuckerberg's Metaverse avatar. Behind this figure is the giant red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.' At the end of the bed stand a trio - Mom, Dad and daughter - in Sunday best clothes, their backs to us, staring at the mind-controlled man's face.

Conspiratorialism as a material phenomenon (permalink)

I think it behooves us to be a little skeptical of stories about AI driving people to believe wrong things and commit ugly acts. Not that I like the AI slop that is filling up our social media, but when we look at the ways that AI is harming us, slop is pretty low on the list.

The real AI harms come from the actual things that AI companies sell AI to do. There's the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:

https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/

Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – it magnifies the biases already present in these systems and, even worse, gives that bias the veneer of scientific neutrality. This process is called "empiricism-washing," and you know you're experiencing it when you hear some variation on "it's just math, math can't be racist":

https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology

When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an "accountability sink" that allows the company to disclaim responsibility for the thefts:

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

When AI is used to perform high-velocity "decision support" that is supposed to inform a "human in the loop," it quickly overwhelms its human overseer, who takes on the role of "moral crumple zone," pressing the "OK" button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:

https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat

But it's potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to "upcode" a patient's treatment. Those AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don't have time to treat their patients:

https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

My point is that "worrying about AI" is a zero-sum game. When we train our fire on the stuff that isn't important to the AI stock swindlers' business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.

AI hasn't attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system – from investors, from customers, from easily gulled big-city mayors – is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they're not as visually arresting, but discrediting them is financially arresting, and that's what really matters.

All that said: AI slop is real, there is a lot of it, and even though it doesn't warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering.

AI slop has turned Facebook into an anaerobic lagoon of botshit, just the laziest, grossest engagement bait, much of it the product of rise-and-grind spammers who avidly consume get rich quick "courses" and then churn out a torrent of "shrimp Jesus" and fake chainsaw sculptures:

https://www.404media.co/email/1cdf7620-2e2f-4450-9cd9-e041f4f0c27f/

For poor engagement farmers in the global south chasing the fractional pennies that Facebook shells out for successful clickbait, the actual content of the slop is beside the point. These spammers aren't necessarily tuned into the psyche of the wealthy-world Facebook users who represent Meta's top monetization subjects. They're just trying everything and doubling down on anything that moves the needle, A/B splitting their way into weird, hyper-optimized, grotesque crap:

https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/

In other words, Facebook's AI spammers are laying out a banquet of arbitrary possibilities, like the letters on a Ouija board, and the Facebook users' clicks and engagement are a collective ideomotor response, moving the algorithm's planchette to the options that tug hardest at our collective delights (or, more often, disgusts).

So, rather than thinking of AI spammers as creating the ideological and aesthetic trends that drive millions of confused Facebook users into condemning, praising, and arguing about surreal botshit, it's more true to say that spammers are discovering these trends within their subjects' collective yearnings and terrors, and then refining them by exploring endlessly ramified variations in search of unsuspected niches.

(If you know anything about AI, this may remind you of something: a Generative Adversarial Network, in which one bot creates variations on a theme and another bot ranks how closely the variations approach some ideal. In this case, the spammers are the generators and the Facebook users they elicit reactions from are the discriminators.)

https://en.wikipedia.org/wiki/Generative_adversarial_network
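That adversarial loop can be sketched as a toy simulation. This is a hedged illustration of the dynamic, not a real GAN – there are no neural networks here, and the hidden "sweet spot," the step size, and the scoring function are all invented for the example:

```python
import random

# Toy sketch of the generator/discriminator dynamic described above.
# The "generator" is a spammer mutating a post; the "discriminator" is
# the audience's engagement score, which the spammer can only probe by
# posting variations. All numbers are invented for illustration.

AUDIENCE_SWEET_SPOT = 0.8  # hidden collective preference the spammer can't see

def engagement(post):
    """Audience reaction: higher the closer a post lands to the sweet spot."""
    return 1.0 - abs(post - AUDIENCE_SWEET_SPOT)

def spam_campaign(rounds=200, seed=42):
    rng = random.Random(seed)
    best = rng.random()                            # start with an arbitrary post
    for _ in range(rounds):
        variant = best + rng.uniform(-0.1, 0.1)    # A/B-test a small variation
        if engagement(variant) > engagement(best): # double down on what moves the needle
            best = variant
    return best

post = spam_campaign()
print(round(post, 2))  # drifts toward the audience's hidden preference
```

The spammer never needs to understand the audience: blind hill-climbing on engagement signals is enough to "discover" whatever the crowd's clicks reward.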

I got to thinking about this today while reading User Mag, Taylor Lorenz's superb newsletter, and her reporting on a new AI slop trend, "My neighbor’s ridiculous reason for egging my car":

https://www.usermag.co/p/my-neighbors-ridiculous-reason-for

The "egging my car" slop consists of endless variations on a story in which the poster (generally a figure of sympathy, canonically a single mother of newborn twins) complains that her awful neighbor threw dozens of eggs at her car to punish her for parking in a way that blocked his elaborate Hallowe'en display. The text is accompanied by an AI-generated image showing a modest family car that has been absolutely plastered with broken eggs, dozens upon dozens of them.

According to Lorenz, variations on this slop are topping very large Facebook discussion forums totalling millions of users, like "Movie Character…," "USA Story," "Volleyball Women," "Top Trends," "Love Style," and "God Bless." These posts link to SEO sites laden with programmatic advertising.

The funnel goes:

i. Create outrage and hence broad reach;

ii. A small percentage of those who see the post will click through to the SEO site;

iii. A small fraction of those users will click a low-quality ad;

iv. The ad will pay homeopathic sub-pennies to the spammer.

The revenue per user on this kind of scam is next to nothing, so it only works at very broad reach, which is why the spam is engineered for maximum engagement. The more discussion a post generates, the more users Facebook recommends it to.
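To see why the reach has to be so broad, here's a back-of-the-envelope sketch of the funnel. Every rate and price below is an invented, illustrative assumption, not reported data:

```python
# Back-of-the-envelope sketch of the four-step spam funnel described above.
# All rates and prices are invented assumptions for illustration only.

reach = 1_000_000             # i.  users who see the outrage-bait post
click_through = 0.005         # ii. fraction who click through to the SEO site
ad_click = 0.01               # iii. fraction of those who click a low-quality ad
revenue_per_ad_click = 0.02   # iv. dollars per ad click ("homeopathic sub-pennies")

visitors = reach * click_through            # site visitors
ad_clicks = visitors * ad_click             # ad clicks
revenue = ad_clicks * revenue_per_ad_click  # spammer's take

print(f"${revenue:.2f} from {reach:,} impressions")
```

Under these made-up numbers, a million impressions nets the spammer about a dollar – which is exactly why the scheme only pencils out at enormous, engagement-maximized scale.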

This is very effective engagement bait. Almost all AI slop gets some free engagement in the form of arguments between users who don't know they're commenting on an AI scam and people hectoring them for falling for the scam. This is like the free square in the middle of a bingo card.

Beyond that, there's multivalent outrage: some users are furious about food wastage; others about the poor, victimized "mother" (some users are furious about both). Not only do users get to voice their fury at both of these imaginary sins, they can also argue with one another about whether, say, food wastage even matters when compared to the petty-minded aggression of the "perpetrator." These discussions also offer lots of opportunity for violent fantasies about the bad guy getting a comeuppance, offers to travel to the imaginary AI-generated suburb to dole out a beating, etc. All in all, the spammers behind this tedious fiction have really figured out how to rope in all kinds of users' attention.

Of course, the spammers don't get much from this. There isn't such a thing as an "attention economy." You can't use attention as a unit of account, a medium of exchange or a store of value. Attention – like everything else that you can't build an economy upon, such as cryptocurrency – must be converted to money before it has economic significance. Hence that tooth-achingly trite high-tech neologism, "monetization."

The monetization of attention is very poor, but AI is heavily subsidized or even free (for now), so the largest venture capital and private equity funds in the world are pouring billions in public pension money and rich people's savings into CO2 plumes, GPUs, and botshit so that a bunch of hustle-culture weirdos in the Pacific Rim can make a few dollars by tricking people into clicking through engagement bait slop – twice.

The slop isn't the point of this, but the slop does have the useful function of making the collective ideomotor response visible and thus providing a peek into our hopes and fears. What does the "egging my car" slop say about the things that we're thinking about?

Lorenz cites Jamie Cohen, a media scholar at CUNY Queens, who points out that the subtext of this slop is "fear and distrust in people about their neighbors." Cohen predicts that "the next trend is going to be stranger and more violent."

This feels right to me. The corollary of mistrusting your neighbors, of course, is trusting only yourself and your family. Or, as Margaret Thatcher liked to say, "There is no such thing as society. There are individual men and women and there are families."

We are living in the tail end of a 40-year experiment in structuring our world as though "there is no such thing as society." We've gutted our welfare net, shut down or privatized public services, and all but abolished solidaristic institutions like unions.

This isn't mere aesthetics: an atomized society is far more hospitable to extreme wealth inequality than one in which we are all in it together. When your power comes from being a "wise consumer" who "votes with your wallet," then all you can do about the climate emergency is buy a different kind of car – you can't build the public transit system that will make cars obsolete.

When you "vote with your wallet" all you can do about animal cruelty and habitat loss is eat less meat. When you "vote with your wallet" all you can do about high drug prices is "shop around for a bargain." When you vote with your wallet, all you can do when your bank forecloses on your home is "choose your next lender more carefully."

Most importantly, when you vote with your wallet, you cast a ballot in an election that the people with the thickest wallets always win. No wonder those people have spent so long teaching us that we can't trust our neighbors, that there is no such thing as society, that we can't have nice things. That there is no alternative.

The commercial surveillance industry really wants you to believe that they're good at convincing people of things, because that's a good way to sell advertising. But claims of mind-control are pretty goddamned improbable – everyone who ever claimed to have managed the trick was lying, from Rasputin to MK-ULTRA:

https://pluralistic.net/HowToDestroySurveillanceCapitalism

Rather than seeing these platforms as convincing people of things, we should understand them as discovering and reinforcing the ideology that people have been driven to by material conditions. Platforms like Facebook show us to one another, let us form groups that can imperfectly fill in for the solidarity we're desperate for after 40 years of "no such thing as society."

The most interesting thing about "egging my car" slop is that it reveals that so many of us are convinced of two contradictory things: first, that everyone else is a monster who will turn on you for the pettiest of reasons; and second, that we're all the kind of people who would stick up for the victims of those monsters.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Disney sued by “inventor” of FastPass system https://web.archive.org/web/20041101023537/http://www.patentlyobviousblog.com/2004/10/patent_suit_all.html

#15yrsago Mickey Mouse comics drawn by concentration camp prisoner https://web.archive.org/web/20091103172853/http://www.scribd.com/doc/21860527/Horst-Rosenthal-Mickey-Mouse-in-Gurs

#15yrsago UK ISP TalkTalk threatens lawsuit over 3-strikes disconnection proposal https://www.theguardian.com/media/2009/oct/29/talktalk-threatens-legal-action-mandelson

#15yrsago My Times editorial on British plan to cut relatives of accused infringers off from the net https://www.thetimes.com/article/denying-physics-wont-save-the-video-stars-wf52wrrs2r0

#10yrsago The rise and fall of American Hallowe’en costumes https://www.npr.org/sections/money/2014/10/27/359324848/witches-vampires-and-pirates-5-years-of-americas-most-popular-costumes

#10yrsago Profile of MITSFS, MIT’s 65-year-old science fiction club https://web.archive.org/web/20141023191938/http://www.technologyreview.com/article/531401/60000-books-and-a-few-toy-bananas/

#10yrsago Malware authors use Gmail drafts as dead-drops to talk to bots https://www.wired.com/2014/10/hackers-using-gmail-drafts-update-malware-steal-data/

#10yrsago Verizon’s new big budget tech-news site prohibits reporting on NSA spying or net neutrality https://www.dailydot.com/debug/verizon-sugarstring-us-surveillance-net-neutrality/

#10yrsago Every artist’s “how I made it” talk, ever https://www.youtube.com/watch?v=l_F9jxsfGCw

#10yrsago The Terrible Sea Lion: a social media parable https://wondermark.com/c/1062

#10yrsago Opsec, Snowden style https://web.archive.org/web/20141028183511/https://firstlook.org/theintercept/2014/10/28/smuggling-snowden-secrets/

#5yrsago Elizabeth Warren proposes a 4-year ban on government officials going to work for “market dominant” companies https://medium.com/@teamwarren/breaking-the-political-influence-of-market-dominant-companies-8ff27e99ada0

#5yrsago Behind the scenes, “plain” text editing is unbelievably complex and weird https://lord.io/text-editing-hates-you-too/

#5yrsago Despite denials, it’s clear that Google’s new top national security hire was instrumental to Trump’s #KidsInCages policy https://www.buzzfeednews.com/article/ryanmac/miles-taylor-family-separation-dhs-despite-google-denial

#5yrsago 70% of millennials would vote for a socialist https://victimsofcommunism.org/annual-poll/2019-annual-poll/

#5yrsago Davos in the Desert is back, and banks and hedge fund managers are flocking to Mister Bone-Saw’s side https://www.bbc.com/news/business-50219035

#5yrsago Podcast of Affordances: a new science fiction story that climbs the terrible technology adoption curve https://ia903108.us.archive.org/3/items/Cory_Doctorow_Podcast_314/Cory_Doctorow_Podcast_314_-Affordances.mp3

#5yrsago Kindness and Wonder: Mr Rogers biography is a study in empathy and a deep, genuine love for children https://memex.craphound.com/2019/10/29/kindness-and-wonder-mr-rogers-biography-is-a-study-in-empathy-and-a-deep-genuine-love-for-children/


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
  • Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • Enshittification: a nonfiction book about platform decay for Farrar, Straus and Giroux. Today's progress: 895 words (73067 words total).
  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025

Latest podcast: Spill, part four (a Little Brother story) https://craphound.com/littlebrother/2024/10/28/spill-part-four-a-little-brother-story/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

I’ve never used Facebook so I can’t speak to the phenomenon there, but I occasionally see the exact same garbage on Reddit from complete strangers. I’ve thought about this before – the absolutely surreal level of stupid bullshit that the current systems are incentivizing – and this post really zeroes in on the heart of it. I was expecting a completely different post based on the title and art (which I love btw, meta Zuck never disappoints… or always disappoints, which is perhaps… the point!) but I was pleasantly surprised.

I found myself wondering how I help this situation. I don’t know my neighbors terribly well. I’m not on nextdoor for the exact same reason I’m not on facebook. But it feels like part of the solution here is getting to know them better. Building trust so they don’t think I would egg their car or vice versa, or any other ridiculous bullshit. The reality is that I’m a very private person but that plays right into the desires of the rich and powerful just as much. My neighbors probably mistrust me simply because I’m not on nextdoor, because they don’t see me much, etc. The balance between privacy and trust is a tricky one. Any information I share with neighbors will go right into their phone / computer / whatever and that is very sensitive info indeed because my neighbors know where I live.


I work in the field of stochastic control, trying to build tools that answer the question of “how do you best react when random events dominate?” My $0.02, but I think the math has some answers, or at least a decent tool set to apply.

The cost of our daily randomness is just noise for wealthy individuals, but more deadly if your wealth is low. When you “vote with your wallet” you try to control for some future outcome, but if your wallet isn’t big enough to survive losing a big bet, then you are constrained to go with a certain outcome. That can be an outcome that is certainly worse for you. The rational, measurable cost of the randomness can be so high that taking a loss makes sense.

You might fill your gas tank now, at a price you think is a little too high, knowing that the local station can change its prices several times a day. The cost of driving to the station and the risk of a higher price is something you might think “isn’t worth it”. You’re probably right, we humans actually do this computation intuitively, and fairly well in a lot of cases. The gas station wins in the end, because you pay more to control your gas costs in the face of the noisy price.
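That gas-tank intuition can be made concrete with a toy Monte Carlo sketch – every price and probability below is invented for illustration:

```python
import random

# Toy sketch of the gas-tank decision described above: pay a known price now,
# or gamble on tomorrow's fluctuating price. All prices and probabilities are
# invented for illustration.

PRICE_NOW = 3.60   # posted price per gallon at the moment
TANK = 12          # gallons to fill

def later_price(rng):
    """Tomorrow's price: usually a bit lower, but a one-in-four chance of a spike."""
    return rng.choice([3.30, 3.40, 3.50, 4.10])

def expected_cost_waiting(trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(later_price(rng) * TANK for _ in range(trials)) / trials

cost_now = PRICE_NOW * TANK          # certain outcome: $43.20
cost_wait = expected_cost_waiting()  # uncertain outcome, ~$42.90 on average
print(f"fill now: ${cost_now:.2f}, expected cost of waiting: ${cost_wait:.2f}")
# Waiting is cheaper *on average*, but only a buyer who can absorb the
# $49.20 spike day can afford to play the average. A constrained buyer
# rationally pays the certain, slightly-worse price.
```

The gap between the certain price and the expected price is exactly the premium the station collects for selling certainty – and the smaller your wallet, the more often you're forced to pay it.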

When companies say they can control our decision making process, yes, I agree that it’s nonsense. They are trying to control a person who, in turn, is trying to control their own outcomes in the face of a foggy future. Too many steps removed.

However, take the rational, computable response to uncertainty and add an amplification of the perceived risk. This is where Russia, Iran, China and, to a lesser extent, North Korea get the big wins. I’d include the anti-democratic forces in the US and Canada in this. They are trying to make the world seem more dangerous than it is: the neighbour more likely to be a nutball, the foreigner more likely to be a resource thief. In the face of that uncertainty, even reasonable people will select an outcome which is a net loss, but which has a lower cost than the randomness they face – or, in this case, think they face.

AI-powered fake accounts are an enormous threat in this game. The noise blocks out the ability to find sensible, net-win solutions to our problems, and generates fear that the world is worse than it is. Our global, autocratic adversaries, seeking to ensure their domination in the face of the really big geopolitical risks, try to cripple our collective decision making. They are probably poorer for it, but believe they don’t face outcomes that involve personal losses they can’t endure. Ironically, some of those outcomes induce climate-related losses, which they won’t survive.

I’ve never had the patience to dig through classical, neo-liberal economic math to make a firm statement on this, but the math of risk and stochastic control is new enough that I’m pretty sure the neo-liberal schools don’t price it properly.

Edit: coffee … need coffee


your mention of stochastic processes and the influence of random events brings to mind this article that you might find interesting:

Your idea of uncertainty leading to people trading long-term gains for short-term certainty would just turbo-charge the process modeled there.


That’s a really good question. I like the more dynamic economic models, but they are very hard to do in a way that yields clean results. I’ve seen people get carried away and end up with a mushy, hard to interpret pile of equations. Even the nicely done work ends up with lots of tables summarizing various constraints and outcomes. I’ll have to catch up with some of the papers mentioned in that article; it sounds like they’ve managed to match some real world conditions too.

I suppose you would need an economic model of exchange where one wealthy agent can affect the uncertainty of poorer agents. I’d be surprised if the outcome wasn’t that the wealthy agent gets richer.

If I had a few years “tenure” I would sit down with a friend and colleague, Matheus Grasselli, start with the Stock Flow Consistent economic model work he does, and try to answer that (and a bunch of other questions in a way that doesn’t ignore the price and risk of the ‘options’ we hold in our economy). His papers are nicely developed.
