Pluralistic: Bossware is unfair (in the legal sense, too) (26 Nov 2024)

Originally published at: Pluralistic: Bossware is unfair (in the legal sense, too) (26 Nov 2024) – Pluralistic: Daily links from Cory Doctorow



Today's links



A sweatshop: women sit around a table sewing. Through the lone window, we can see a 'code waterfall' effect as seen in the credits of the Wachowskis' 'Matrix' movies. To their left stands a man in a pin-stripe suit, looking at his watch. His body language radiates impatience. His eyes have been replaced by the staring red eyes of HAL9000 from Kubrick's '2001: A Space Odyssey.' Each woman's head is surmounted by a set of floating Victorian calipers.

Bossware is unfair (in the legal sense, too) (permalink)

You can get into a lot of trouble by assuming that rich people know what they're doing. For example, you might assume that ad-tech works – bypassing people's critical faculties, reaching inside their minds and brainwashing them with Big Data insights – because if that's not what's happening, then why would rich people pour billions into those ads?

https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#adtech-bubble

You might assume that private equity looters make their investors rich, because otherwise, why would rich people hand over trillions for them to play with?

https://thenextrecession.wordpress.com/2024/11/19/private-equity-vampire-capital/

The truth is, rich people are suckers like the rest of us. If anything, succeeding once or twice makes you an even bigger mark, with a sense of your own infallibility that inflates to fill the bubble your yes-men seal you inside of.

Rich people fall for scams just like you and me. Anyone can be a mark. I was:

https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security

But though rich people can fall for scams the same way you and I do, the way those scams play out is very different when the marks are wealthy. As Keynes had it, "The market can remain irrational longer than you can remain solvent." When the marks are rich (or worse, super-rich), they can be played for much longer before they go bust, creating the appearance of solidity.

Noted Keynesian John Kenneth Galbraith had his own thoughts on this. Galbraith coined the term "bezzle" to describe "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." In that magic interval, everyone feels better off: the mark thinks he's up, and the con artist knows he's up.

Rich marks have looong bezzles. Empirically incorrect ideas grounded in the most outrageous superstition and junk science can take over whole sections of your life, simply because a rich person – or a group of rich people – is convinced that they're good for you.

Take "scientific management." In the early 20th century, the con artist Frederick Taylor convinced rich industrialists that he could increase their workers' productivity through a kind of caliper-and-stopwatch driven choreographer:

https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust

Taylor and his army of labcoated sadists perched at the elbows of factory workers (whom Taylor referred to as "stupid," "mentally sluggish," and "an ox") and scripted their motions to a fare-thee-well, transforming their work into a kind of kabuki of obedience. They weren't more efficient, but they looked smart, like obedient robots, and this made their bosses happy. The bosses shelled out fortunes for Taylor's services, even though the workers who followed his prescriptions were less efficient and generated fewer profits. Bosses were so dazzled by the spectacle of a factory floor of crisply moving people interfacing with crisply working machines that they failed to understand that they were losing money on the whole business.

To the extent they noticed that their revenues were declining after implementing Taylorism, they assumed that this was because they needed more scientific management. Taylor had a sweet con: the worse his advice performed, the more reasons there were to pay him for more advice.

Taylorism is a perfect con to run on the wealthy and powerful. It feeds into their prejudice and mistrust of their workers, and into their misplaced confidence in their own ability to understand their workers' jobs better than their workers do. There's always a long dollar to be made playing the "scientific management" con.

Today, there's an app for that. "Bossware" is a class of technology that monitors and disciplines workers, and it was supercharged by the pandemic and the rise of work-from-home. Combine bossware with work-from-home and your boss gets to control your life even when you're in your own home – "work from home" becomes "live at work":

https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware

Gig workers are at the white-hot center of bossware. Gig work promises "be your own boss," but bossware puts a Taylorist caliper-wielder into your phone, monitoring and disciplining you as you drive your own car around delivering parcels or picking up passengers.

In automation terms, a worker hitched to an app this way is a "reverse centaur." Automation theorists call a human augmented by a machine a "centaur" – a human head supported by a machine's tireless and strong body. A "reverse centaur" is a machine augmented by a human – like the Amazon delivery driver whose app goads them to make inhuman delivery quotas while punishing them for looking in the "wrong" direction or even singing along with the radio:

https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips

Bossware pre-dates the current AI bubble, but AI mania has supercharged it. AI pumpers insist that AI can do things it positively cannot do – rolling out an "autonomous robot" that turns out to be a guy in a robot suit, say – and rich people are groomed to buy the services of "AI-powered" bossware:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

For an AI scammer like Elon Musk or Sam Altman, the fact that an AI can't do your job is irrelevant. From a business perspective, the only thing that matters is whether a salesperson can convince your boss that an AI can do your job – whether or not that's true:

https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter

The fact that AI can't do your job, but that your boss can be convinced to fire you and replace you with the AI that can't do your job, is the central fact of the 21st century labor market. AI has created a world of "algorithmic management" where humans are demoted to reverse centaurs, monitored and bossed about by an app.

The techbro's overwhelming conceit is that nothing is a crime, so long as you do it with an app. Just as fintech is designed to be a bank that's exempt from banking regulations, the gig economy is meant to be a workplace that's exempt from labor law. But this wheeze is transparent, and easily pierced by enforcers, so long as those enforcers want to do their jobs. One such enforcer is Alvaro Bedoya, an FTC commissioner with a keen interest in antitrust's relationship to labor protection.

Bedoya understands that antitrust has a checkered history when it comes to labor. As he's written, the history of antitrust is a series of incidents in which Congress revised the law to make it clear that forming a union was not the same thing as forming a cartel, only to be ignored by boss-friendly judges:

https://pluralistic.net/2023/04/14/aiming-at-dollars/#not-men

Bedoya is no mere historian. He's an FTC Commissioner, one of the most powerful regulators in the world, and he's profoundly interested in using that power to help workers, especially gig workers, whose misery starts with systemic, wide-scale misclassification as contractors:

https://pluralistic.net/2024/02/02/upward-redistribution/

In a new speech to NYU's Wagner School of Public Service, Bedoya argues that the FTC's existing authority allows it to crack down on algorithmic management – that is, algorithmic management is illegal, even if you break the law with an app:

https://www.ftc.gov/system/files/ftc_gov/pdf/bedoya-remarks-unfairness-in-workplace-surveillance-and-automated-management.pdf

Bedoya starts with a delightful analogy to The Hawtch-Hawtch, a mythical town from a Dr Seuss poem. The Hawtch-Hawtch economy is based on beekeeping, and the Hawtchers develop an overwhelming obsession with their bee's laziness, and determine to wring more work (and more honey) out of him. So they appoint a "bee-watcher." But the bee doesn't produce any more honey, which leads the Hawtchers to suspect their bee-watcher might be sleeping on the job, so they hire a bee-watcher-watcher. When that doesn't work, they hire a bee-watcher-watcher-watcher, and so on and on.

For gig workers, it's bee-watchers all the way down. Call center workers are subjected to "AI" video monitoring, and "AI" voice monitoring that purports to measure their empathy. Another AI times their calls. Two more AIs analyze the "sentiment" of the calls and the success of workers in meeting arbitrary metrics. On average, a call-center worker is subjected to five forms of bossware, which stand at their shoulders, marking them down and brooking no debate.

For example, when an experienced call center operator fielded a call from a customer with a flooded house who wanted to know why no one from the company's repair plan had come out to address the flooding, the operator was punished by the AI for failing to try to sell the customer a repair plan. There was no way for the operator to protest that the customer already had a repair plan, and had called to complain about it.

Workers report being sickened by this kind of surveillance, literally – stressed to the point of nausea and insomnia. Ironically, one of the most pervasive sources of automation-driven sickness is the "AI wellness" apps that bosses are sold by AI hucksters:

https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying

The FTC has broad authority to block "unfair trade practices," and Bedoya builds the case that this is an unfair trade practice. Proving an unfair trade practice is a three-part test: a practice is unfair if it causes "substantial injury," can't be "reasonably avoided," and isn't outweighed by a "countervailing benefit." In his speech, Bedoya makes the case that algorithmic management satisfies all three steps and is thus illegal.

On the question of "substantial injury," Bedoya describes the workday of warehouse workers at ecommerce companies. He describes one worker who is monitored by an AI that requires him to pick and drop an object off a moving belt every 10 seconds, for ten hours per day. The worker's performance is tracked on a leaderboard, supervisors punish and scold workers who don't make quota, and the algorithm auto-fires those who fall short.

Under those conditions, it was only a matter of time until the worker injured two of his discs and was permanently disabled; the company was found 100% responsible for the injury. OSHA found a "direct connection" between the algorithm and the injury. No wonder warehouses sport vending machines that sell painkillers rather than sodas. It's clear that algorithmic management leads to "substantial injury."

What about "reasonably avoidable?" Can workers avoid the harms of algorithmic management? Bedoya describes the experience of NYC rideshare drivers who attended a round-table with him. The drivers describe logging tens of thousands of successful rides for the apps they work for, on promise of "being their own boss." But then the apps start randomly suspending them, telling them they aren't eligible to book a ride for hours at a time, sending them across town to serve an underserved area and still suspending them. Drivers who stop for coffee or a pee are locked out of the apps for hours as punishment, and so drive 12-hour shifts without a single break, in hopes of pleasing the inscrutable, high-handed app.

All this, as drivers' pay is falling and their credit card debts are mounting. No one will explain to drivers how their pay is determined, though the legal scholar Veena Dubal's work on "algorithmic wage discrimination" reveals that rideshare apps temporarily increase the pay of drivers who refuse rides, only to lower it again once they're back behind the wheel:

https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men

This is like the pit boss who gives a losing gambler some freebies to lure them back to the table, over and over, until they're broke. No wonder they call this a "casino mechanic." There are only two major rideshare apps, and they both use the same high-handed tactics. For Bedoya, this satisfies the second test for an "unfair practice" – it can't be reasonably avoided. If you drive rideshare, you're trapped by the harmful conduct.

The final prong of the "unfair practice" test is whether the conduct has "countervailing value" that makes up for this harm.

To address this, Bedoya goes back to the call center, where operators' performance is assessed by "Speech Emotion Recognition" algorithms, a pseudoscientific hoax that purports to be able to determine your emotions from your voice. These SERs don't work – for example, they might interpret a customer's laughter as anger. But they fail differently for different kinds of workers: workers with accents – from the American south, or the Philippines – attract more disapprobation from the AI. Half of all call center workers are monitored by SERs, and a quarter of workers have SERs scoring them "constantly."

Bossware AIs also produce transcripts of these workers' calls, but workers with accents find them "riddled with errors." These are consequential errors, since their bosses assess their performance based on the transcripts, and yet another AI produces automated work scores based on them.

In other words, algorithmic management is a procession of bee-watchers, bee-watcher-watchers, and bee-watcher-watcher-watchers, stretching to infinity. It's junk science. It's not producing better call center workers. It's producing arbitrary punishments, often against the best workers in the call center.

There is no "countervailing benefit" to offset the unavoidable substantial injury of life under algorithmic management. In other words, algorithmic management fails all three prongs of the "unfair practice" test, and it's illegal.

What should we do about it? Bedoya builds the case for the FTC acting on workers' behalf under its "unfair practice" authority, but he also points out that the lack of worker privacy is at the root of this hellscape of algorithmic management.

He's right. The last major update Congress made to US privacy law was in 1988, when it banned video-store clerks from telling the newspapers which VHS cassettes you rented. The US is long overdue for a new privacy regime, and workers under algorithmic management are part of a broad coalition that's closer than ever to making that happen:

https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy

Workers should have the right to know what data is being collected about them, who it's being shared with, and how it's being used. We all should have that right. That's part of what motivated the actors' strike: actors were being ordered to wear mocap suits to produce data that could be used to create digital doubles of them – "training their replacement," except the replacement was a deepfake.

With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like Jan 6 rioters whose location was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters who experienced this far more often, and with far worse consequences, than the insurrectionists.

The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery.

The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#15yrsago Concordia University has a spy-squad that snooped on novelist for “bilingual interests” https://web.archive.org/web/20101119125330/http://artthreat.net/2009/11/concordia-university-spied-novelist/

#10yrsago DC cops budget their asset forfeiture income years in advance https://www.washingtonpost.com/investigations/dc-police-plan-for-future-seizure-proceeds-years-in-advance-in-city-budget-documents/2014/11/15/7025edd2-6b76-11e4-b053-65cea7903f2e_story.html

#10yrsago Analysis of leaked logs from Syria’s censoring national firewall https://www.techdirt.com/2014/11/26/lessons-censorship-syrias-internet-filter-machines/

#10yrsago The Shibboleth, the sequel to The Twelve Fingered Boy https://memex.craphound.com/2014/11/27/the-shibboleth-the-sequel-to-the-twelve-fingered-boy/

#10yrsago Tiny, transforming apartment made huge with massive wheeled storage-compartments https://vimeo.com/110871691

#5yrsago Open Memory Box: hundreds of hours of East German home movies, 1947-1990 https://open-memory-box.de/roll/013-06/00-00-41-20

#5yrsago Talking Adversarial Interoperability with Y Combinator https://www.youtube.com/watch?v=1RsI-Vh-KWI

#5yrsago Debullshitifying the Right to Repair excuses Apple sent to Congress https://www.ifixit.com/News/33977/apple-told-congress-how-repair-should-work-we-respond

#5yrsago NSO Group employees kicked off Facebook for spying for brutal dictators are suing Facebook for violating their privacy https://www.vice.com/en/article/nso-employees-take-legal-action-against-facebook-for-banning-their-accounts/

#5yrsago Amazon secretly planned to use facial recognition and Ring doorbells to create neighborhood “watch lists” https://theintercept.com/2019/11/26/amazon-ring-home-security-facial-recognition/

#5yrsago Great backgrounder on the Hong Kong protests: what’s at stake and how’d we get here? https://www.vox.com/world/2019/8/22/20804294/hong-kong-protests-9-questions

#5yrsago Apple poses a false dichotomy between “privacy” and “competition” https://www.washingtonpost.com/technology/2019/11/26/apple-emphasizes-user-privacy-lawmakers-see-it-an-effort-edge-out-its-rivals/

#5yrsago China wants to lead the UN’s World Intellectual Property Organization https://foreignpolicy.com/2019/11/26/china-bids-lead-world-intellectual-property-organization-wipo/

#1yrago The real AI fight https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
  • Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Today's progress: 766 words (88164 words total).
  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025


«A “reverse centaur” is a machine augmented by a human […] AI can’t do your job, but that your boss can be convinced to fire you and replace you with the AI that can’t do your job»

Some years ago there was a “near future” science fiction tale with exactly this theme.

  • It started with a version 1 “Employee operating system” that directed skilled workers to do a list of tasks, and then “learned” from the actions of the workers which steps were needed to do the tasks.
  • Then the skilled workers would be replaced by unskilled ones with the “employee operating system” version 2 that told them what to do step by step, and then learned how they performed the steps.
  • Then the unskilled workers would be replaced by robots with version 3 of the “employee operating system”, and the machines would perform all the steps.
  • Eventually all workers ended up on welfare in “welfare houses” where the food and water were drugged with mild sedatives and contraceptives.

I have been trying to find that story again, but it seems to have disappeared.

However, I do have a link to another story that is an earlier take on the theme, “Zero Hours”:

«AI has created a world of “algorithmic management” where humans are demoted to reverse centaurs, monitored and bossed about by an app.»

Actually it is not AI that created that, not even remotely: that situation has been created by a gigantic oversupply of workers, where there are billions of people worldwide desperate to get a job, even a brutal job with a miserly wage, because that is better than the alternative.

As the left-wing economist Joan Robinson said some decades ago, the only thing worse than being brutally exploited is not being exploited at all (that is, to be jobless).

If there is a gigantic oversupply of workers the only options are:

  • Select a small subset of workers, for example those born with USA citizenship, and insulate them fully from competition from other workers, blocking immigration and imports of services and manufactures from offshore.

  • Otherwise accept that any attempt to use the law to improve the working conditions of workers in the USA will result in more immigration or more offshoring of work to workers who are desperate to have any job at all.

«Taylorism is a perfect con to run on the wealthy and powerful. It feeds into their prejudice and mistrust of their workers, […] It’s producing arbitrary punishments, often against the best workers in the call center.»

The bosses are not really stupid: they know perfectly well that there is something even more important than higher worker productivity, and that is higher worker docility, and that is worth a lot of money.

Smart bosses know that seemingly irrational rules and punishments do not improve productivity but have the far more important effect of intimidating workers into docility, constantly reminding them that they must bend over because they need the job more than the boss needs them.

“Whipping” workers is not about increasing productivity: it is about making sure that workers understand in whose hand the “whip” is held. It is all about leverage.

Let me explain simply the difference between left and right wing:

right-wing: it takes 6 months to find a crummy job, it takes 6 months to find a hovel to rent.

left-wing: it takes 6 months to fill a job vacancy, it takes 6 months to find a tenant.

https://www.bloomberg.com/news/articles/2020-01-24/fed-struggles-to-nail-down-the-meaning-of-full-employment
«William Beveridge, who’s often described as the father of the U.K.’s modern welfare state, elaborated on Keynes’s ideas in a 1944 book titled Full Employment in a Free Society. Its central proposition was that “the market for labor should always be a seller’s market,” where “people actually feel empowered to say, ‘This job is crummy — I’m going to go and get a comparable job right across the street,’” says David Stein, a historian at the University of California at Los Angeles.»

B. De Mandeville “Essay on charity” (1724)
«The Plenty and Cheapness of Provisions depends in a great measure on the Price and Value that is set upon this Labour, and consequently the Welfare of all Societies, even before they are tainted with Foreign Luxury, requires that it should be perform’d by such of their Members as in the first Place are sturdy and robust and never used to Ease or Idleness, and in the second, soon contented as to the necessaries of Life; such as are glad to take up with the coursest Manufacture in every thing they wear, and in their Diet have no other aim than to feed their Bodies when their Stomachs prompt them to eat, and with little regard to Taste or Relish, refuse no wholesome Nourishment that can be swallow’d when Men are Hungry, or ask any thing for their Thirst but to quench it. […] If such People there must be, as no great Nation can be happy without vast Numbers of them, would not a Wise Legislature cultivate the Breed of them with all imaginable Care, and provide against their Scarcity as he would prevent the Scarcity of Provision it self?»


A person who has written a book about that is Joseph Turow, in:

The Voice Catchers:
How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet

It was a priority-one book for me to get to in 2021. In 2023 it came out in paperback (ISBN 9780300268164) and as a DRM-free audiobook on Libro.fm.

The book covers what marketers and companies think they have down pat (with what confidence level and margin of error, the companies don’t say; Joseph Turow just relays his findings from talks with companies in what he calls the voice intelligence industry, though he does have some good skepticism, having covered marketers for decades), such as determining whether a woman is pregnant from her voice. He also covers privacy in the above book, as the subtitle suggests, and privacy is the focus of the last two non-academic books he wrote for the general public.

I was curious if you or anyone here had more information to share or material to read about this? From a quick search, everything I could find mentioned that it did increase productivity, though there were a lot of other aspects to criticize about Taylor’s methods. Please don’t take me wrong, I’m not questioning that this is correct. I am genuinely interested in finding a good source for this and in learning more but I don’t really know where to start looking for it.


I don’t want any of those rights because a wise man once told me (it was you) that giving the powerless more rights is just like giving a bullied kid more lunch money. I want companies BANNED from collecting MY data outright. We spent most of recorded history without all this data and we can see pretty clearly now that the predominant uses are not in the interests of the people whose data is being stolen.
