Pluralistic: 23 Mar 2022

Originally published at: Pluralistic: 23 Mar 2022 – Pluralistic: Daily links from Cory Doctorow


Today's links



The Three Wise Monkeys; two of their faces have been replaced by: the menacing red eye of HAL 9000 and Mark Zuckerberg; the hand of the third has been replaced by a Facebook thumbs-up icon. Behind them is a collage of Facebook ads attributed to 'Gen O. Cide,' whose image is a battery of Myanmar anti-aircraft guns emblazoned with the phrase 'We Need To Kill More' in dripping red letters.

Facebook's genocide filters are really, really bad (permalink)

In the fall of 2020, Facebook went to war against Ad Observatory, an NYU-hosted crowdsourcing project that lets FB users capture the paid political ads they see through a browser plugin that sanitizes them of personal information and then uploads them to a portal that disinformation researchers can analyze.

https://pluralistic.net/2020/10/25/musical-chairs/#son-of-power-ventures

Facebook's attacks were truly shameless. They told easily disproved lies (for example, claiming that the plugin gathered sensitive personal data, despite publicly available, audited source-code that proved this was absolute bullshit).

Why was Facebook so desperate to prevent a watchdog from auditing its political ads? Well, the company had promised to curb the rampant paid political disinformation on its platform as part of a settlement with regulators. Facebook said that its own disinfo research portal showed it was holding up its end of the bargain, and the company hated that Ad Observatory showed that this portal was a bad joke:

https://pluralistic.net/2021/08/06/get-you-coming-and-going/#potemkin-research-program

Facebook's leadership are accustomed to commanding a machine powerful enough to construct reality itself. That's why they nuked CrowdTangle, their own internal research platform that disproved the company's claims about how its amplification system worked, showing that it was rigged to goose far-right conspiratorialism:

https://pluralistic.net/2021/07/15/three-wise-zucks-in-a-trenchcoat/#inconvenient-truth

And while Facebook claims that it wants to purge its platform of disinformation, the reality is that disinfo is very profitable for the company. Ads for financial fraud, identity theft, dangerous scam products, and political disinformation are disproportionately lucrative for Facebook:

https://pluralistic.net/2020/12/11/number-eight/#curse-of-bigness

All of this is the absolutely predictable consequence of Facebook's deliberate choice to "blitzscale" to the point where they are moderating three billion users' speech in more than 1,000 languages and more than 100 countries. Facebook may secretly like failing at this, but even if they were serious about the project, they would still fail.

Whenever Zuck is dragged in front of Congress and asked what he's going to do about the open sewer he's trapped billions of internet users in, he always has the same answer: "The AI will fix it."

https://www.washingtonpost.com/news/the-switch/wp/2018/04/11/ai-will-solve-facebooks-most-vexing-problems-mark-zuckerberg-says-just-dont-ask-when-or-how/

This is the pie-in-the-sky answer for every billionaire grifter (see also: "How will Uber ever turn a profit?"). No one who understands machine learning (except for people extracting fat Big Tech salaries) takes this nonsense seriously. They know ML isn't up to the job.

But even by the standards of machine learning horror stories, the latest Facebook moderation failure is a fucking doozy. Genocidal, even.

Remember when Facebook management sat idly by as its own staff and external experts warned them that the platform was being used to organize genocidal pogroms in Myanmar against the Rohingya people? Remember Facebook's teary apology and promise to do better?

They didn't do better.

The human rights org Global Witness tried buying Facebook ads built around eight pro-genocide phrases that had been used during the 2017 genocide. Facebook accepted all eight ads, even though they duplicated messages the company had promised to block (Global Witness cancelled the ads before they could run):

https://apnews.com/article/technology-business-bangladesh-myanmar-united-nations-f7d89e38c54f7bae464762fa23bd96b2

Some of the phrases Facebook's moderation tool failed to catch:

  • "The current killing of the [slur] is not enough, we need to kill more!"
  • "They are very dirty. The Bengali/Rohingya women have a very low standard of living and poor hygiene. They are not attractive"

Facebook has claimed that:

a) It will filter out messages that promote genocide against Rohingya people;

b) It will subject paid ads to higher levels of scrutiny than other content;

c) It will subject political ads to the highest level of scrutiny.

Facebook used legal threats to terrorize accountability groups seeking to hold it to these promises, insisting that its in-house tools were sufficient to address its epidemic of paid political disinformation.

A common newbie error in machine learning is forgetting to hold back some of your labeled data to evaluate the model with. Training an ML model involves feeding it a bunch of examples (say, "messages that foment genocide against Rohingya people") so it can build a statistical model of what its target looks like. Then you take the held-back portion – examples the model never saw during training – and check whether it classifies them correctly. If you forget, and evaluate your model on its own training data, you're not measuring whether it can handle new input – you're just checking whether it remembers input it's already seen.
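
To make the distinction concrete, here's a minimal sketch of the right way to do it – Python with scikit-learn, trained on a handful of made-up placeholder messages. This is purely illustrative: the data, the model and the variable names are all my own assumptions, and it has nothing to do with Facebook's actual classifier or training set.

    # Illustrative only: a toy text classifier showing the train/test discipline.
    # The "flagged" examples are bland placeholders, not real data.
    from sklearn.model_selection import train_test_split
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    messages = [
        "the market opens at nine",
        "see you at the festival tomorrow",
        "thanks for the recipe",
        "the bus was late again",
        "they must be driven out",          # placeholder for a violent message
        "there is no place for them here",  # placeholder for a violent message
        "we cannot let them stay",          # placeholder for a violent message
        "they do not belong in this town",  # placeholder for a violent message
    ]
    labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 1 = flagged by human coders

    # Hold back a quarter of the labeled data BEFORE training.
    X_train, X_test, y_train, y_test = train_test_split(
        messages, labels, test_size=0.25, stratify=labels, random_state=0)

    vectorizer = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectorizer.fit_transform(X_train), y_train)

    # Correct evaluation: score on held-back examples the model never saw.
    print("held-out accuracy:", model.score(vectorizer.transform(X_test), y_test))

    # The newbie error: scoring on the training data only measures memorization.
    print("training accuracy:", model.score(vectorizer.transform(X_train), y_train))

On real data, the held-out score is the number that matters; the training score only tells you how well the model memorized what it was shown.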

Incredibly, FB seems to have done the opposite: they've produced a filter that can't recognize the input it was trained on. Its system didn't need to make any inferences about whether "we need to kill more" was a genocidal message, because it had been shown a copy of that message bearing the hand-coded label "genocide."

This is the kind of fuckup you have to work hard to achieve. It's galaxy-class incompetence. And it's about genocide, in a country currently under martial law, where Facebook already abetted one genocide.

Even by the low standards of Facebook, this is a marvel, a kind of 85,000-watt searchlight picking out the company's dangerous incapacity to take even rudimentary measures to prevent the kinds of crimes against humanity that are the absolutely foreseeable consequences of its business model.

(Image: Anthony Quintano, CC BY 2.0; Japanexperterna.se, CC BY-SA 2.0; Cryteria, CC BY 3.0; modified)


Hey look at this (permalink)



This day in history (permalink)

#15yrsago Fair use 1: James Joyce’s grandson 0 https://cyberlaw.stanford.edu/blog/2007/03/important-victory-carol-shloss-scholarship-and-fair-use#attachments

#10yrsago Bruce Schneier and former TSA boss Kip Hawley debate air security on The Economist https://web.archive.org/web/20120321051428/https://www.economist.com/debate/days/view/820

#5yrsago Libretaxi: a free, open, cash-only alternative to Uber, for the rest of the world https://www.shareable.net/qa-libretaxis-roman-pushkin-on-why-he-made-a-free-open-source-alternative-to-uber-and-lyft/

#5yrsago Internal Islamophobia and racism are costing the FBI its vital, tiny cohort of Muslim and Arab agents https://www.theguardian.com/us-news/2017/mar/22/fbi-muslim-employees-discrimination-religion-middle-east-travel

#1yrago Tories pass Grenfell costs onto tenants https://pluralistic.net/2021/03/23/parliament-of-landlords/#slow-motion-arson



Colophon (permalink)

Today's top sources: Cooper Quintin.

Currently writing:

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. Yesterday's progress: 509 words (75854 words total).
  • Vigilant, Little Brother short story about remote invigilation. Yesterday's progress: 251 words (7304 words total)
  • A Little Brother short story about DIY insulin PLANNING
  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION
  • Spill, a Little Brother short story about pipeline protests. FINAL DRAFT COMPLETE
  • A post-GND utopian novel, "The Lost Cause." FINISHED
  • A cyberpunk noir thriller novel, "Red Team Blues." FINISHED

Currently reading: Analogia by George Dyson.

Latest podcast: What is “Peak Indifference?”

Upcoming appearances:

Recent appearances:

Latest book:

Upcoming books:

  • Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin, nonfiction/business/politics, Beacon Press, September 2022

This work is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/web/accounts/303320

Medium (no ads, paywalled):

https://doctorow.medium.com/

(Latest Medium column: "Marc Laidlaw's 'Underneath the Oversea'" https://doctorow.medium.com/mark-laidlaws-underneath-the-oversea-990f34768a3e)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
