Pluralistic: 26 Mar 2021

Originally published at: https://pluralistic.net/2021/03/26/overfitness-factor/


Today's links



Dreaming and overfitting (permalink)

I'm not the first person to note that our understanding of ourselves and our society is heavily influenced by technological change – think of how we analogized biological and social functions to clockwork, then steam engines, then computers.

I used to think of this as just a way of understanding how we get stuff hilariously wrong – think of Taylor's Scientific Management, whose grounding in mechanical systems inflicted such cruelty on workers by demanding that they ape those mechanisms.

But just as interesting is how our technological metaphors illuminate our understanding of ourselves and our society: because there ARE ways in which clockwork, steam power and digital computers resemble bodies and social structures.

Any lens that brings either into sharper focus opens the possibility of making our lives better, sometimes much better.

Bodies and societies are important, poorly understood and deeply mysterious.

Take sleep. Sleep is very weird.

Once a day, we fall unconscious. We are largely paralyzed, insensate, vulnerable, and we spend hours and hours having incredibly bizarre hallucinations, most of which we can't remember upon waking. That is (objectively) super weird.

But sleep is nearly universal in the animal kingdom, and dreaming is incredibly common too. A lot of different models have been proposed to explain our nightly hallucinatory comas, and while they have some explanatory power, they also have glaring deficits.

Thankfully, we've got a new hot technology to provide a new metaphor for dreaming: machine learning through deep neural networks.

DNNs, of course, are a machine learning technique that comes from our theories about how animal learning works at a biological, neural level.

So perhaps it's unsurprising that DNNs – based on how we think brains work – have stimulated new hypotheses about how brains work!

Erik P Hoel is a Tufts University neuroscientist. He's a proponent of something called the Overfitted Brain Hypothesis (OBH).

To understand the OBH, you first have to understand how overfitting works in machine learning: "overfitting" is what happens when a statistical model fits its training data too closely – latching onto noise and spurious correlations – so that it fails to generalize to new data.

For example, if Tinder photos of queer men are highly correlated with a certain camera angle, then a researcher might claim to have trained a "gaydar model" that "can predict sexual orientation from faces."

That's overfitting (and researchers who do this are assholes).

Overfitting is a big problem in ML: if all the training pics of Republicans come from rallies in Phoenix, the model might decide that suntans are correlated with Republican politics – and then make bad guesses about the politics of subjects in photos from LA or Miami.
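The mismatch is easy to reproduce. Here's a minimal sketch (plain NumPy; the toy data is my own invention, not from Hoel's paper): a flexible model driven to a lower training error than a simple one on a handful of noisy points, precisely because it chases noise the simple model ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the true relationship is y = x, observed with a little noise.
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)

def train_mse(degree):
    """Fit a polynomial of the given degree; return its training error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

simple_err = train_mse(1)   # matches the true, linear relationship
overfit_err = train_mse(6)  # enough freedom to chase the noise

# The flexible model always "wins" on the data it has already seen...
print(overfit_err <= simple_err)  # True
# ...but the wiggles it fit are noise, so it typically does worse on
# new data – exactly the kind of spurious pattern described above.
```

The suntan/Republican trap is the same phenomenon in higher dimensions: the model's extra capacity goes toward memorizing accidents of the training set.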

To combat overfitting, ML researchers sometimes inject noise into the training data, in an effort to break up these spurious correlations.
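A minimal sketch of that trick (plain NumPy; the toy data and the `augment_with_noise` helper are my own invention for illustration): each training point is replicated at slightly jittered inputs, so a flexible model can no longer profit from threading a curve through individual points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the true relationship is y = x, observed with a little noise.
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)

def augment_with_noise(x, y, copies=20, jitter=0.05):
    """Noise injection: replicate each training point at jittered inputs.

    Wiggles that thread through individual points stop paying off,
    because every point now appears in many slightly different places.
    """
    xs = np.concatenate([x + rng.normal(0, jitter, size=x.shape)
                         for _ in range(copies)])
    ys = np.tile(y, copies)
    return xs, ys

x_aug, y_aug = augment_with_noise(x_train, y_train)

plain = np.polyfit(x_train, y_train, 6)   # free to chase the noise
smoothed = np.polyfit(x_aug, y_aug, 6)    # trained on the noisy copies

def mse(coeffs):
    """Error of a fit, measured on the original ten training points."""
    return np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# The noise-trained model gives up some training-set accuracy – by
# construction, `plain` is the best possible degree-6 fit to these ten
# points – in exchange for a smoother, better-generalizing curve.
print(mse(smoothed) >= mse(plain))  # True
```

Input jitter is one member of a family of regularizers (alongside dropout and data augmentation); Hoel's proposal is that dreams play the analogous role for brains.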

And that's what Hoel thinks our brains are doing while we sleep: injecting noisy "training data" into our conceptions of the universe so we aren't led astray by spurious correlations.

Overfitting is a real problem for people (another word for "overfitting" is "prejudice").

Hoel advances this argument in a fascinating, short, accessible 2020 open-access arXiv paper called "The Overfitted Brain: Dreams evolved to assist generalization."

https://arxiv.org/pdf/2007.09560.pdf

The paper demonstrates how the OBH resolves mysteries left open by previous theories of dreaming. For example, it explains why dream-deprived subjects' performance degrades on generalization tasks (which require extrapolation) but not on rote tasks.

I learned about the paper from Peter Watts, an evolutionary biologist with a knack for turning scientific concepts into revelatory plot elements. His depiction (in MAELSTROM, 2002) of human/computer pathogenic co-evolution haunts me.

http://locusmag.com/2018/05/cory-doctorow-the-engagement-maximization-presidency/

Watts's blog-post on Hoel's paper is a great breakdown of the explanatory power of OBH, including (especially) why dreams are so weird – a proposed solution to one of the enduring scientific mysteries that dreams create.

https://www.rifters.com/crawl/?p=9844

Watts connects Hoel's work to another paper, this one studying lucid dreaming, in which researchers are able to have two-way conversations with lucid dreamers while they are dreaming (!):

https://www.cell.com/current-biology/fulltext/S0960-9822(21)00059-2

In very Wattsian fashion, he wonders what this kind of injection of rationality into dreams might do to cognition, if Hoel is right and the irrationality is a feature, not a bug. You can see the beginnings of another banger of a sf premise stirring there.

Hoel is also an sf writer, as it turns out, and his debut novel, THE REVELATIONS, drops in mid-April: a murder mystery about "neuroscience, death, and the search for the theory of human consciousness."

https://www.erikphoel.com/

(Image: Gontzal García del Caño, CC BY-NC-SA, modified)



Dirty NYPD cops can't lose (permalink)

The NYPD is a notoriously corrupt institution, whose indiscriminate acts of violence and murder have steadily worsened for decades. A powerful police union and a cowed City Hall ensure that even the worst cops rarely have any kind of reckoning.

After a series of legal wrangles – a New York state law, a lawsuit by the police union, and ProPublica's brave decision to publish – we finally got a glimpse at the buried horrors in the NYPD disciplinary files.

https://pluralistic.net/2020/07/27/ip/#nypd-who

We also learned about the impunity enjoyed by dirty cops, including the cops who were caught on camera breaking the law to brutalize and maim protesters in last summer's BLM uprising.

https://pluralistic.net/2021/03/18/news-worthy/#nypd-black-and-blue

However, there are instances of police abuse that are so egregious and well-documented that the officers involved face some kind of consequences.

For example, Officer Vincent D'Andraia faces criminal charges and a civil suit after he was recorded brutalizing Dounya Zayer last summer.

https://twitter.com/JasonLemon/status/1266529475757510656

NYC's Law Department has announced that it won't provide D'Andraia with a lawyer. That may sound like he's being cut loose, but as a joint article by Jake Pearson for THE CITY and ProPublica explains, that isn't true.

https://www.thecity.nyc/2021/3/26/22351475/nypd-union-contract-defend-officers-when-the-city-wont

That's because the city's contract with the NYPD's union mandates the creation and funding of a secretive slush fund that is used to hire white-shoe, high-powered private-sector lawyers to defend cops so dirty the City's own lawyers won't touch them.

The deal has been in place since 1985, and it requires the city to divert $75 per officer ($2m/year) into a defense fund that cops get to dip into "when the City of New York fails or otherwise refuses to provide a legal defense."

https://www.documentcloud.org/documents/20509353-pba_civil_legal_representation_fund_8112_73117

Nominally, this fund is off limits in cases "directly or indirectly adverse to the interests of the City," but this is meaningless: when someone sues over police brutality, the City is usually a co-defendant, so defending the dirty cop is in the City's interest.

The City's contract with the Police Benevolent Association – the NYPD's union – expired in 2017 and will likely be renegotiated by whoever wins the upcoming NYC mayoral race.

As Pearson notes, the $75/officer fund has become standard – Rikers' guards and police brass all got similar deals after the PBA deal was struck.

These deals mean that even when cops and guards commit offenses so grotesque the City won't defend them, NYC's taxpayers do.

Police reform is a central issue in the mayoral race. NYC pays out hundreds of millions of dollars every single year to settle claims against its officers, but its contracts with the PBA make those officers not just un-fireable but immune to any consequences.

(Image: Teresa Shen, CC BY)



This day in history (permalink)

#15yrsago DRM is Killing Music https://www.voidstar.com/node.php?id=2686

#15yrsago Swisscom WiFi at London conference centre costs $838.73/24h https://web.archive.org/web/20060329090917/https://benhammersley.com/FCE47259-78BA-4B5E-ABF2-F39B93520C85/Blog/C9043A4D-F791-4B7F-A8A7-3484779B4748.html

#15yrsago Most expensive Google ad keywords https://web.archive.org/web/20060325094245/http://www.cwire.org/2006/03/23/updated-highest-paying-adsense-keywords/

#15yrsago LA Times slams Marvel for trying to steal “superhero” https://www.latimes.com/archives/la-xpm-2006-mar-26-ed-superhero26-story.html

#5yrsago Jerks were able to turn Microsoft’s chatbot into a Nazi because it was a really crappy bot https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot

#5yrsago What you think about Millennials says a lot about you, nothing about them https://www.mic.com/articles/138525/comedian-nails-the-one-simple-thing-adults-can-do-to-connect-with-young-people

#1yrago Sanders on GOP stimulus cruelty https://pluralistic.net/2020/03/26/badger-masks/#unlimited-cruelty

#1yrago Canada nationalizes covid patents https://pluralistic.net/2020/03/26/badger-masks/#c13

#1yrago The ideology of economics https://pluralistic.net/2020/03/26/badger-masks/#piketty



Colophon (permalink)

Currently writing:

  • My next novel, "The Lost Cause," a post-GND novel about truth and reconciliation. Yesterday's progress: 682 words (120480 total).
  • A cyberpunk noir thriller novel, "Red Team Blues." Yesterday's progress: 1000 words (40443 total).

Currently reading: Analogia by George Dyson.

Latest podcast: Free Markets https://craphound.com/podcast/2021/03/22/free-markets/
Upcoming appearances:

Recent appearances:

Latest book:

Upcoming books:

  • The Shakedown, with Rebecca Giblin, nonfiction/business/politics, Beacon Press 2022

This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/web/accounts/303320

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
