Pluralistic: 30 Jan 2022



Today's links



A giant LAN party (Assembly Helsinki 2011) in which every screen has been replaced with the glowing red eye of HAL9000 from 2001: A Space Odyssey.

The battle for Ring Zero (permalink)

In Bruce Sterling's Locus review of my novel Walkaway, he describes the book as "advancing and demolishing potential political arguments that have never been made by anybody but [me]." That is a fair cop. I spend a lot of time worrying about esoteric risks that no one else seems to care about.

https://locusmag.com/2017/06/bruce-sterling-reviews-cory-doctorow/

For example: for twenty years now, I have been worrying about the shifting political, technical and social frameworks that govern our fundamental relationships to the computers that are now woven into every aspect of our civil, personal, work and family lives.

Specifically, I'm worried that as computers proliferate, so too do the harms in which computers are implicated, from cyberattacks to fraud to abuse to blackmail to harassment to theft. That's partly because when a computer is in your door-lock, burglaries will increasingly become cybercrimes. But it's also partly because of something fundamental to the nature of computers: their infinite configurability.

At its very foundational level, the modern computer is "general purpose." Every computer we know how to make can run every program we know how to write. That's why computers are so powerful and so salient: computers can do so many things, and any advance in computing power and efficiency ripples out to all the things computers can do. Investment in improvements to computers used in cars results in advances to computers used in fitness trackers, thermostats and CCTV cameras.

That general-purposeness is a double-edged sword. On the one hand, it means that we don't have to invent a whole new kind of computer to power an appliance like a printer. On the other hand, it means that our printers can all run malware:

https://www.youtube.com/watch?v=njVv7J2azY8

On balance, and without minimizing their harms and risks, I am in favor of general-purpose computers. Partly, that's because I think general-purpose computers' contributions to our lives and civilization outweigh the problems of general-purposeness. But, even more importantly, I think that the collateral damage of trying to remove general-purposeness is infinitely worse than even the worst problems created by general-purposeness.

https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/

Here's why: we don't actually know how to make a computer that can run some programs, but not all of them. Rather than invent that impossible computer, people who want to solve the problem of general-purposeness approximate it by creating computers that are capable of running "bad" programs, but refuse to do so.

There is a vast, important difference between a computer that's not capable of running unauthorized programs and a computer that refuses to run unauthorized programs. The former is an appliance, while the latter is a device that treats its owner and users as potential attackers whose orders can be countermanded by the device's manufacturer.

I can't stress enough how important this distinction is. Designing a computer that treats the person who depends on it as an attacker is a terrible, nightmarish idea. It is literally the principle that animates the first dystopian science fiction tale: Mary Shelley's "Frankenstein; or, The Modern Prometheus," a 200-year-old novel whose lessons we have still not learned.

20 years ago, some Microsoft security engineers came up with a clever thought-experiment about how computers might be redesigned to prevent malicious software from attacking their users. They called it "Palladium" (or, more formally, the "Next Generation Secure Computing Base").

https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil

They proposed soldering a second, sealed, cryptographic co-processor onto your computer's motherboard. This co-processor – the Palladium chip – would be tamper-resistant, designed to self-destruct if you attempted to decap it or remove it from the board (literally – it would contain brittle, acid-filled compartments that would burst and ruin the chip if you tried).

This cryptographic co-processor could perform two important functions.

First, it could serve as a source of ground truth for your computer. The chip could observe all parts of the boot-up process and create cryptographic signatures denoting which code was loaded at each stage. When your computer was finished booting, the OS could ask the co-processor, "Was I tampered with? Did I boot the original manufacturer's OS, or did someone tamper with me in ways that might blind me to my own processes?" That is, your computer could determine if it was a head in a jar (or a body in the Matrix) or whether it could trust its senses. It could know whether it was running on "bare metal" or inside a virtual machine that could feed it false telemetry that might compromise its user's security.
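To make that concrete, here's a minimal sketch (in Python, with hypothetical stage names – not any real scheme's actual format) of the hash-chaining these designs rely on: each boot stage is folded into a running measurement before it runs, so the final value commits to the entire boot sequence, and tampering with any stage changes every value after it:

```python
import hashlib

def extend(measurement: bytes, stage_code: bytes) -> bytes:
    """Fold the next boot stage into the running measurement, the way a
    TPM "extends" a Platform Configuration Register: new = H(old || H(stage))."""
    return hashlib.sha256(
        measurement + hashlib.sha256(stage_code).digest()
    ).digest()

# Hypothetical stages; on a real machine these are the actual firmware,
# bootloader and kernel images as they're loaded.
boot_stages = [b"firmware-v1.2", b"bootloader-v3.4", b"kernel-v5.6"]

measurement = b"\x00" * 32  # the register starts zeroed at power-on
for stage in boot_stages:
    measurement = extend(measurement, stage)

# Compare against the value expected for a known-good boot. Because the
# chain is one-way, you can't fake a clean measurement after the fact.
print(measurement.hex())
```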

Second, the secure co-processor could answer challenges from remote parties that wanted to know whether they could trust your machine before they communicated with it. Say I want to send you a Signal message: I trust that Signal is encrypted end-to-end and that means that no one between me and you can read the message in transit. But how can I know if your computer is safe? Maybe it's got malware running on it that will steal the messages after your computer decrypts them and send them to our mutual enemy.

With Palladium, I can send your computer's secure co-processor a random number (called a "nonce," which inevitably confuses British people), and ask it to combine that random number with the manifest of the boot components and OS it observed during the computer's boot, sign it with its secret, private key, and send it back.
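Here's a minimal sketch of that challenge/response in Python (using the pyca/cryptography library; the device key and boot manifest are hypothetical stand-ins for values a real co-processor would keep in tamper-resistant hardware):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- inside the (hypothetical) secure co-processor ---
device_key = Ed25519PrivateKey.generate()  # in reality, burned in at manufacture
boot_manifest = b"hashes-of-firmware-bootloader-and-OS"  # from the measured boot

def attest(nonce: bytes) -> bytes:
    # Sign the challenger's nonce together with the boot manifest: the
    # manifest says what booted, the nonce proves the answer is fresh.
    return device_key.sign(nonce + boot_manifest)

# --- on the challenger's side ---
nonce = os.urandom(16)
signature = attest(nonce)
# verify() raises InvalidSignature if the manifest or nonce was forged
device_key.public_key().verify(signature, nonce + boot_manifest)
print("attestation verified")
```

The nonce is what guarantees freshness: without it, a compromised machine could simply replay a signed manifest recorded back when it was still honest.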

That signed manifest lets me do something I could never do before: discover what kind of computer someone else is running, without actually inspecting that computer. Provided I trust the secure co-processor, I can know which OS my counterparty is using. Provided I trust that OS, I can know whether my counterparty's computer will leak the secrets I've sent to them over Signal.

Cool, right?

I think it is cool. This process – called "remote attestation" – is a new theoretical capability for computers, one that is especially useful in an environment in which we communicate over great distances with computers we have no control over. The fact that those computers can run every program – including malicious ones – makes remote attestation even more salient, as it might let us detect and exclude computers that have been compromised from our networked activities.

But it's also incredibly risky. Fundamentally, this security model involves creating computers that override their users and owners, running software that the person using the computer cannot disable or even detect. It addresses the fact that users – or processes masquerading as users – can install bad software on our computers, and it does so by creating a sealed, low-level controller that can interdict user-level processes.

But how can we trust those sealed, low-level controllers? What if a manufacturer – like, say, Microsoft, a convicted monopolist – decides to use its low-level controllers to block free and open OSes that compete with it? What if a government secretly (or openly) orders a company to block privacy tools so that it can spy on its population? What if the designers of the secure co-processor make a mistake that allows criminals to hijack our devices and run code on them that, by design, we cannot detect, inspect, or terminate?

That is: to make our computers secure, we install a cop-chip that decides which programs we can run and which it will halt. To keep bad guys from bypassing the cop-chip, we design our computer so it can't see what the cop-chip is doing. So what happens if the cop-chip is turned on us?

This literally keeps me up at night. It is such an obviously terrible idea to build a world of ubiquitous computers that we put our bodies inside of, that we put inside our bodies, that we trust our whole civilization to, that are all designed to run programs we can't see or halt.

Fast forward from Palladium to today. All the risks of secure computing have come to pass. I could give examples from lots of companies' products, but I'm going to stick with Apple. Why Apple? Because they have a huge, talented security engineering division and they have effectively infinite capital. If any company could do secure computing right, it would be Apple. But:

  • Eight years' worth of Apple's secure enclaves are unpatchably compromised:

https://checkm8.info/

  • Apple uses its security measures to block its competitors' app stores:

https://cydia.saurik.com/

  • Apple caved to Chinese state pressure and blocked all working privacy tools from its App Store to enable mass surveillance of Chinese users:

https://www.reuters.com/article/us-china-apple-vpn/apple-says-it-is-removing-vpn-services-from-china-app-store-idUSKBN1AE0BQ

Now, Apple's secure computing infrastructure isn't nearly as comprehensive as the Palladium proposal. As far as I can tell, no one is doing the "full Palladium." But Apple's version of Palladium shares the same foundational problems as Palladium itself, and has some new ones as well.

Apple doesn't use separate secure co-processors to do remote-attestation and check for unauthorized software. Instead, it uses a "secure enclave," which is basically a subsection of the main chip that is subject to heightened testing and design constraints in order to make it as secure as possible. Like a co-processor, the secure enclave is designed to be both inscrutable (users can't inspect or terminate its processes) and immutable (users can't change it).

This means that, by design, any time someone finds and exploits a defect in a secure enclave, the exploit can operate in ways that users can't detect or stop. It also means that there is no way to remediate a defect in a secure enclave: if you can patch a secure enclave to fix a bug, then an adversary could patch it to introduce an exploitable bug.

Like Palladium, secure enclaves are break-once, break-everywhere, break-forever. They have forever-day bugs. But unlike Palladium, secure enclaves are not physically separate from the main processor, making them easier to attack and exploit.

But remember, the most significant attacks on Apple's security were accomplished with Apple's help. Apple uses its security infrastructure to keep users from switching App Stores, and also runs an App Store that, until recently, allowed stalkerware apps and blocked anti-stalkerware apps:

https://www.digit.fyi/stalkerware-apple-victims-comment/

Likewise, when the Chinese government decided to ban its residents from using VPNs to hide their activity from state surveillance, Apple was a willing collaborator, and Apple – not the Chinese state – blocked those privacy tools.

The "insider threat" from this secure computing model is all around us. The fact that one computer can't force another computer to truthfully disclose its operating environment as a condition of further communication helps criminals, but it also helps anyone on the wrong end of a power-imbalance.

Your boss can put a camera in your office, but they can't watch you through the camera in your laptop. Your school district can monitor your in-class conversations, but they can't monitor your networked conversations. Your government can subpoena your communications, but they can't recover the chats you've deleted. Your abusive spouse can make you show them your saved messages, but they can't follow you around when they're at work and find out that you're talking to a shelter.

If your boss or your school district or the cops or the government or your spouse or your parents can detect your operating system at a distance – if they can give it orders that you can't countermand, dictating which software you can run – then your computer becomes their computer. Everything your computer knows about you – or can know about you – they can know, too. They can do it at scale. They can store deep histories and go back to them later when you arouse their suspicion. They can automatically parse through the stream and algorithmically ascribe guilt to you and punish you.

In short, if you're worried about "Disciplinary Technology," you should be really worried about this secure computing model.

Now, it's been 20 years since the first Palladium paper and no one is doing a full Palladium. The incomplete Palladium approximations in the field have only rarely been leveraged for insider attacks.

I think that's a matter of political economy, not technology. The technologists who appreciate the power of remote attestation and secure bootloaders are also largely skeptical of them, because they can't be patched and because they can be abused. As a body, these technologists stand up for the right of computer users to understand and alter how their computers work. They defend the right of researchers to disclose defects in computers – even widely used computers – and reject the idea of "security through obscurity."

Generally, I think of this as "my side" in this fight. This is the side that rejects disciplinary technology used for trivial or bad purposes.

But not everyone on my side is as foursquare against this stuff as I am. Many of them make exceptions, for example, for corporate compliance systems (to prevent insiders from stealing customer data), or to keep unsophisticated users from installing malicious apps.

I'm a lot more dogmatic about this stuff. Partly, that's because even the "good" uses are ripe for abuse. The same tool that stops an employee from stealing user data also stops whistleblowers from gathering evidence of corporate crimes. App stores that permit stalkerware and block anti-stalkerware aren't doing anything for their users' security.

But let's say that we could fix all that stuff (I don't think we can). I would still worry about this security model because of what it would do to the culture of security research and information security overall.

For three decades, security researchers and corporations have sparred over disclosure of defects in digital products and services. The companies argue that they should have a veto over who gets to warn their customers when their products are found defective. They insist that they need this veto, because it lets them fix the bugs before they're disclosed. That may sound reasonable, but in practice, companies routinely fail to fix those bugs or warn their users that they are at risk. Companies are terminally compromised when it comes to bad news about their own products.

Thankfully, we largely operate in an environment in which anyone can disclose true facts about defective products. That's thanks to a combination of the First Amendment and lawyers at organizations like EFF who defend programmers, but also thanks to the generally unified position among technologists that security through obscurity is bankrupt.

That's where the real worry comes in. Recall that, by design, secure enclaves and cryptographic co-processors can't be updated – they are break-once, break-forever systems. Any bug in one of these is a forever-day vulnerability.

And an increasing number of applications favored by tech-minded people are heading towards this security model. It's not just compliance and protecting naive users. These days, there's a lot of action in the anti-cheat realm.

Gamers and technologists have a lot of overlap, and cheats ruin games. What's more, the e-sports world has turned these cheats from annoyances into multi-million-dollar spoilers. I worry that we're going to see more and more people switching from the "computers should obey their users" camp to the "computers should control their owners" side.

The cheat/anti-cheat war is getting closer and closer to that model. The rise of "kernel hacks" and "kernel defenses" has moved the fight into "Ring Zero" – the lowest (accessible) level of the computer's operating system. The natural next step? "Ring Minus One" – secure processing modules.

https://www.wired.com/story/kernel-anti-cheat-online-gaming-vulnerabilities/

The worst part of all this is that none of it will accomplish its goals. No matter how much you lock down a computer, you will never prevent cheats like this one:

https://arstechnica.com/gaming/2021/07/cheat-maker-brags-of-computer-vision-auto-aim-that-works-on-any-game/

I've been pointing this out to "computer controls the user" advocates for twenty years, and they always have the same answer: "It may not stop bad guys, but it keeps honest users honest." As Ed Felten wrote 19 years ago, "Keeping honest users honest is like keeping tall users tall."

https://freedom-to-tinker.com/2003/03/06/keeping-honest-people-honest/

Esoteric as all this stuff might be, it really worries me. Switching to a default assumption that our computers should control us, not the other way around, is a terrifying, dystopian nightmare. And it's a live issue: Apple is making exactly this argument to Congress right now:

https://www.cnbc.com/2022/01/18/apple-says-antitrust-bills-increase-risk-of-iphone-security-breaches.html

(Image: Cryteria, CC BY 3.0, modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg
https://creativecommons.org/licenses/by/3.0/deed.en



This day in history (permalink)

#20yrsago The guy who stole sex.com says he can't afford a lawyer https://web.archive.org/web/20020201182327/https://www.wired.com/news/business/0,1367,50104,00.html

#20yrsago Wired Wellington, a citywide, publicly owned fiber network https://web.archive.org/web/20020207131813/http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2841197,00.html

#5yrsago Google quietly makes “optional” web DRM mandatory in Chrome https://news.ycombinator.com/item?id=13514415

#5yrsago Beyond the Trolley Problem: Three realistic, near-future ethical dilemmas about self-driving cars http://rodneybrooks.com/unexpected-consequences-of-self-driving-cars/

#5yrsago “Work ethic”: Minutes from 2011 meeting reveal Federal Reserve bankers making fun of unemployed Americans https://theintercept.com/2017/01/27/federal-reserve-bankers-mocked-unemployed-americans-behind-closed-doors/

#1yrago Understanding the aftermath of r/wallstreetbets https://pluralistic.net/2021/01/30/meme-stocks/#stockstonks

#1yrago Thinking through Mitch McConnell's plea for comity https://pluralistic.net/2021/01/30/meme-stocks/#comity

#1yrago Further, on Mitch McConnell and comity https://pluralistic.net/2021/01/30/meme-stocks/#no-seriously



Colophon (permalink)

Currently writing:

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. Friday's progress: 529 words (57078 words total).
  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. Friday's progress: 413 words (3380 words total).
  • A Little Brother short story about remote invigilation. PLANNING
  • A Little Brother short story about DIY insulin. PLANNING
  • Spill, a Little Brother short story about pipeline protests. SECOND DRAFT COMPLETE
  • A post-GND utopian novel, "The Lost Cause." FINISHED
  • A cyberpunk noir thriller novel, "Red Team Blues." FINISHED

Currently reading: Analogia by George Dyson.

Latest podcast: Science Fiction is a Luddite Literature (https://craphound.com/news/2022/01/10/science-fiction-is-a-luddite-literature/)

Upcoming appearances:

Recent appearances:

Latest book:

Upcoming books:

  • Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin, nonfiction/business/politics, Beacon Press, September 2022

This work is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/web/accounts/303320

Medium (no ads, paywalled):

https://doctorow.medium.com/

(Latest Medium column: "A Bug in Early Creative Commons Licenses Has Enabled a New Breed of Superpredator" https://doctorow.medium.com/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator-5f6360713299)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
