AI and Leviathan: Part I

Imagine a breakthrough happened and suddenly everyone had access to cheap, x-ray-style glasses. The glasses look like normal, everyday glasses, but come with a dial setting that lets you see through walls, people's clothing, etc. They somehow work on old photos and video recordings too.

On one level, this would be amazing. You might notice the mysterious lump on your friend’s thyroid, say, catching their cancer early and saving them untold medical costs. But in the immediate term, universal and near-undetectable access to the glasses (or contact lenses, if you prefer) would be a security and privacy disaster. No one's home or device security would be compromised per se. Rather, it would be as if a society designed around the visible light spectrum had become maladapted overnight.

There are three canonical ways that society could respond:

  1. Cultural evolution: we embrace nudism and a variety of new, post-privacy norms;
  2. Mitigation and adaptation: we start wearing lead underwear and scramble to retrofit our homes, office buildings, and locker rooms with impenetrable walls;
  3. Regulation and enforcement: we ban or tightly regulate the technology and build an x-ray Leviathan for inspecting people's glasses, punishing violators, etc.

The option where everyone spontaneously coordinates to never use the glasses, or to only use them for a subset of pro-social purposes, is unstable. Even if you're a voyeur and access to the glasses benefits you personally, there's an underlying prisoner's dilemma: using the glasses is the dominant strategy no matter what anyone else does, so we quickly shift to the equilibrium where everyone has them, even if we would all have preferred the world without them.

The glasses are a metaphor for Artificial Intelligence.

It’s barely a metaphor. AI already unlocks a kind of x-ray vision by reading how bodies reflect and distort the WiFi signals in a room, and with a few IR sensors you’ll eventually be able to interpolate accurate nude bodies onto every pedestrian in Times Square. The nudist AR filter won’t be allowed in the Apple Vision Pro app store, but the basic technology will be sufficiently cheap and open source to make that a moot point.

Signal extraction

AI is a microscope. It increases the information resolution of the universe. This has many amazing implications but also many scary ones.

AI lets us refactor legacy code, restore ancient scrolls, and detect new galaxies in old astronomical surveys. At the same time, AI turns your gait into a fingerprint, a microphone into a keylogger, and a snapshot of your face into a more than 93% accurate polygraph test.

Jiaqi Geng et al. (2022) “DensePose from WiFi”


While deepfakes are a growing concern, AI is overall making the world radically less opaque. Perfect transparency has costs of its own. Knowledge will expand and barriers to manipulating the world will decrease. We will literally hear the cries of stressed-out flowers and have conversations with whales. Then, one day, an AI model for completing your sentences will learn to finish your thoughts, smashing the subjective divide that much of Western philosophy assumed was impenetrable.

Many of these use-cases aren’t new, but are only now becoming reliable thanks to scaling. If there is signal in the noise, a big enough neural network will extract it, and in a way that’s getting exponentially cheaper and easier to access over time. You won’t even have to pip install! Your Jarvis-like AI assistant will simply call up bespoke models from HuggingFace or GitHub on demand. For the D&D players out there, it will be like an amulet that grants its wearer a natural 20 on every arcana check.
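To give a sense of how little friction is already involved, here is a minimal sketch of that on-demand pattern using the Hugging Face transformers library. The routing table and request strings are made-up placeholders for whatever an assistant might infer from a conversation; only the pipeline() call is real API, and it assumes transformers (with a torch backend) is installed.

```python
# Minimal sketch: an "assistant" maps a natural-language request to a standard
# Hugging Face pipeline task and pulls a suitable model from the Hub on demand.
# ROUTES and the example requests are illustrative placeholders, not a real product.
from transformers import pipeline

ROUTES = {
    "what's the mood of this passage?": "sentiment-analysis",
    "describe what's in this photo": "image-to-text",
}

def arcana_check(request: str, payload):
    """Look up a task for the request, fetch a default model, and run it."""
    task = ROUTES[request]
    tool = pipeline(task)  # downloads default weights for the task on first use
    return tool(payload)

if __name__ == "__main__":
    print(arcana_check("what's the mood of this passage?", "The flowers are screaming."))
```

The specific library matters less than the general point: the layer between "I want this inferred" and a working model is already only a few lines thick, and it keeps getting thinner.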

This sounds liberating, and in many ways it will be. But history also suggests that enhancements to our natural liberty can be paradoxically oppressive, at least in the short run.

To be clear, what follows is not an argument for pausing AI development, even if we could. While I think we will need new oversight mechanisms for deploying frontier models safely as their scale and power ramp up, our goal should not be to stop AI but rather, in some sense, to master it.

If anything, accelerating the different modalities of defensive AI may be our best hope for making the “mitigation and adaptation” option the path of least resistance. Nevertheless, it’s important to be clear about the many ways in which our AI future is a package deal, and to draw out the implications for the likely evolution of our social order without prejudice.

The State of Nature

Thomas Hobbes famously described life in the “state of nature” as a “war of all against all.” This is sometimes mistaken for a claim about hunter-gatherer societies, but Hobbes was actually alluding to the catastrophic conditions of the English Civil War, from which he fled for his personal safety. Leviathan was first published in 1651, near the start of the Interregnum. Hobbes perceived the war’s origins in the growing fragmentation and radicalization of English society under the tenuous, personal rule of King Charles I, and Leviathan provided a theoretical argument for why absolute monarchy was the only way to restore peace and order. Hobbes’ notion of a social “covenant” wasn’t a literal contract that individuals signed, but rather an abstract description of the process whereby warring factions achieve a political settlement by ceding their “natural liberty” to interfere in each other’s affairs to a higher authority.

The proximate causes of the English Civil War are complex and manifold, but the macro story is one of disruptive technological change. While the printing press was invented two centuries prior, the War occurred at the inflection point of the printing revolution: the moment of criticality that unlocked take-off growth in printed output. The first publication of the King James Bible was in 1611, for example, just 30 years before the war but only 55 years before the birth of journalism with the first regularly published English newspaper, The Oxford Gazette.

Jeremiah Dittmar (2011) "The Welfare Impact of a New Good: The Printed Book"


Widespread access to printing enabled equally widespread dissent, spurring a “revolt of the public” that mirrored the dynamics Martin Gurri attributes to the internet era. From the English Reformation on, growing access to information thus not only broadened people’s minds to alternative political allegiances, but made them increasingly aware of their differently-thinking neighbors. When the Crown’s legal controls over the press all but collapsed in 1641, its episcopal licensing and censorship regime gave way to an unregulated market, making it even easier for Puritan separatists, Independents, and other radical nonconformists to find like-minded others and coordinate against the Church. Parliament, meanwhile, was itself contesting the authority of the Crown, creating what the historian Robert Zaller called a new “discourse of legitimacy.” Over a few dizzyingly short years, ideological escalation across multiple fronts culminated in regicide and mass violence.

AI is often compared to the printing press, but the parallels between Early Modern England and contemporary America extend beyond the technological. Our cultural and ideological schisms are intensifying; the new Puritans have run headlong into a conservative counter-reaction; and our parliamentary debates revolve around issues of censorship as the prior century’s media controls succumb to the open internet. Even the power of the U.S. Presidency has reached its nadir, as if awaiting an originalist King to reassert the unitary executive and precipitate a crisis.

Accelerating to what?

These schisms are reflected in the different AI camps. Some favor regulating AI development until we can assure perfect safety, perhaps through a licensing regime. Others wish to plow forward, accelerating access through open source.

The English reformer and “intelligencer,” Samuel Hartlib, was the “effective accelerationist” of his day. It was his stated goal to “record all human knowledge and to make it universally available for the education of all mankind,” and he often printed and circulated technical texts at his own expense (the closest thing to open sourcing at the time). Today’s AI evangelists have similarly lofty goals, from Emad Mostaque’s mission to use AI to educate the world’s children, to Marc Andreessen’s vision of AI as a tool for augmenting humanity across virtually every domain.

The accelerationists have the long view of history on their side, and I broadly support the Andreessen vision of the future. Nonetheless, it is always easier to imagine the end point of a technology than to foresee the trade-offs and forking paths that inevitably arise during the transition.

George Hotz’s Chaotic Good instinct to open source potentially dual-use models as quickly as possible has a particularly high potential for blowback. The e/acc ethic sees open source as pulling forward the advancement of human freedom while validating a DEF CON-style security mindset. As more of the economy flows through proprietary models, open source alternatives will also be an essential check on AI becoming monopolized by the state or a handful of tech behemoths.

Fortunately, there is a huge middle ground between the safetyist’s “FDA for AI models” and the rapturous urge to democratize powerful new capabilities the moment the training run ends. If offensive capabilities spread faster than defensive technology and adaptation can keep up, open source maximalism could even jeopardize the cause of freedom itself, accelerating the conditions for the AI Leviathan that Hotz and company fear most.

Minority rule

Before 2001, airport security was almost invisible. You could even drive between Canada and the US without a passport. Then, one fateful morning in September, 19 terrorists murdered nearly 3,000 people in the span of a few hours. One week later, letters containing anthrax spores began showing up in important people’s mailboxes, killing five. That December, Richard Reid was caught mid-flight trying to light a bomb hidden in his shoe. Amid this rapid succession of events, our leaders suddenly realized that technology and globalization were introducing a series of asymmetric risks. From bioweapons to hijackings, it now took only a small group of extremists or a lone radical to wreak havoc across society.

Society responded in all three of the canonical ways listed above. We mitigated risks by fortifying border and airport security. We enhanced enforcement by passing the Patriot Act and funding new forms of high-tech surveillance. And our culture quickly adapted to the growing security theater, buttressed by fears of Islamic radicalism and an eruption of national solidarity.

In retrospect, we overreacted. While terrorism loomed large psychologically, the risks were quantifiably small compared to the mundane things that kill people every day. In the 12 months after 9/11, for example, the increased fear of flying plausibly caused an estimated 1,600 excess traffic fatalities. Fear of Islamic terrorism has since subsided, yet the enhanced security protocols not only remain in place but continue to be built upon and even repurposed for domestic threats.


Dangerous misuse of AI will probably be similarly rare. Most people are simply not sociopaths. Nevertheless, many areas of life follow the logic of Nassim Taleb’s Minority Rule, in which a small but intolerant minority ends up setting the rules for everyone. Fewer than 2% of Americans have a peanut allergy, for example, and yet many shared spaces ban foods containing peanuts just to be safe.

Depending on how the offensive-defensive balance shakes out, AI has the potential to return us to a world in which security is once again invisible. You won’t have to remove your shoes and debase yourself before a TSA agent. Instead, a camera will analyze your face as you walk into the terminal, checking it against a database of known bad guys while extracting any predictive signal hidden in your facial expressions, body language, demographic profile, and social media posts.

Airports are a bit of a special case, however. As walled gardens, airports can leverage the benefits of vertical integration in their security. And because they are gateways to faraway destinations, travelers are forced to either sign away certain rights or take the bus. Most shared spaces aren't like this. From small businesses to the public park, our daily security largely depends on the goodwill and discretion of other people. Thus, to the extent AI creates security and privacy risks everywhere, more and more shared spaces may end up operating like an airport, i.e., like a surveillance state in miniature.

In turn, the coming intelligence explosion puts liberal democracy on a knife’s edge. On one side is an AI Leviathan: a global singleton that restores order through a Chinese-style panopticon and social credit system. On the other is state collapse and political fragmentation, as our legacy institutions fail to adapt and give way to new, AI-native organizations that solve the principal-agent problem and unlock localized collective action at blazing speeds. This includes the fortification of privacy and security, obtained in part through an opt-in covenant to leave one’s bioweapons and x-ray glasses at the proverbial door.

Peer past the event horizon of this latter path and the contours of a new, quasi-medieval social order begin to take shape. But I’m running up against Substack’s character limit, so that will have to wait for Part II.

In the meantime, enjoy this podcast I recorded with Erik Torenberg: