Is AI the new crypto?

January 23, 2023

Image generated by Midjourney


The nuclear winter techpocalypse arrived: software, SPACs, fintech, and crypto all entered a deep freeze. AI may be the only sector wearing a parka.

Peak AI indicators abound. VC thought pieces and tweetstorms have reached all-time highs. Fair-weather fans from the crypto boom have migrated. MBAs may soon outnumber nerds.

Just a year ago, crypto had comparable froth. Can AI avoid crypto’s bubble-bursting fate? Crypto provides a cautionary tale about excessive hype, with four critical flaws:

  1. Capital: Crypto-specific capital sources cleaved from traditional venture capital, distorting valuations and liquidity timelines.
  2. Mission: Founders lost sight of crypto’s decentralizing founding mission.
  3. People: Token extractionists overpowered ideological builders.
  4. Value creation: Founders generated more value for token holders than users.

Both sectors overheated, but that doesn’t mean they’ll collapse in the same way. If navigated carefully, AI can escape the downfall of inflated expectations.

Capital

In the early 2010s, no institutional investors spent time on crypto. It wasn’t even possible: venture capital LP agreements restricted investments into assets like cryptocurrencies. A few VCs solved the LPA crypto constraint by 2015.

In 2017, institutional crypto investing changed. Every VC started looking seriously. Most startups had a crypto strategy. Dedicated crypto funds emerged. And in 2020, a free-money environment gave funds excess capital to deploy, adding rocket fuel to a speculation-prone category.

Two capital problems took hold: a capital vs. progress mismatch, and VC short-termism.

Constant capital, non-constant progress: A myriad of crypto-specific funds provided a steady flow of capital into the ecosystem. This seemed like a good division of labor: generalist VCs could tune out crypto noise; crypto founders could raise from specialists.

On the way up, fast liquidity for crypto funds meant the specialists quickly scaled up to billions in AUM, rivaling generalist venture fund sizes.

But when the music stopped, the specialist VCs couldn’t evaluate crypto deals against the opportunity cost of startups in other industries. This opportunity cost may seem trivial, but it acts as a rate limiter: if industry B’s progress exceeds industry A’s, more incremental venture dollars flow into industry B.

Constant capital inflows met non-constant (declining?) progress in user adoption and commercial success, decoupling crypto valuations from those of traditional startups. Investors justified valuations self-referentially: “X token is better than Y token worth $10B, so it’s a no-brainer investment at $1B.”

VC short-termism: As a financialized technology, crypto’s allure of getting rich quickly altered founders and VCs alike.

VCs typically wait 10+ years for a return. But short-termism took hold once investors saw they could get token distributions in under three years, in defiance of historical norms. VCs chose short lockups and encouraged pre-launch token sales to get fast liquidity.

The economic incentives simply didn’t favor building and funding real applications over a long time horizon. Why continue building if you already cashed out?

AI avoids these capital challenges.

Generalist funds keep AI in check: There will continue to be AI-specific funds, but they won’t dominate the industry like crypto funds did. Technology-specific funds tend to focus on infrastructure companies, a segment that is highly oligopolistic – there won’t be many new VC-backed AI infra winners in the 2023 vintage. The real opportunity for VCs today is specific applications.

When a technology becomes ubiquitous, funds dedicated to that technology become meaningless: it wouldn’t make sense to create a fund to invest in companies using databases. The label "AI company" will cease to be common phraseology, save for infrastructure – it will simply be assumed that you leverage machine learning.

Which VC firms will do well in an AI bull market? Probably the generalists, who see the big picture: buyer psychology, distribution strategy, compounding advantages.

With generalist investors in control, AI shouldn’t have a runaway capital problem. ML-driven products interact deeply with the real economy – customers, competitors, and investors provide regular reality checks.

AI is structurally long-term: Some early employees will profit from secondary tenders, but AI has no obvious pump-and-dump schemes. Quite the opposite:

  • Unlike crypto, AI has no built-in liquidity mechanism.
  • In spite of the hype, the commercialization of AI is nascent – real, durable businesses take time to mature.
  • In a high interest rate environment, only real businesses can exit through the public markets or M&A.

If anything, profiting off of AI will take even longer than the last generation of software and internet companies.

The notion that AGI could be worth infinity dollars may tempt people into overvaluing AI assets for their call-option value. But the current macro and micro capital contexts should keep the bubble in check.

Mission

Entrepreneurs had tried to conjure digital currency since the 1990s, through years of failed attempts. In 2008, Bitcoin demonstrated that cryptographic proof of work plus a blockchain was the first satisfactory solution.
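To make “cryptographic proof of work” concrete, here is a minimal sketch – not Bitcoin’s actual implementation, which double-hashes a structured block header and retargets difficulty dynamically. Miners brute-force a nonce until the hash falls below a difficulty target, making blocks expensive to produce but cheap to verify:

```python
# Minimal proof-of-work sketch: find a nonce whose SHA-256 digest falls below
# a difficulty target. Illustrative only; Bitcoin uses double SHA-256 over a
# structured block header and adjusts the target every 2016 blocks.
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int = 20) -> int:
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # expensive to find, trivial for anyone to verify
        nonce += 1

print(proof_of_work(b"example block"))
```

The asymmetry is the point: verification is one hash, so no trusted third party is needed to check that work was done.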

The whitepaper made the use case clear: circumvent financial institutions. Crypto’s mission was anarcho-capitalist and revolutionary. Founders are the best missionaries for their respective technologies, and in Satoshi Nakamoto’s words:

“The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve. We have to trust them with our privacy…”

Crypto’s ideology fueled early applications. Decentralized money turned out to be useful, both legitimately and illegitimately: buying pizza pseudonymously, purchasing drugs on the Silk Road, escaping tyrannical governments’ fiat systems. Trade cash for Bitcoin, and you exited the supervision of centralized banking systems.

The mission later became murkier. Crypto’s decentralized purpose got lost: activity became centralized, transactions tracked, and on-ramps added KYC and AML provisions. Maybe crypto had to integrate with the financial system to popularize, but if the blockchain isn’t decentralizing, what is it really doing?

The mission scope crept as crypto grew into web3. Originally about decentralized money, crypto became about decentralizing everything. Builders focused on blockchain-based social networks, decentralized gaming, and NFT ticketing. Crypto is digital money, but people felt it needed to be more.

Even if now corrupted, crypto’s original reactionary mission hits you over the head. The mission is in the founding whitepaper title, the forum discussions, the architecture, the use cases.

If decentralization is the founding crypto mission – what is the AI equivalent? Hollywood isn’t painting a positive picture, featuring AI in every dystopian sci-fi movie: HAL 9000 in 2001: A Space Odyssey, Skynet in The Terminator, Samantha in Her, Ava in Ex Machina.

Outside of film, people tell credible dystopian AI narratives around entrenching wokeism more deeply, or worse, replacing human work. AI can be a symbiotic partner to human creativity and thought, but the optimistic story needs to be shared.

AI’s positive mission is not as crystallized as crypto’s, or at least not yet.

Unclear founding mission: There’s nothing missionary built into the technology itself, unlike blockchains, which are trustless and thus anti-institutional. This could be a good thing: AI is an ideological blank canvas, and can be shaped to match human will.

Many people are excited about the mission of AI. What exactly are they excited about? The ideology is some form of post-scarcity or abundance – a spectrum from abundance capitalism to communism. In AI’s early modern era (mid-2010s), the mission was effective altruism: build the AGI, then give away the profits.

The stated missions of many AI companies rhyme with failed socialist regimes of the 20th century, scaring off conservatives and libertarians. But it’s markedly different: generating abundance through technology has a better precedent than European socialism, which redistributes abundance through political force.

Centralized control: Control over AI’s foundational models seems likely to centralize. Open-source models show promise, but large proprietary models are far better. Even crypto ended up centralizing far more than its vision suggested.

My best mental model for AI centralization is semiconductors. There are many providers of lagging edge chips, but very few (and valuable) leading edge providers.

Centralization sounds dangerous if we think of political history, but decentralizing the technology isn’t necessarily good either – if AI becomes all-powerful, you would not want everyone to have nuclear launch codes.

Mission neutrality: Most industries have application-infrastructure duality. In software, for example, cloud providers handle infrastructure, while software companies build applications. Bitcoin was monistic: the infrastructure was the application. This meant the mission was defined at the infrastructure level.

Most enabling technologies are relatively missionless: cloud infrastructure, semiconductors, mobile, even the internet. Perhaps AI should be mission-neutral at the infrastructure level, while the mission gets defined at the application layer. DeepMind’s stated mission seems to point in the right neutral direction: “solve intelligence, and then use that to solve everything else.”

Foundational models should power a wide range of application-level missions – political, social, economic. Much like AWS shouldn’t censor developers except in the most extreme cases, large models should avoid censoring AI applications.

Unlike cloud infrastructure, foundational models have training bias. But they also have a fine-tuning lever, allowing application-level mission flexibility. Developers can generate Fox-weighted or NYT-weighted responses. The current political bias in large models has a long-term fix.
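As a concrete sketch of that fine-tuning lever, the snippet below adapts a small open-source model on an application-specific corpus using the Hugging Face transformers library. The base model (gpt2) and the two-example corpus are illustrative stand-ins for a large foundational model and a proprietary, editorially-weighted dataset:

```python
# Minimal fine-tuning sketch: shift a general base model toward an
# application's editorial slant. gpt2 and the toy corpus are placeholders
# for a large foundational model and a real domain-specific dataset.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy stand-in for an application-specific corpus (e.g., one outlet's house style).
corpus = [
    "Q: Summarize the jobs report. A: Employers added jobs at a steady clip...",
    "Q: What moved markets today? A: Stocks climbed on cooling inflation data...",
]
enc = tokenizer(corpus, truncation=True, padding=True, return_tensors="pt")

class CorpusDataset(torch.utils.data.Dataset):
    """Wraps the tokenized text for causal language modeling."""
    def __len__(self):
        return enc["input_ids"].size(0)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = item["input_ids"].clone()   # next-token targets
        item["labels"][item["attention_mask"] == 0] = -100  # ignore padding in loss
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=CorpusDataset(),
)
trainer.train()  # only this copy shifts; the base model stays general-purpose
```

The same lever applies per application: one team tunes toward one editorial weighting, another toward a different one, while the underlying base model remains neutral.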

AI’s mission still needs refining, and who controls it is an open debate. But in a world where the infrastructure centralizes, the best mission may be neutrality.

People

Crypto’s pioneers were libertarians and anarchists, but disorganized grifters flowed in as the ecosystem matured, attracting regulatory scrutiny.

  • Cultural drift: In the 2008-2016 era, only technologists and ideologues bought and built crypto. Crypto’s original application required its original sin: tokenized architecture. The corrupting force of financialization ultimately overpowered the usefulness of the technology. A few years later, grifters piggybacked on the ICO mania. Easy money corrupted even the puritans: crypto founders vested millions of liquid tokens in 1-2 years without user adoption or managerial oversight. Crypto returns became a hyperstition – a positive feedback loop driven by culture. Hype-driven folklore drove retail adoption: WAGMI, the Bitcoin rainbow chart, the supercycle thesis, “few”. Anyone could profit off of the technological revolution! The hyperstitious retail wave created $3T in market cap during the easy-money ZIRP period of 2020 and 2021. Unfortunately, most people got burned.
  • Regulatory creep: Under the rule of chaotic short-termists, crypto failed to garner regulatory buy-in. Some crypto companies worked to earn institutional trust, while others worked to undermine it. The company closest to regulators undermined regulatory trust the most. The crypto crowd was too anti-institutional to win over regulators, but too uncoordinated to pull off the anti-fiat mission. Buying Bitcoin in 2023 requires submitting your SSN and passport to centralized on-ramps. Each transaction is tracked by intelligence agencies. Regulators have crypto right where they want it. When the bubble popped, the SEC, FinCEN, and OFAC were unrelenting. Given the entrenchment of centralized financial institutions, maybe regulatory support was never possible. Either way, centralization won.

AI has a purer talent evolution arc, but the same risk of regulatory scrutiny.

  • Cultural scaling challenges: The formative days of AI development had an ethos of research and publishing, ensuring that technologists, not financialists, stayed in charge. In 2023, AI must absorb an influx of tourists – thin GPT wrappers, MBA tweet threads about AI trends, LinkedIn bios changing from #crypto to #AI. Genuine technologists will join the development effort, but filtering out negative human capital is challenging. Undoubtedly, more builders are needed to apply and productionize the latest technologies, but scaling headcount too fast will cause indigestion. Most great products are built by very few, highly talented people. The best cultural control mechanism is to keep technologists in charge. Crypto’s tokenization element amplified the voices of swindlers. In AI, there’s no trading – participants must build or leverage the core technology. AI promises productivity, not easy money. There is still hope for non-ML engineers. You don’t need a PhD in databases to use one – we’ve ascended the abstraction ladder. As foundational models improve exponentially, it’s becoming easier for generalist engineers to build AI apps. The limiting reagent will be founders and employees with end-market expertise and distribution advantages.
  • Regulatory risks: As any technology scales, people can’t help but burden it with politics. AI is no exception, particularly given that it touches many end consumers. Depending on who you ask, current AI models are too woke or not woke enough. Tropes of AGI replacing human labor predispose regulators negatively. AI safety researchers are working hard to prevent dystopianism from escaping sci-fi movies, but regulators will create strict guardrails as models improve. The strategic importance of AI means that regulators will draw strong geopolitical boundaries. We won’t be using Chinese models to power our applications any time soon. Even AI-powered TikTok is under immense bipartisan scrutiny.

The research-heavy origins and longer-term business models should prevent tourists from taking over. But political controversy and regulation will slow things down.

Value creation

In crypto, people engage in serious debates about whether there’s a single web3 use case that creates user value. Let’s define net value created as:

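A sketch of the intended definition, inferred from the positive and negative subcomponents discussed below:

$$\text{Net value created} \;=\; \underbrace{\sum_{\text{use cases}} \text{value created}}_{\text{positive subcomponents}} \;-\; \underbrace{\sum_{\text{harms}} \text{value destroyed}}_{\text{negative subcomponents}}$$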

It's unclear if this sum is positive, zero, or negative. The first half of the equation certainly has positive subcomponents – people use Bitcoin to escape tyrannical fiat systems, and pay journalists in oppressive countries.

But there are also negative subcomponents: engineering time spent on crypto instead of other, higher-ROI technical endeavors; people losing their FTX holdings, BlockFi deposits, and altcoins down 90%.

Crypto leaders claim we’re still in the infra phase. Infrastructure is necessary but not sufficient for generating value – sustained usage trumps trading volume. The developers innovating at the application layer have faced launch delays or active regulatory resistance.

Successful crypto infrastructure startups provide infrastructure for other crypto infrastructure companies and traders. Leverage within the crypto ecosystem ballooned, in the form of trading leverage, crypto-denominated lending, and crypto-based reserve policies.

Web3 lingers in the value-neutral infrastructure phase, encumbered by self-referentiality. There are exciting research frontiers like zero-knowledge proofs, which could unlock new growth. But in most commercial dimensions, adoption has stalled.

On the other hand, people empirically use artificial intelligence to do valuable things.

Big tech has leveraged it for a decade: product recommendations, news feeds, spam filters, ad personalization. Within days of ChatGPT’s launch, there were myriad concrete use cases.

Maybe ChatGPT and generative AI won’t scale into robust, dependable systems. Journalists (who, ironically, may get obsoleted first) think it’s dumb and overhyped. But it’s hard to ignore the practical applications: Copilot accelerates coding, Jasper simplifies copywriting, Midjourney and DALL-E superpower artistry, ChatGPT provides analysis and answers questions.

The negative-value cases are harder to pinpoint. The TikTok algorithm may be the best example of wasted time, though in a world without AI-powered feeds, social media consumption is likely still very high.

Value capture cuts in two ways:

  1. Infrastructure oligopolies: If you believe in continued returns on scaling compute, data, and parameters (see the scaling note after this list), the large model race should play out oligopolistically: scaling models in the 2020s is a game that requires billions to play. Value capture for large models could play out like semiconductors, where a handful of winners will emerge per geopolitical region.
  2. Application proliferation: Unlike infrastructure, application value capture will be diffuse. It is late to start a new foundational model company, but VCs have yet to focus on the application layer. Many multi-billion dollar applications will emerge on top of large models. If the foundational models evolve too quickly, that spells trouble for thinner GPT wrappers. But applications with proprietary domain-specific training data, unique distribution, and complex integrations will endure. Some startups pitch themselves as “full stack”, straddling application and infrastructure, building custom models to power domain-specific applications. This will be tough: customizing excellent generalized models is easier than reinventing models from scratch. To use a present-day analogy, very few software companies should be building customized hardware.
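The “returns on scaling” premise behind infrastructure oligopolies refers to the empirical scaling laws for language models. As a hedged summary, Kaplan et al. (2020) report that test loss falls smoothly as a power law in parameter count N, dataset size D, and compute C (exponents approximate):

$$L(N) \propto N^{-0.076}, \qquad L(D) \propto D^{-0.095}, \qquad L(C) \propto C^{-0.050}$$

Small exponents mean each constant-factor reduction in loss requires a large multiplicative increase in spend – precisely the billions-to-play dynamic described above.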

A superintelligent AGI could generate negative value – a “Zeus [that] throws lightning bolts at people”, in Peter’s words. This tail risk makes AI safety work worthwhile. But alignment already works in other areas: financial regulations keep capital (an artificial intelligence, of sorts) aligned with humans.

The productionization of AI systems is nascent, but, net net, AI seems to crush crypto on value creation.

Conclusion

Can AI maintain the faith that crypto lost? It needs to get these things right:

  • Capital: AI has had long capital feedback cycles, which keep froth in check. But applications must productionize to justify continued capital influx.
  • Mission: AI’s ideology is more of a blank canvas, and the technology is less intrinsically corrupting. But it needs a positive – or at least neutral and apolitical – mission to disprove the dystopian prophecies.
  • People: The industry will have its fair share of grifters, but AI leaders can’t let them have real control over the ecosystem.
  • Value creation: AI adoption seems promising, but transitioning from interesting toys to trusty tools is non-trivial.

In the long term, value creation should dominate, and AI seems to be winning handily. Businesses and consumers benefit from the technology even in its nascency.

People operate under the Gartner hype cycle framework: what goes up must come down.

The Gartner hype cycle


Gartner gets the early shape of the curve right, but the plateau of productivity is misleading: it varies dramatically by industry. Some plateaus round to zero – say private cloud computing – while others surpass the peak of inflated expectations.

Crypto could plateau at a fraction of what Gartner suggests. If I had to guess, AI will plateau far above the historical peak.

My proposed modification to the Gartner hype cycle


Thank you to Brandon Camhi, Axel Ericsson, Melisa Tokmak, Philip Clark, Sam Wolfe, Anna Mitchell, Russell Kaplan, Michael Solana, John Coogan, and Yasmin Razavi for their thoughts and feedback on this article.