Words With Web3’s King: An Interview With Chris Dixon


Chris Dixon is the cofounder and former CEO of SiteAdvisor (acquired by McAfee) and Hunch (acquired by eBay), cofounder of the seed venture fund Founder Collective, a General Partner at the greatest VC firm of all time, Andreessen Horowitz, and the crypto-king of Twitter. He’s currently sitting at #7 on Forbes’ Midas list of the top 100 venture capital investors on planet Earth, a list he’s been on for several years now. He holds a BA and MA in philosophy from Columbia and an MBA from Harvard. Chris is without a doubt one of the best people I’ve ever had the honor of talking to and, while we’re all still living in 2021, he’s sitting comfortably in the next century. If you want to grasp the subtle, dynamic, and groundbreaking trends of today and vault yourself headlong into a brilliant and boundless tomorrow, if you want to truly understand some of the most important happenings in technology and the world at large, follow Chris on Twitter and check out his edifying blog here!

Besides Balaji Srinivasan and a few others I have in mind, you are by far the most future-oriented person I’ve ever seen online, and your investment/founder track record shows that you have extraordinary prescience. You’ve made predictions recently about budding revolutions in internet architecture with web3 and the future of creator-centric economies with NFTs, which have made me wonder about something I asked Ben Horowitz in a recent interview: There are exciting things happening in the world of bits and computing without a doubt, but is there anything left to be excited about in the world of atoms? Can things like crypto help us live like The Jetsons?

Great question. A couple of points.

First, I’d argue that the digital world and the physical world are already deeply interconnected and getting more so every day. The internet is a global brain that increasingly orchestrates the global body. One of the myths around Jetsons-like robots is that automation happens visibly, with a 1:1 correspondence between the thing replaced and the thing replacing it. In fact, automation arrives in far subtler, more mundane ways: a database installation here, an API integration there. The internet is a network of networks, some public, like the web or Twitter, and some private, within social groups or companies. Collectively these networks orchestrate more and more of what happens in the physical world.

Relatedly, I think we continue to underestimate the significance of the internet and the digital world. You see this in subtleties of language: the digital world gets subordinating prefixes like the “e” in email and ecommerce, while the physical world is called the “real world,” and so on. You see it in how digital innovations like social media were dismissed as toys for so many years. It’s pretty obvious today that social media is driving global politics, business, and culture — but people didn’t see it that way until a few years ago, and they are probably still underestimating it.

We are very early in the development of the internet. Historically, with really major tech breakthroughs, the first few decades bring first-order effects. The first-order effect of the automobile was getting you from one place to another faster. But the really profound things happened in the next phase, from second-order effects: suburbs, commuting, highways, trucking, fast food and big-box retail, eventually e-commerce, ride sharing, and so on. We are just now entering the second-order-effects phase of the internet, and it will take many decades to play out.

All that said, I do agree with the view that physical-world innovation has been frustratingly slow. Compare today to what was probably the period of peak physical-world innovation, the Victorian-era Industrial Age (roughly the second half of the 19th century). Back then you basically had hundreds of Elon Musks all at once: Edison, Westinghouse, Tesla, Ford, Bell, the Wright brothers, and so on. The railroad, the lightbulb, the phonograph, the film projector, the telephone, the car, the airplane — these were massive inventions that were all developed in a fairly short period of time.

What’s different today? One difference is that there is just less unclaimed space. Most locations in the developed world have vested interests in a way they didn’t back then. California still has the mentality of the frontier. How many products that you use every day, and that were invented in the last 50 years, were made in California vs the East Coast? For most people, the answer is 100% California. But back in the 19th century, there were real frontiers and you could be a lot more aggressive. The movie industry is in LA because filmmakers were literally escaping Edison’s patent lawsuits. You could simply build things faster in unclaimed space.

Another related argument is that we now have excessive regulation. I used to think this was just an eccentric libertarian view, but after being involved in various physical-world businesses over the years and seeing how much red tape and regulatory capture exists, I’m a lot more sympathetic to it. Partly this is about incumbent industries setting rules to protect themselves, and partly it’s a result of choices we’ve made as a society. In many physical-world domains, there is a trade-off between innovation and safety. Commercial flight is a good example — it’s close to perfectly safe today, but there is also basically zero innovation. These things are closely related. You see the same thing in drug discovery (Eroom’s law), drones, self-driving cars, and in different ways in finance, food, health care, and education. I don’t have an answer here, but I think it’s important to understand the trade-offs. For whatever reason we’ve decided to turn the dial toward taking very few risks.

One concept I think about at least once a week is one Marc Andreessen introduced me to: we don’t go online anymore the way we did in the early days of the internet, as a funny interruption to our analogue life; we’re always there, and analogue life is now the interruption. Real life had a good run for a few thousand years but now it’s being outcompeted, so maybe innovation slowing down in physical space just matters less. But something counterintuitive to me is happening here. We’ve made something that we seem to enjoy a lot more than anything that came before it — social networking, gaming, manicured and curated voyeurism — but it doesn’t necessarily seem like we are happier. I’m living in something far cooler than The Jetsons and had no idea. But why does the best invention ever seem to fall short of making us happy?

I’d argue the best way to think about the history of the internet is in distinct epochs. The modern internet that we experience today started at scale around 2012. The internet of the 90s and most of the 2000s was very different. For most people back then the internet meant a desktop computer you sat down and set aside time to use, maybe every few days, to check email or stock prices or research a vacation or whatever. If you want evidence of this, go watch movies or TV shows from that era. The TV show Friends ran from 1994 to 2004 and it’s full of now-obsolete devices like payphones and answering machines. The internet barely exists except as a curiosity. There’s a revisionist history that the internet followed a mostly smooth adoption curve, and if you cherry-pick certain charts you can make a case for this, but in fact the adoption was much bumpier and more episodic.

Around 2012 smartphones got to a scale where we transitioned from the desktop era where going online was a discrete event to the mobile era where we’re always at least partially online and the internet is ubiquitous. 2012 was the year Facebook made their “mobile pivot” and in general when the tech industry realized that mobile wasn’t a sideshow but the main show. Steve Jobs famously said phones are like cars and PCs are like trucks. You use a PC for specialized work but mobile is much more general, spanning work and leisure. We’ve had internet trucks since the 90s but internet cars for less than a decade.

So to answer your question: it’s early. We are barely a decade into the modern era. The most popular applications are early and, I would argue, fundamentally flawed. Take Twitter, which is probably the most visible example of both the good and bad of the modern internet. On the good side: I can look down at my phone and see the most interesting thoughts from the most interesting people on the most interesting topics. Twitter is an ongoing global tournament to rank people by interestingness, as determined by likes, follower counts, and machine learning algorithms. Who’s more interesting: the person who happens to be sitting next to you in real life, or the most interesting person in the world talking about your favorite topic? No wonder everyone is staring at their phone.

Then there is the ugly side. People tend to be more civil the more intimate the medium: video and audio over text, synchronous over asynchronous. They also tend to be more civil in smaller communities with shared interests and values. Twitter takes the least intimate medium — asynchronous short-form text — and smashes together thousands of unrelated communities in one global feed. A favorite weapon is the cross-community quote-tweet dunk: hey, politics Twitter, look at this dumb out-of-context quote from this person from tech or cooking or TV Twitter. Short-form asynchronous text ripped out of context and shared across disparate communities is basically designed to incite. It doesn’t help that the tournament rewards you for this behavior, and that the whole thing is driving an ad-based business model that encourages quantity over quality.

The other obvious problem with today’s internet is the economic structure. Chris Anderson wrote a famous essay called “The Long Tail” back in 2004 that predicted the internet would make media businesses less hit-driven and improve the economics for niche creative people. He was right in one sense: the internet created many more niche communities and media stars. But he was wrong that this would benefit creators economically, because in the meantime the tech platforms inserted themselves in the money flow and extracted the vast majority of the economics. So I think a lot of the unhappiness today stems from the feeling that we live in a period of digital feudalism. Someone works to build an audience on top of Twitter or YouTube or Instagram but they only get a fraction of the value they create, and survive only at the whims of the platforms and their algorithms.

My view is that the root cause of a lot of these issues is a deep structural misalignment between the needs of the underlying corporate structure of companies like Twitter and the needs of the networks they operate. There are other ways to build these networks. Email is a good example: it runs on an open protocol called SMTP that isn’t owned by anyone — it’s a public good. There is no corporation behind SMTP turning dials to target ads or increase engagement. Anyone can build their own email client if they think they can do a better job. Email is one of the major protocols from the first era of the internet, what we call web1. Twitter is a web2 network that added more advanced functionality but at a high cost — handing all the money and control to a private company. There is now a way to build networks that are the best of both worlds: public goods with advanced modern functionality. It’s what we call web3, and it’s what I work on day-to-day.

I may or may not be right about this direction as a way to improve the internet. But the idea that, this early on with an invention of this scale, we should give up, or assume monopoly regulation is the only way out, or draw grand conclusions about how things failed, seems to me excessively pessimistic. It’s early — let’s keep working on it.

I have a few thoughts about the interplay between content creators, consumers, and advertisers on web3 that I need help clearing up, and there’s no one better to ask, so here goes. The dominant economic models on web2 platforms are either advertiser- or subscription-based. These models have relatively small payoffs (or no payoffs at all) for the users who generate the value, who are also subject to sudden, adverse platform rule changes and privacy breaches, but they have the benefit of making content free or almost free for non-participants. Web3 changes this by making the creator-consumer relationship direct: fans pay directly for content, and creators control everything about this dynamic. But where do advertisers come in when there’s no centralized platform to deal with? Will the dominant form of advertising on web3 be something like direct creator sponsorships?

Let’s talk about advertising. First, it’s helpful to distinguish two types of advertising: direct response and brand advertising. Direct response is advertising where the user has already indicated some purchasing intent that the ad simply tries to satisfy. You have a lawnmower shop in Newark and are looking for new customers; a person in Newark types “lawn mower” into Google and your website shows up. As with most things on the internet, direct response advertising has a darker side, as seen in email spam and low-end banner ads. But overall it plays a role similar to old-fashioned yellow pages and is generally a good thing.

The other major kind of advertising is brand advertising. The biggest brand advertisers sell the most generic products like cleaning supplies and processed food. They take commodity products and try to create brand loyalty through emotion-driven advertising. Hence the importance of running repetitive TV and internet ads.

I don’t think anyone really likes these ads. The pro argument is that they subsidize free products. But where is this money coming from? Internet advertisers closely track the return on their ad spend. In aggregate, in the steady state, when an advertiser spends $1, they are getting back more than $1 from the targeted users. So the money is ultimately coming from the users. There is no free lunch.

A more sophisticated version of the pro-advertising argument is that it’s a form of bundling. (I’ve written before about bundling and how it can be a good thing here.) The argument is essentially that there are significant groups of people at the lower end of the income spectrum who get access to free, high-quality software by essentially piggybacking off the wealthier users whom advertisers want to target. So advertising lets people implicitly “pay” different rates for software depending on their means. Without advertising, the argument goes, you’d end up with a digital caste system where wealthier people get all the good software.

The problem with this argument is that there are plenty of other ways to get the same effect without using advertising. The two obvious examples that come to mind are 1) freemium software like Dropbox, where the free tier is a solid standalone product and you only need to pay for heavy usage, and 2) free-to-play games like Fortnite, where the game itself is free and you only pay for extras like skins. (NFTs/virtual goods specifically do a really good job of tiering prices for different users along the demand curve — more here.) A huge number of the past decade’s “unicorn” startups have had transaction models with a free tier. So you don’t need advertising to make software broadly accessible.

A few numbers. Online advertising averages 1% of US GDP — roughly $200B out of a $20T economy — important, but not critical. For context, the US video game business is about $65B, with about $20B coming from virtual goods. I think with new web3 business models we could easily 10x the virtual goods economy; 10x of $20B is $200B, roughly the size of the entire online ad market. So I don’t see why replacing advertising with other models would have a negative economic impact.

Advertising at its core is a market for attention. In any future version of the internet, I’d expect markets for attention to pop up. It’s a natural barter between one party with a surplus of attention and another party with a deficit. It particularly makes sense for direct response advertising. An auction for purchasing intent is a very good way to match a lawnmower shop with a person looking to buy a lawnmower. Markets like this will probably continue to exist on Google or elsewhere.

Brand advertising is another story. Maybe it would be a good thing if cleaning supplies and food companies started competing on innovation instead of ads? I’ll leave that to others to decide. What I can say for sure is we don’t need ads to fund internet services — there are better ways now.

I want to switch gears for a bit here and ask something related to your background in philosophy, which just really stands out to me as someone with 1,000 unread philosophy books glaring at me from my bookshelf. I’m assuming you studied analytic philosophy because that’s the closest form of philosophy I can imagine to things like computer science, but I’m wondering what Chris Dixon thinks about the fundamental questions that make up the more continental tradition, the big questions, especially what it means to be decent and moral. What does it mean to be someone good, someone worth investing in, or where can we go to find out? What are some experiences, books, people, etc. that have maybe inspired you toward being who you want to be?

I did study philosophy, mostly analytic as you guessed — logic, language, mind, science, etc. I was actually in a PhD program and dropped out when I realized I’d be a mediocre philosopher at best :) To be honest, my philosophy is so rusty I’m not sure I’m that qualified to talk about it. Mainly what I read these days is history. So, for example, earlier when I talked about the Victorian Industrial Age — I’ve tried to read every book I can about that. I once spent a year reading about the history of computer science, going all the way back to the logicians and philosophers who in my view kicked things off (that led to this essay).

My follow-up here is something a bit more down to earth but still related to the theme of identifying good things: There’s been a reported 50-70% increase in internet use since the start of the pandemic, a steep addition to the 11 hours a day we already spent looking at some sort of screen in pre-Covid times. That’s a lot of time spent sitting down. What are the secret VC ways of staying healthy post-Covid? Do you work out? I recently read a pandemic report from Harvard Health showing an above-normal year-over-year increase in weight for about 40% of patients in a sample of 15 million, and have been wondering more about how to strategize around health. What kind of things are you doing (or thinking) to keep fit despite how easy it is to just sort of let go?

First, let me say I’m not qualified to give any health/nutrition/exercise advice :) But I’ll share my own habits. My personal routine is 1) I go jogging almost every day, 2) I do some intermittent fasting, and 3) I avoid sugar and processed food. I also generally lean toward eating keto — I eat mostly meats and vegetables. I like cooking and generally try to buy simple, unprocessed foods and experiment with different ways to make them. The basic philosophy is to aim for balance and focus on the macro over the micro.

By the way, I think it makes sense to apply a similar philosophy to one’s mental life. So, for example, my ideal morning schedule (although I usually don’t have time for this) would be 1) one hour of exercise in a meditative/flow state, 2) one hour of creative activities/arts (reading a novel, writing), and 3) one hour of math/science/programming.

I also try to pay close attention to my media diet. I find it’s much easier to control the inputs than how you process those inputs. I studiously avoid corporate media and other secondary sources and get almost all my news from primary sources: one-on-one conversations, Twitter and Substack, video interviews, financial reports, technical papers, primary data sources, and so forth.

I read almost every day about waning trust in legacy media, and I can’t figure out exactly what the root issue is or if things have just been this way historically — if the news has always just sucked. What exactly is wrong with mainstream media that makes it unreliable and sometimes even hostile, as it has been to some of our friends in tech? Is it malformed incentives, or maybe something to do with politics? Why can’t corporate outlets, with all the money and power and expertise in the world, do what writers on Substack do in their sleep? It’s a strange dilemma we face today that “The News” is something that has to be eschewed, and millions of people feel this way and have questions.

I’ll just speak about tech, which is the area I know best. Tech news used to be pretty straightforward: they’d review products, report on corporate events, and so forth. As the tech industry grew more powerful, the coverage became politically motivated, with an obvious anti-tech bias. In the tech subfield I work in — crypto/web3 — the corporate media coverage is relentlessly misleading and negative. The conflicts of interest are stark. For example, Bloomberg is a company that makes the bulk of its revenue renting computer terminals to banks and hedge funds, and, unsurprisingly, it is probably the most negative on crypto. It would be as if Exxon owned a news company that covered clean energy.

Twitter has had a big effect here. First, because Twitter moves so fast, it drives the news cycle. What you read on most mainstream news sites is what was on Twitter twelve hours ago. Second, for whatever reason, reporters on Twitter don’t bother feigning objectivity the way they do on the news sites. So you can see their biases quite clearly. Third, Twitter gives the subjects of articles a chance to respond. Before Twitter, when articles were misleading, you might see a letter to the editor a few days later. Now you can see instant responses and contradicting evidence.

Another contributor here is the shift away from advertising toward subscription business models. Advertising incentivizes appealing to a broad audience. Subscriptions incentivize riling up a narrow audience. So the incentive is to lean into the biases and double down on tribalism. Unfortunately, I think these trends in corporate media will only get worse. The good news is that, with social media, readers have more choices than ever, and high-quality independent writers are popping up all the time.

You mentioned before that you spend time every day on artful and creative areas, and I want to get your thoughts on the changes happening in this space, particularly because our art- and culture-generating institutions seem to be flagging in the same way as legacy media, and maybe it has something to do with what you described as the speed allowed by our information mediums. I can see it in film, music, literature, architecture: everything pre-90s had an aspiration to grandness, and now things are sort of flat. I wonder if current tech has shifted the time reference frame for large-scale builds into a tendency toward delay discounting. We maybe don’t value anything that can’t be built now, posted now, distributed now: things that may take years to complete, whose payoffs we may not even enjoy ourselves. Our tools are amazing and we can do everything at breakneck pace, but our preference for building beautiful, long-lasting things has faded; how do we solve this problem?

I guess my question is whether it’s true that art and culture are flagging. Maybe it’s just that the energy has shifted to different forms of media. There are great films being made today, but they are called TV shows and shown on streaming services like Netflix and HBO. The best business books are Twitter threads, the best media criticism is in subreddits, and the best political commentary is on Substack. Triple-A games like Grand Theft Auto are incredible achievements that take thousands of people years to make. Kids build epic virtual worlds in Minecraft and Roblox. Maybe the feeling that things are in decline is just a bias against new forms of media. Socrates thought writing destroyed the mind, and the novel as a literary form was originally considered vulgar. My suspicion is it’s just the age-old pattern of disliking new things.

This is a good place to talk about Facebook’s rebranding as Meta and the future of AR/VR social (and other) media. The initial reaction on Twitter seemed mixed, with a slight bias against the more immersive internet vision Zuck detailed in his keynote speech, but maybe this group is being a little like Socrates when he saw his fellows writing things down. What are some of your thoughts on Meta and future adoption of AR/VR?

There are multiple layers here. First, I believe VR will be a major computing platform, as significant as or more significant than the phone and PC were. Facebook should sell about 10 million Oculus Quests this year. The headsets will continue to get smaller, lighter, higher resolution, and more powerful, with many more and better apps. This will drive exponential growth the same way it did with PCs and smartphones.

Facebook has on the order of 10,000 designers and engineers working on AR/VR. They are making a massive investment, and it appears to be working. The only other plausible contender right now is Apple. There are rumors they have a serious effort, but they are ultra secretive so we don’t really know. Google and Amazon don’t seem to be investing in AR/VR at all. Microsoft a little more so, mostly in AR, focusing on business use cases. So there is a real chance that Facebook ends up in a dominant position, similar to the way Apple is today with the iPhone. Hopefully more competition comes, and we end up with more credible choices for users and developers.

I’m excited about VR. I think it will unlock a huge wave of creativity among software developers, visual artists, game designers, and so forth. The most exciting thing to me about new computing platforms (PC, mobile, blockchains, VR) is not the computers themselves but the second-order effects of all the new applications people invent. So I’m excited about that.

I have mixed feelings about Facebook being the likely winner here. On the one hand, it’s the last founder-run company among the big 5 tech companies, and I admire Zuck’s vision and aggression. On the other hand, Facebook has a history of creating closed systems that don’t interoperate, charge high take rates, arbitrarily change the rules on developers, and do other things that I think stifle innovation and are generally bad for the world. With Oculus they are already behaving this way, charging developers a high 30% take rate and excluding lots of great apps. So I do hope some deep-pocketed competition emerges.

Inspired by the AR/VR point, my last question is: When it comes to a lot of new technologies, early adopters look crazy at first and end up with swollen coffers eventually, and then almost anything related to the technology gets hyped up as the Next Big Thing. This environment can be hard to parse. So how can we tell the difference between a good opportunity and plain old FOMO?

A lot of factors go into distinguishing the next big thing from overhyped dead ends. For me, two of the biggest factors are 1) whether there are underlying forces driving an exponential improvement curve, and 2) the correlation between how informed people are on the topic and how excited they are. Let me explain each of these.

There was a movie called Back to the Future Part II, made in 1989, in which the main character travels to 2015. In that imagined 2015 there are flying cars, but people still have to use phone booths because they don’t have ubiquitous cell phones. Why did we — in reality — end up with portable internet-connected supercomputers and not flying cars? Because computers and the internet benefited from a number of exponential improvement trends. One trend was that the underlying infrastructure, including chips, bandwidth, and storage, all got exponentially better. The other was an explosion of computing applications, which created a reinforcing feedback loop where better applications beget better infrastructure, and vice versa.

For whatever reason, flying car technology didn’t follow an exponential improvement path. Why some areas have these exponential improvement curves and others don’t is an interesting question. For flying cars it might be some combination of physics, regulations, and/or lack of investment. Software and the internet have generally benefited from lots of investment, not many physical limitations, and, until the past decade or so, not a lot of regulation.

The other big factor for me is, when I start to get interested in a topic, I try to go talk to the smartest enthusiasts and the smartest critics. Sometimes as you do this you discover the critics are really well informed and have strong arguments. Other times you discover the opposite. I specifically decided to devote my career to crypto/web3 a few years ago, when I went through this exercise. I have yet to meet a smart and well-informed critic of crypto/web3. If you know anyone who claims to be one, I’d love to debate them :) Generally the critics are just repeating factually inaccurate news they read about the topic. Meanwhile, I meet with super talented, highly technical founders all the time who have devoted their lives to web3. I’ve never seen a field where the knowledge gap between the enthusiasts and the critics is so wide. That doesn’t mean it’s guaranteed to work. We still need the exponential forces to kick in, but I think that’s starting to happen as well.

(Bonus Question)

Are we all gonna make it?

Yes, my friend, we are all definitely going to make it :) Thank you for the excellent questions.

Follow Chris Dixon on Twitter if you want to boost your IQ a whole standard deviation (confirmed) and check out his excellent blog for more!