Late last year, Valerie Peter, a twenty-three-year-old student in Manchester, England, realized that she had an online-shopping problem. It was more about what she was buying than how much. A fashion trend of fuzzy leg warmers had infiltrated Peter’s social-media feeds—her TikTok For You tab, her Instagram Explore page, her Pinterest recommendations. She’d always considered leg warmers “ugly, hideous, ridiculous,” she told me recently, and yet soon enough she “somehow magically ended up with a pair of them,” which she bought online at the push of a button, on an almost subconscious whim. (She wore them only a few times. “They’re in the back of my closet,” she said.) The same thing later happened with Van Cleef & Arpels jewelry, after a cast member on the U.K. reality show “Love Island” wore a necklace from the brand onscreen. Van Cleef’s Art Nouveau-ish flower bracelets made their way onto Peter’s TikTok feed, and she found herself browsing the brand’s products. The bombardment made her question her own taste. “Is this me? Is this my style?” she said.
In her confusion, Peter wrote an e-mail seeking advice from Rachel Tashjian, a fashion critic who writes a popular newsletter called “Opulent Tips.” “I’ve been on the internet for the last 10 years and I don’t know if I like what I like or what an algorithm wants me to like,” Peter wrote. She’d come to see social networks’ algorithmic recommendations as a kind of psychic intrusion, surreptitiously reshaping what she’s shown online and, thus, her understanding of her own inclinations and tastes. “I want things I truly like not what is being lowkey marketed to me,” her letter continued.
Of course, consumers have always been the targets of manipulative advertising. A ubiquitous billboard ad or TV commercial can worm its way into your brain, making you think you need to buy, say, a new piece of video-enabled exercise equipment immediately. But social networks have always purported to show us things that we like—things that we might have organically gravitated to ourselves. Why, then, can it feel as though the entire ecosystem of content that we interact with online has been engineered to influence us in ways that we can’t quite parse, and that have only a distant relationship to our own authentic preferences? No one brand was promoting leg warmers to Peter. No single piece of sponcon was responsible for selling her Van Cleef jewelry. Rather, “the algorithm”—that vague, shadowy, inhuman entity she referenced in her e-mail—had decided that leg warmers and jewelry were what she was going to see.
Peter’s dilemma brought to my mind a term that has been used, in recent years, to describe the modern Internet user’s feeling that she must constantly contend with machine estimations of her desires: algorithmic anxiety. Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision. At times, the computer seems more in control of our choices than we are.
An algorithm, in mathematics, is simply a set of steps used to perform a calculation, whether it’s the formula for the area of a triangle or the lines of a complex proof. But when we talk about algorithms online we’re usually referring to what developers call “recommender systems,” which have been employed since the advent of personal computing to help users index and sort floods of digital content. In 1992, engineers at Xerox’s Palo Alto Research Center built an algorithmic system called Tapestry to rate incoming e-mails by relevance, using factors such as who else had opened a message and how they’d reacted to it (a.k.a. “collaborative filtering”). Two years later, researchers at the M.I.T. Media Lab built Ringo, a music-recommendation system that worked by comparing users’ tastes with others who liked similar musicians. (They called it “social-information filtering.”) Google’s original search tool, from 1998, was driven by PageRank, an early algorithm for measuring the relative importance of a Web page.
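The collaborative-filtering idea behind systems like Tapestry and Ringo can be sketched in a few lines of Python. The users, musicians, and ratings below are invented for illustration; a real system operates on millions of rows, but the principle is the same: score what you haven’t heard by how people with tastes like yours have rated it.

```python
# A toy version of user-based collaborative filtering, in the spirit of
# Ringo. All names and ratings here are invented for illustration.
from math import sqrt

# Each user's ratings of musicians, on Ringo's one-to-seven scale.
ratings = {
    "ana":   {"Nirvana": 7, "Bjork": 6, "Enya": 2},
    "ben":   {"Nirvana": 6, "Bjork": 7, "Soundgarden": 5},
    "carla": {"Enya": 7, "Bjork": 3, "Soundgarden": 1},
}

def similarity(a, b):
    """Cosine similarity over the musicians both users have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][s] * ratings[b][s] for s in shared)
    norm_a = sqrt(sum(ratings[a][s] ** 2 for s in shared))
    norm_b = sqrt(sum(ratings[b][s] ** 2 for s in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score artists the user hasn't rated by what similar users liked."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for artist, rating in ratings[other].items():
            if artist not in ratings[user]:
                scores[artist] = scores.get(artist, 0.0) + w * rating
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("ana"))  # [('Soundgarden', 5.6...)]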
Only in the middle of the past decade, though, did recommender systems become a pervasive part of life online. Facebook, Twitter, and Instagram all shifted away from chronological feeds—showing messages in the order in which they were posted—toward more algorithmically sequenced ones, displaying what the platforms determined would be most engaging to the user. Spotify and Netflix introduced personalized interfaces that sought to cater to each user’s tastes. (Top Picks for Kyle!) Such changes made platforms feel less predictable and less transparent. What you saw was never quite the same as what anyone else was seeing. You couldn’t count on a feed to work the same way from one month to the next. Just last week, Facebook implemented a new default Home tab on its app that prioritizes recommended content in the vein of TikTok, its main competitor.
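The shift is easy to caricature in code. In the toy sketch below, the posts and their engagement predictions are made up, and a real feed-ranking model blends hundreds of signals, but the contrast between the two orderings is the point.

```python
# The same three posts, ordered two ways. The posts and the engagement
# predictions are hypothetical; real ranking models weigh many signals.
from datetime import datetime

posts = [
    {"author": "a friend",   "posted": datetime(2022, 7, 25, 9, 0),  "predicted_engagement": 0.02},
    {"author": "a brand",    "posted": datetime(2022, 7, 25, 8, 0),  "predicted_engagement": 0.35},
    {"author": "a stranger", "posted": datetime(2022, 7, 24, 22, 0), "predicted_engagement": 0.60},
]

# The old model: newest first, identical for every user.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# The new model: whatever the system predicts will keep you engaged.
algorithmic = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["author"] for p in chronological])  # ['a friend', 'a brand', 'a stranger']
print([p["author"] for p in algorithmic])    # ['a stranger', 'a brand', 'a friend']
```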
Almost every other major Internet platform makes use of some form of algorithmic recommendation. Google Maps calculates driving routes using unspecified variables, including predicted traffic patterns and fuel efficiency, rerouting us mid-journey in ways that may be more convenient or may lead us astray. The food-delivery app Seamless front-loads menu items that it predicts you might like based on your recent ordering habits, the time of day, and what is “popular near you.” E-mail and text-message systems supply predictions for what you’re about to type. (“Got it!”) It can feel as though every app is trying to guess what you want before your brain has time to come up with its own answer, like an obnoxious party guest who finishes your sentences as you speak them. We are constantly negotiating with the pesky figure of the algorithm, unsure how we would have behaved if we’d been left to our own devices. No wonder we are made anxious. In a recent essay for Pitchfork, Jeremy D. Larson described a nagging feeling that Spotify’s algorithmic recommendations and automated playlists were draining the joy from listening to music by short-circuiting the process of organic discovery: “Even though it has all the music I’ve ever wanted, none of it feels necessarily rewarding, emotional, or personal.”
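A menu-sorting feature like the one Seamless describes might, in spirit, resemble the hypothetical scoring function below; the signals and weights are guesses for the sake of illustration, not the app’s actual formula.

```python
# A guess at how a delivery app might front-load a menu: blend a few
# signals into one score. The signals and weights below are invented.
from datetime import datetime

def menu_score(item, now):
    score = 0.5 * item["times_you_ordered_recently"]   # your habits
    score += 0.3 * item["popularity_nearby"]           # "popular near you"
    if item["meal"] == ("lunch" if now.hour < 16 else "dinner"):
        score += 0.2                                   # the time of day
    return score

menu = [
    {"name": "pad thai", "times_you_ordered_recently": 3, "popularity_nearby": 0.4, "meal": "dinner"},
    {"name": "salad",    "times_you_ordered_recently": 0, "popularity_nearby": 0.9, "meal": "lunch"},
]
now = datetime(2022, 7, 25, 19, 0)  # dinnertime
print(sorted(menu, key=lambda m: menu_score(m, now), reverse=True)[0]["name"])  # pad thai
```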
Scholars have come up with various terms to define our fitful relationship with algorithmic technology. In a 2017 paper, Taina Bucher, a professor at the University of Oslo, collected aggrieved tweets about Facebook’s feed as a record of what she called an emerging “algorithmic imaginary.” One user wondered why her searches for a baby-shower gift had seemingly prompted ads for pregnancy-tracking apps. A musician was frustrated that his posts sharing new songs were getting little attention, despite his best attempts to optimize for promotion by, say, including exclamatory phrases such as “Wow!” There was a “structure of feeling” developing around the algorithm, Bucher told me, adding, “People were noticing that there was something about these systems that had an impact on their lives.” Around the same time, Tarleton Gillespie, an academic who works for Microsoft’s research subsidiary, described how users were learning to shape what they posted to maximize their “algorithmic recognizability,” an effort that he compared to a speaker “turning toward the microphone” to amplify her voice. Content lived or died by S.E.O., or search-engine optimization, and those who learned to exploit its rules acquired special powers. Gillespie cited, as an example, the advice columnist Dan Savage’s successful 2003 campaign to overwhelm the Google search results for Rick Santorum, the right-wing senator, with a vulgar sexual neologism.
“Algorithmic anxiety,” however, is the most apt phrase I’ve found for describing the unsettling experience of navigating today’s online platforms. Shagun Jhaver, a scholar of social computing, helped define the phrase while conducting research and interviews in collaboration with Airbnb in 2018. Of fifteen hosts he spoke to, most worried about where their listings were appearing in users’ search results. They felt “uncertainty about how Airbnb algorithms work and a perceived lack of control,” Jhaver reported in a paper co-written with two Airbnb employees. One host told Jhaver, “Lots of listings that are worse than mine are in higher positions.” On top of trying to boost their rankings by repainting walls, replacing furniture, or taking more flattering photos, the hosts also developed what Jhaver called “folk theories” about how the algorithm worked. They would log on to Airbnb repeatedly throughout the day or constantly update their unit’s availability, suspecting that doing so would help get them noticed by the algorithm. Some inaccurately marked their listings as “child safe,” in the belief that it would give them a bump. (According to Jhaver, Airbnb couldn’t confirm that it had any effect.) Jhaver came to see the Airbnb hosts as workers being overseen by a computer overlord instead of human managers. In order to make a living, they had to guess what their capricious boss wanted, and the anxious guesswork may have made the system less efficient over all.
The Airbnb hosts’ concerns were rooted in the challenges of selling a product online, but I’m most interested in the similar feelings that plague those, like Valerie Peter, who are trying to figure out what to consume. To that end, I recently sent out a survey about algorithms to my online friends and followers; the responses I received, from more than a hundred people, formed a catalogue of algorithmic anxieties. Answering a question about “odd run-ins” with automated recommendations, one user reported that, after he became single, Instagram began recommending the accounts of models, and another had been mystified to see the Soundgarden song “Black Hole Sun” pop up on every platform at once. Many complained that algorithmic recommendations seemed to crudely simplify their tastes, offering “worse versions of things I like that have certain superficial similarities,” as one person put it. All but five answered “yes” to the question, “Has ‘the algorithm,’ or algorithmic feeds, taken up more of your online experience over the years?” One wrote that the problem had become so pervasive that they’d “stopped caring,” but only because they “didn’t want to live with anxiety.”
Patricia de Vries, a research professor at Gerrit Rietveld Academie who has written about algorithmic anxiety, told me, “Just as the fear of heights is not about heights, algorithmic anxiety is not simply about algorithms.” Algorithms would not have the power they have without the floods of data that we voluntarily produce on sites that exploit our identities and preferences for profit. When an ad for bras or mattresses follows us around the Internet, the culprit is not just the recommendation algorithm but the entire business model of ad-based social media that billions of people participate in every day. When we talk about “the algorithm,” we might be conflating recommender systems with online surveillance, monopolization, and the digital platforms’ takeover of all of our leisure time—in other words, with the entire extractive technology industry of the twenty-first century. Bucher told me that the idea of the algorithm is “a proxy for technology, and people’s relationships to the machine.” It has become a metaphor for the ultimate digital Other, a representation of all of our uneasiness with online life.
Users can’t be blamed for misunderstanding the limits of algorithms, because tech companies have gone out of their way to keep their systems opaque, both to manage user behavior and to prevent trade secrets from being leaked to competitors or co-opted by bots. Krishna Gade took a job at Facebook just after the 2016 election, working to improve news-feed quality. While there, he developed a feature, called “Why am I seeing this post?,” that allowed a user to click a button on any item that appeared in her Facebook feed and see some of the algorithmic variables that had caused the item to appear. A dog photo might be in her feed, for example, because she “commented on posts with photos more than other media types” and because she belonged to a group called Woofers & Puppers. Gade told me that he saw the feature as fostering a sense of transparency and trust. “I think users should be given the rights to ask for what’s going on,” he said. At the least, it offered users a striking glimpse of how the recommender system perceived them. Yet today, on Facebook’s Web site, the “Why am I seeing this post?” button is available only for ads. On the app it’s included for non-ad posts, too, but, when I tried it recently on a handful of posts, most said only that they were “popular compared to other posts you’ve seen.”
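It is possible to imagine the rough shape of such a feature: log the signals that pushed an item into the feed, then surface the strongest ones. The signal names and weights in this sketch are invented; Facebook’s actual ranking system is not public.

```python
# A sketch of what a "Why am I seeing this post?" feature might do under
# the hood: surface the strongest signals behind a ranking decision. The
# signal names and weights are invented; Facebook's system is not public.

def explain(signals, top_n=2):
    """Return the highest-weighted reasons a post was shown."""
    ranked = sorted(signals.items(), key=lambda kv: -kv[1])
    return [name for name, weight in ranked[:top_n]]

dog_photo_signals = {
    "you comment on photo posts more than other media types": 0.8,
    "you belong to the group Woofers & Puppers": 0.6,
    "the post is recent": 0.3,
}
print(explain(dog_photo_signals))
# ['you comment on photo posts more than other media types',
#  'you belong to the group Woofers & Puppers']
```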
In the absence of reliable transparency, many of us have devised home remedies for managing the algorithm’s influence. Like the Airbnb hosts, we adopt hacks that we hope might garner us promotion on social media, like a brief trend, some years ago, of users prefacing their Facebook posts with fake engagement or wedding announcements. We try to teach recommender systems our preferences by thumbs-downing films we don’t like on Netflix or flipping quickly past unwanted TikTok videos. It doesn’t always work. Valerie Peter recalled that, after she followed a bunch of astrology-focussed accounts on Twitter, her feed began recommending a deluge of astrological content. Her interest in the subject quickly faded—“I began fearing for my life every time Mercury was in retrograde,” she said—but Twitter kept pushing related content. The site has a button that users can hit to signal that they are “Not interested in this Tweet,” appended with a sad-face emoji, but when Peter tried it she found that Twitter’s suggested alternatives were astrology-related, too. “I’ve been trying for a month or two now, but I keep seeing them,” she said. The algorithm gathers information and silently makes decisions for us, but offers little opportunity to communicate back. In the midst of my work on this piece, Gmail’s sorting algorithm decided that an e-mail of fact-checking materials I’d sent to my editor was spam and disappeared it from my “Sent” folder, something I’d never previously experienced and would prefer not to have happen again.
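One plausible reason the button underwhelms: if a system models a user’s interests as a cluster of related topics, demoting a single topic barely dents the cluster. The sketch below is speculative (the topics, weights, and penalty are invented), but it captures the dynamic Peter described.

```python
# A speculative model of why "Not interested" fails to stick: if
# interests are tracked as clusters of related topics, demoting one
# topic barely dents the cluster. All values here are invented.

interests = {"astrology": 0.9, "horoscopes": 0.85, "tarot": 0.7, "fashion": 0.4}
related = {"astrology": ["horoscopes", "tarot"]}

def not_interested(topic, penalty=0.2):
    """Demote the flagged topic; related topics get only a nudge."""
    interests[topic] = max(0.0, interests[topic] - penalty)
    for neighbor in related.get(topic, []):
        interests[neighbor] = max(0.0, interests[neighbor] - penalty / 4)

not_interested("astrology")
print(sorted(interests.items(), key=lambda kv: -kv[1]))
# Astrology-adjacent topics still dominate, so the content keeps coming
# (values approximate):
# [('horoscopes', 0.8), ('astrology', 0.7), ('tarot', 0.65), ('fashion', 0.4)]
```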
Lately, I have been drawn toward corners of the Internet that are not governed by algorithmic recommendations. I signed up for Glass, a photo-sharing app that caters to professional photographers but is open to anyone. My feed there is quiet, pristine, and entirely chronological, featuring mostly black-and-white city snapshots and wide color landscapes, a mix reminiscent of the early days of Flickr (even if the predominant aesthetic of photography today has been shaped by iPhone-camera-optimization algorithms). I can’t imagine having such a pleasant experience these days on Instagram, where my feed has been overtaken by irritating recommended videos as the platform attempts to mimic TikTok. (Why does the algorithm think I like watching motorcycle stunts?) The only problem with Glass is that there isn’t enough content for me to see, because my friends haven’t joined yet. The gravitational pull of the major social networks is hard to overcome. Since Twitter did away with the desktop version of TweetDeck, which I had used to access a chronological version of my feed, I’ve been relying more on Discord, where my friends gather in chat rooms to swap personal recommendations and news items. But the reality is that much of what I encounter on Discord has been curated from the feeds of traditional platforms. These new spaces on the Internet are a buffer against the influence of algorithms, not a blockade.
In Tashjian’s newsletter, she advised Peter to explore her own tastes outside of social-media feeds. “You have to adopt a rabbithole mentality! Read the footnotes and let one footnote lead to another,” Tashjian wrote. Maybe you find a film that you like, she suggested, and watch all of that director’s other films. Maybe you discover that you want a nightgown and “find a pretty good imitation” of a great one on Etsy. Of course, so many exploratory paths through culture are mediated by algorithms, too. When I went to Etsy’s home page the other day, I was greeted with a display of automatically generated recommendations labelled “New items our editors love.” Perhaps owing to some quirk of my Internet browsing history, these included tote bags with German-language slogans and monogrammed travel mugs. Is there a human curator out there who actually loves these things? When they start popping up in my Instagram feed, will I learn to love them, too? You’d think the algorithm would know me better by now. ♦