How independent writers are turning to AI

On a Tuesday in mid-March, Jennifer Lepp was precisely 80.41 percent finished writing Bring Your Beach Owl, the latest installment in her series about a detective witch in central Florida, and she was behind schedule. The color-coded, 11-column spreadsheet she keeps open on a second monitor as she writes told her just how far behind: she had three days to write 9,278 words if she was to get the book edited, formatted, promoted, uploaded to Amazon’s Kindle platform, and in the hands of eager readers who expected a new novel every nine weeks.

Lepp became an author six years ago, after deciding she could no longer stomach having to spout “corporate doublespeak” to employees as companies downsized. She had spent the prior two decades working in management at a series of web hosting companies, where she developed disciplined project management skills that have translated surprisingly well to writing fiction for Amazon’s Kindle platform.

Like many independent authors, she found in Amazon’s self-service publishing arm, Kindle Direct Publishing, an unexpected avenue into a literary career she had once dreamed of and abandoned. (“Independent” or “indie” author are the preferred terms for writers who self-publish commercially, free of the vanity-press connotations of “self-published.”) “It’s not Dostoevsky,” Lepp said of her work, but she takes pride in delivering enjoyable “potato chip books” to her readers, and they reward her with an annual income that can reach the low six figures.

However, being an Amazon-based author is stressful in ways that will look familiar to anyone who makes a living on a digital platform. In order to survive in a marketplace where infinite other options are a click away, authors need to find their fans and keep them loyal. So they follow readers to the microgenres into which Amazon’s algorithms classify their tastes, niches like “mermaid young adult fantasy” or “time-travel romance,” and keep them engaged by writing in series, each installment teasing the next, which already has a title and set release date, all while producing a steady stream of newsletters, tweets, and videos. As Mark McGurl writes in Everything and Less, his recent book on how Amazon is shaping fiction, the Kindle platform transformed the author-reader relationship into one of service provider and customer, and the customer is always right. Above all else, authors must write fast.

Lepp, who writes under the pen name Leanne Leeds in the “paranormal cozy mystery” subgenre, allots herself precisely 49 days to write and self-edit a book. This pace, she said, is just on the cusp of being unsustainably slow. She once surveyed her mailing list to ask how long readers would wait between books before abandoning her for another writer. The average was four months. Writer’s block is a luxury she can’t afford, which is why as soon as she heard about an artificial intelligence tool designed to break through it, she started beseeching its developers on Twitter for access to the beta test.

The tool was called Sudowrite. Designed by developers turned sci-fi authors Amit Gupta and James Yu, it’s one of many AI writing programs built on OpenAI’s language model GPT-3 that have launched since the model was opened to developers last year. But where most of these tools are meant to write company emails and marketing copy, Sudowrite is designed for fiction writers. Authors paste what they’ve written into a soothing sunset-colored interface, select some words, and have the AI rewrite them in an ominous tone, or with more inner conflict, or propose a plot twist, or generate descriptions that draw on every sense, plus metaphor.

Eager to see what it could do, Lepp selected a 500-word chunk of her novel, a climactic confrontation in a swamp between the detective witch and a band of pixies, and pasted it into the program. Highlighting one of the pixies, named Nutmeg, she clicked “describe.”

“Nutmeg’s hair is red, but her bright green eyes show that she has more in common with creatures of the night than with day,” the program returned.

Lepp was impressed. “Holy crap,” she tweeted. Not only had Sudowrite picked up that the scene Lepp had pasted took place at night, but it had also gleaned that Nutmeg was a pixie and that Lepp’s pixies have brightly colored hair.

She wasn’t sure how she felt about using AI, but like many independent authors, she was always quick to adopt technologies that could help streamline her operation. She had already compiled a database of novels to search when she felt she was overusing a phrase and wanted to see how other authors finished the sentence. She told herself she would use Sudowrite the same way — just inspiration, no cutting and pasting its prose. For an independent author, a small increase in production can yield big returns.

Language models like GPT-3 are word-prediction machines. Fed an enormous amount of text, the model adjusts its billions of initially randomized mathematical parameters until, when presented with new text, it does a pretty good job of predicting what words come next. This method gets it surprisingly far. By training on far more text and using far more parameters than past models, GPT-3 gained at least the partial ability to do basic arithmetic, translate languages, write working code — despite never having been explicitly trained in math, translation, or programming — and write plausibly human-seeming prose.
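
To make that prediction idea concrete, here is a toy sketch in Python, my own illustration rather than anything resembling GPT-3’s actual machinery: it counts which word tends to follow which in a scrap of training text, then “continues” a prompt by repeatedly picking the most common follower. GPT-3 does something conceptually similar, only with billions of learned parameters in place of a table of counts.

```python
# A toy next-word predictor: count word pairs in a training text, then
# continue a prompt by repeatedly picking the most common follower.
# This only illustrates the prediction idea; GPT-3 learns billions of
# neural-network parameters instead of tallying pairs.
from collections import Counter, defaultdict

training_text = (
    "the witch walked into the swamp and the pixie followed the witch "
    "into the swamp at night the pixie watched the witch"
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training, if any."""
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

def continue_text(prompt, length=5):
    """Extend the prompt one predicted word at a time."""
    out = prompt.split()
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(continue_text("the pixie"))  # extends the prompt with statistically likely words
```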

But ultimately, GPT-3’s entire world is words or, to be precise, mathematical representations of common sequences of characters called tokens — and that can cause it to behave strangely. It might happen to give sensible responses when asked about something people have written abundantly and correctly about. But ask which is heavier, a goldfish or a whale, and it will tell you a goldfish. Or ask what Napoleon said about hamburgers, and it will say, “Hamburgers are the food of the gods.” It’s just making a guess based on statistical patterns in language, which may or may not correspond to the world as humans understand it. Like a good bullshitter, it’s better at form and style than substance. Even when writing fiction, where factuality is less of an issue, there’s an art to getting it to do what you want.

The pseudonymous researcher and writer Gwern Branwen calls it “prompt programming,” a term that’s been adopted by AI-using writers. For example, ask GPT-3 to write Harry Potter in the style of Ernest Hemingway, as Branwen did, and it might produce profane reviews or a plot summary in Chinese or total nonsense. But write a few lines of Hemingway-esque Potter fanfiction, and the model seems to grasp what you mean by “style” and keep going. It can then go on to write Harry Potter in the style of P.G. Wodehouse, Jane Austen, and so on. It requires a strange degree of sympathy with the machine, thinking about the way it works and how it might respond to your query. Branwen wrote that it’s a bit like trying to teach tricks to a superintelligent cat.
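
Branwen’s trick is easier to see written out as an actual prompt. Below is a minimal sketch, assuming the pre-1.0 openai Python client that was current when GPT-3 opened to developers; the sample passage, the placeholder API key, and the settings are mine, for illustration only.

```python
# Few-shot "prompt programming": rather than asking for a style by name,
# show the model a short passage already written in that style and let it
# continue the pattern. Assumes the pre-1.0 openai client's Completion API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# The Hemingway-esque lines below are a stand-in pastiche written for this
# sketch, not Branwen's actual text.
prompt = (
    "Harry Potter, retold in the terse style of Ernest Hemingway:\n"
    "The boy lived under the stairs. The cupboard was small and he did not "
    "complain. An owl came with a letter. He read it and said nothing.\n\n"
    "Harry Potter, retold in the breezy style of P.G. Wodehouse:\n"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=150,
    temperature=0.8,    # higher temperature gives looser, more varied prose
)
print(response.choices[0].text)
```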

To create Sudowrite, Gupta and Yu collected plot twists from short stories and synopses of novels, presenting them to GPT-3 as examples. For descriptions, they wrote sentences about smells, sounds, and other senses so that GPT-3 would know what’s being asked of it when a writer clicks “describe.”

And it does generally seem to understand the assignment, though it sometimes takes it in unexpected directions. For instance, Lepp found that the program had a tendency to bestow swords on her characters. Despite there not really being any swords in her version of magical Florida, it would have characters unsheathing blades mid-conversation or fondling hilts as they sat on the porch.

She figures this is because the model was likely trained on far more examples of high fantasy than the much smaller genre of paranormal cozy mystery, so when it sees her writing about magic, it assumes some sword unsheathing and hilt fondling is going to happen. Or, if it sees a pixie and a vampire talking in a parking lot, Lepp said, it’s going to have someone get bit, despite the fact that Lepp’s vampire is a peaceful patron of blood banks. And one can only imagine the size of the romance dataset because it’s constantly trying to make her characters have sex. “I get a lot of, ‘He grabbed her shoulder and wrapped her in his arms,’” Lepp said. “I write cozies! Nobody’s breathing heavily in my books unless they’re jogging.”

There were weirder misfires, too. Like when it kept saying the Greek god Apollo’s “eyes were as big as a gopher’s” or that “the moon was truly mother-of-pearl, the white of the sea, rubbed smooth by the groins of drowned brides.”

Or when it exuberantly overextended metaphors: “Alice closed her eyes and sighed, savoring the moment before reality came back crashing down on them like the weight of an elephant sitting on them both while being eaten by a shark in an airplane full of ninjas puking out their eyes and blood for no apparent reason other than that they were ninjas who liked puke so much they couldn’t help themselves from spewing it out of their orifices at every opportunity.”

A machine learning engineer would call these “hallucinations,” but Lepp, who had begun to refer to Sudowrite affectionately as Skynet — with a personality that was “more cat than dog because it does what it wants” — referred to them as moments when Skynet was drunk.

Gradually, Lepp figured out how to steer the AI. She likened the process to divination. She had to edit and revise its output. But, even then, she found that it lightened the load of a job that, as much as she loved it, was mentally draining. She no longer ended each day exhausted, struggling to summon the prose she needed to hit her target. The words came easier.

When she started using the program, she had told herself she wouldn’t use anything it provided unedited. But she got more comfortable with the idea as she went along.

It’s just words, she thought. It’s my story, my characters, my world. I came up with it. So what if a computer wrote them?

“You are already an AI-assisted author,” Joanna Penn tells her students on the first day of her workshop. Do you use Amazon to shop? Do you use Google for research? “The question now is how can you be more AI-assisted, AI-enhanced, AI-extended.”

Penn, an independent novelist and one of the most outspoken proponents of AI writing, launched her online class last fall to acquaint writers with the growing suite of AI tools at their disposal. She introduces students to AI that will analyze their plot’s structure and recommend changes, AI editors, and other services. She also tries to put her students at ease with what she sees as an inevitable, impending change in what it means to be an author, something not all writers welcome.

“I’ve had more pushback in the last year from the fiction community than I’ve ever had before,” she said. She logged off Twitter for a time because she was receiving so much vitriol. Writers accused her of hastening their replacement by a “magic button that creates a novel,” or of publicizing technology that spammers will use to flood Amazon with generated books, or of violating what Penn sees as a misguided sense of purity: that writing should come from your unique, unaided brain.

The reality, she said, is that AI is advancing regardless of whether novelists want it to, and they can choose to use it or be left behind. Right now, she uses Sudowrite as what she calls an “extended thesaurus.” (There are only so many ways to describe a crypt, she said.) But she foresees a future where writers are more akin to “creative directors,” giving AI high-level instruction and refining its output. She imagines fine-tuning a model on her own work or entering into a consortium of other authors in her genre and licensing out their model to other writers. AI is already being used in photography and music, she said. “Writing is possibly the last art form to be disrupted because it’s so traditional.”

But photo-altering AI tools, to take one of Penn’s examples, did change how people consume and produce photography in ways that are still being sorted out, from getting rid of the assumption that photos depict reality to creating new aesthetics of deliberately imperfect authenticity. Whatever changes AI writing will bring, they are only just beginning, and people’s intuitions are all over the map. When Lepp told readers she was experimenting with AI, one emailed to inform her that if she used it more than 50 percent of the time, she was “cheating.”

In an attempt to establish some standards around AI writing — and thereby help normalize it — Penn reached out to Orna Ross, a historical fiction writer and the founder of the Alliance of Independent Authors, a UK professional organization. Ross’ previous stance on AI was theoretical wariness regarding the morass of copyright issues AI would raise whenever it got good enough to write books. But as soon as Penn showed her Sudowrite, Ross saw the appeal, and together they began soliciting feedback from their peers with the aim of formulating a code of ethical AI conduct.

Rather than decide on strict rules for a technology whose use is still in flux, they ended up listing broad guidelines and leaving it up to authors to make their own ethical decisions. The code reminds writers that “humans remain responsible agents” and must edit and curate anything produced by AI to ensure it isn’t discriminatory or libelous. Writers shouldn’t cut and paste generated text “willy nilly.” The use of AI should be disclosed to readers “where appropriate,” the guidelines read, though, as with so much else, precisely where that line is drawn is left to the author.

AI may just be another tool, but authors haven’t previously felt the need to remind themselves that they — and not their thesaurus — are responsible for their writing or have fraught debates over whether to disclose their use of spellcheck. Something about the experience of using AI feels different. It’s apparent in the way writers talk about it, which is often in the language of collaboration and partnership. Maybe it’s the fact that GPT-3 takes instruction and responds in language that makes it hard not to imagine it as an entity communicating thoughts. Or maybe it’s because, unlike a dictionary, its responses are unpredictable. Whatever the reason, AI writing has entered an uncanny valley between ordinary tool and autonomous storytelling machine. This ambiguity is part of what makes the current moment both exciting and unsettling.

“Using the tool is like having a writing partner,” Ross said. “A crazy one, completely off the wall, crazy partner who throws out all sorts of suggestions, who never gets tired, who’s always there. And certainly in the relationship that I have, I’m in charge.”

She wants it to stay “crazy” even if it means sorting through a fair amount of useless text. She likes that its hallucinatory weirdness sends her in unexpected directions, but it also reassures her that she’s the one guiding the story. Like any collaboration, working with AI brings with it both the possibility of creative frisson and new questions of influence and control.

“You want it maybe a little bit more reined in but not fully reined in because then they cease to be enjoyable,” Ross said. “Then they cease to be tools and become something else.”

Lepp soon fell into a rhythm with the AI. She would sketch an outline of a scene, press “expand,” and let the program do the writing. She would then edit the output, paste it back into Sudowrite, and prompt the AI to continue. If it started to veer in a direction she didn’t like, she nudged it back by writing a few sentences and setting it loose again. She found that she no longer needed to work in complete silence and solitude. Even better, she was actually ahead of schedule. Her production had increased by 23.1 percent.

When she finished the first chapter, she sent it to her “beta readers” — a group that offers early feedback — with special instructions to highlight anything that sounded off or out of character. Nothing seemed amiss.

“That was kind of creepy,” she said. “It starts to make you wonder, do I even have any talent if a computer can just mimic me?”

Worse, some of the sentences her readers highlighted as being particularly good had come from the machine.

But the most disconcerting moment came when she gave the chapter to her husband to read. “He turned to her and said, ‘Wow, you put our favorite sushi restaurant in here,’” Lepp recalled. She hadn’t. It was a scene that was written by the AI.

They went back and forth. “He was insistent,” she said. “I was like, ‘I didn’t write that. I swear to you, I didn’t write it.’ I think that was the first thing that started making me uncomfortable, that something could mimic me with such accuracy that the man I was married to, who knew me better than any person on the planet, couldn’t tell the difference.”

Maybe she was being paranoid, Lepp said, looking back. There are probably a lot of sushi restaurants that could be described as having well-lit booths and wood paneling. But, soon, she noticed other changes. Writing, for her, had always been a fully immersive process. She would dream about her characters and wake up thinking about them. As the AI took on more of the work, she realized that had stopped.

“I started going to sleep, and I wasn’t thinking about the story anymore. And then I went back to write and sat down, and I would forget why people were doing things. Or I’d have to look up what somebody said because I lost the thread of truth,” she said. Normally, she wove a subtle moral lesson through her novels; it’s something her readers liked. But by chapter three, she realized she had no idea what this book’s would be, and she found a moral theme wasn’t something she could go back and retroactively insert. Rather than guiding the AI, she started to think she had “followed the AI down the rabbit hole.”

“It didn’t feel like mine anymore. It was very uncomfortable to look back over what I wrote and not really feel connected to the words or the ideas.”

Authors interested in pushing the boundaries of automated writing can come to Darby Rollins, founder of The AI Author workshop. Rollins specializes in the thriving Kindle genre of expert how-tos: how to network on LinkedIn; how to beat stress and be happy; how to become a bestselling Kindle author. An e-commerce consultant based in Austin, Texas, Rollins had always wanted to write a book but had never known when he could find the time. Then, last year, a friend who worked at an AI writing company called Jasper.ai showed him the tool he’d been developing.

Rollins saw potential for it to do so much more than generate marketing copy and product reviews. “I’m just gonna say screw it,” he recalled. “I’m gonna put my head down this weekend, and I’m gonna write a book.” Forty-eight hours later, he had written Amazon Copywriting Secrets and put it up for sale on the Kindle marketplace.

Rollins now co-runs a workshop teaching others to do the same. Most of his students are not attending out of a desire to start literary careers; they are e-commerce consultants like himself or realtors or financial advisers or self-help gurus interested primarily in what having written a book will do for their business. In his workshop, Rollins uses the phrase “minimum viable book.”

“It’s going to be a content cornerstone for your marketing,” Rollins explained. Especially if the book is oriented around often-Googled questions and shows up in search. “Now you’re a thought leader, you’re an expert, you’re an authority, you have more credibility on a topic because you have a book in your hand.”

Getting AI to write a book requires working around its limitations. For one, while Jasper is less likely to lapse into absurd hallucinations than Sudowrite, its voice is far more constrained. Users can tell it to write in whatever style they want, but no matter what I entered, it seemed to speak in what I can only describe as the voice of content itself: upbeat; casually familiar yet confidently expert; extremely enthusiastic. It has a tendency to bring up Elon Musk and hustling.

A more fundamental issue is that the longer a text gets, the more language models struggle. Simply predicting which words most likely come next, without an understanding of the subject matter, makes it hard to craft a coherent argument or narrative. GPT-3’s ability to structure longer texts is further limited by the fact that it lacks the memory to actually read a book or any text longer than 1,500 words or so — though because it has ingested summaries and commentary, it can often discuss popular books with passable coherence. But ask GPT-3 to write an essay, and it will produce a repetitive series of sometimes correct, often contradictory assertions, drifting progressively off-topic until it hits its memory limit and completely forgets where it started.

Jasper gets around these obstacles by using templates that recursively feed GPT-3’s output back on itself. For example, you give it a topic — a review of the best socks in the world — and have Jasper write an outline of the review, then a paragraph about each point in its outline, and then a conclusion summarizing it all. It’s not unlike the formula for five-paragraph essays taught in high school, and it yields similar results: generic yet intelligible with a smattering of wildly wrong facts. “These socks can be worn in any conditions,” it boasts, inventing features like heat-reflecting liners and state-of-the-art moisture wicking. “Sock innovation has come a long way since the first sock was created around 5500 BC by Mesopotamians.”
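
Jasper’s internals aren’t public, but the recursive-template idea itself is simple enough to sketch. The outline below assumes a hypothetical complete() helper standing in for whatever completion API you use; the point is the structure, in which each round of output is folded into the next prompt.

```python
# A sketch of recursive templating: outline first, then one paragraph per
# outline point, then a conclusion over everything generated so far.
def complete(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to a completion API, return its text."""
    raise NotImplementedError("wire this up to the completion API of your choice")

def write_review(topic: str, num_points: int = 5) -> str:
    # Step 1: ask for an outline of the post.
    outline = complete(
        f"Write a {num_points}-point outline for a blog post reviewing {topic}. "
        "One point per line."
    )
    points = [line.strip("-• ").strip() for line in outline.splitlines() if line.strip()]

    # Step 2: expand each outline point into its own paragraph. Each prompt
    # carries only the topic and one point, keeping it inside the model's
    # short memory window.
    sections = []
    for point in points[:num_points]:
        sections.append(complete(
            f"Blog post about {topic}.\nSection heading: {point}\n"
            "Write one informative paragraph for this section."
        ))

    # Step 3: ask for a conclusion over the generated sections.
    # (In practice you might summarize them first to stay under the memory limit.)
    conclusion = complete(
        f"Blog post about {topic}.\nSections so far:\n\n" + "\n\n".join(sections) +
        "\n\nWrite a short conclusion summarizing the post."
    )
    return "\n\n".join([outline] + sections + [conclusion])
```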

In Rollins’ template for “the perfect nonfiction book,” the author provides Jasper with a short summary of their topic. Then, Jasper writes “a compelling personal story” about it, followed by some text about “the problem you’re solving,” “the history of the problem,” and so on. It’s necessarily formulaic. The more of the writing process you automate, the more generic you have to be in order to keep the AI on track.

This is true of both form and content. When people have approached Rollins about generating a memoir, he’s turned them down. It’s too specific. But stick to a topic like selling on Amazon, optimizing websites for Google, or self-help, and Jasper produces startlingly adequate copy. There is so much similar how-to writing out there that the AI has plenty to pull from.

Is that so different from what humans do? Rollins wonders. “There’s arguments that no one’s ever thought of an original new thought in a century,” he said. “Everything that’s been said has been said, that we were all just saying stuff that’s a regurgitation of what somebody else has said. So are we really being original in any of our thoughts? Or do we take a thought and then put our own unique perspective on it?”

He isn’t sure. Recently, he’s been working on a novel. It’s about a unicorn who has to defend the world of “Pitchlandia” from the “9-to-5 virus” that siphons creativity. Rollins designed a new template for it, based on Joseph Campbell’s hero cycle, and some of the things Jasper provides make him wonder. A universe of unicorns where each has a “side hustle” and formed a league to protect the realm? “It’s probably developing that from some other concept, all these massive hits follow essentially the same format,” he said. “But you put a different spin on it, and you create a new story.”

In any case, originality isn’t the primary objective for people using Jasper. They’re using it to generate Google-optimized blog posts about products they’re selling or books that will serve as billboards on Amazon or Twitter threads and LinkedIn posts to establish themselves as authorities in their field. That is, they’re using it not because they have something to say but because they need to say something in order to “maintain relevance” — a phrase I heard from AI-using novelists as well — on platforms already so flooded with writing that algorithms are required to sort it. It raises the prospect of a dizzying spiral of content generated by AI to win the favor of AI, all of it derived from existing content rather than rooted in fact or experience, which wouldn’t be so different from the internet we have now. As one e-commerce Jasper user pointed out, it would be naive to believe most top 10 lists for any product you Google, and that would be true whether they were written by AI mimicking existing content or by marketers doing the same.

The first thing I realized reporting this story is that there’s a good chance I’ve unwittingly read AI-written content in the wild. In Facebook groups, I’ve seen people show off generated lists of plausible travel tips, pillow reviews, diets, mental health advice, LinkedInspiration, and YouTube mindfulness meditations. It will soon be everywhere, if it isn’t already. A few coherent paragraphs are no longer a certificate of human authorship.

The second thing I realized is that it might not be such a bad thing to have to apply a Turing test to everything I read, particularly in the more commercialized, marketing-driven corners of the internet where AI text is most often deployed. The questions it made me ask were the sorts of questions I should be asking anyway: Is this supported by facts, internally consistent, and original, or is it coasting on pleasant-sounding language and rehashing conventional wisdom? How much human writing meets that standard? How often am I reading with enough attention to notice? If this is the epistemic crisis AI-generated text is going to bring, maybe it’s a healthy one.

As a writer, it’s hard to use these programs and not wonder how you would fare in such a test. So I opened the Jasper blog template and told it to generate some topic ideas about AI writing programs. “How AI writing programs are changing the way we write” was its dispiritingly familiar first option.

“As AI writing programs continue to evolve and improve, they may eventually replace human writers altogether. While this may be true in some cases, it is more likely that AI writing programs will simply supplement human writing skills,” it wrote. “However, to make sure human writers continue to be relevant in this changing world of technology, it is important that these computers do not take over your job! What unique skills or perspectives do YOU bring as a writer?”

Lepp adjusted her approach after her alienating experience following the program’s lead. She still uses Sudowrite, but she keeps it on a shorter leash. She pastes everything she’s written so far into the program, leaves a sentence half-finished, and only then lets it write. Or she gives it the basics of a scene and tells it to write a description of something specific.

“Like I know we’re going into the lobby, and I know that this lobby is a secret paranormal fish hospital for nyads, but I don’t particularly care what that looks like other than that there’s two big fish tanks with tons of fish and it’s high-end,” she explained. So she tells it that, and it gives her 150 words about crystal chandeliers, gold etching, and marble. “My time is better spent on the important aspects of the mystery and the story than sitting there for 10 minutes trying to come up with the description of the lobby.”

She’s a little embarrassed to say she’s become reliant on it. Not that she couldn’t write without it, but she thinks her writing wouldn’t be as rich, and she would certainly be more burnt out. “There’s something different about working with the AI and editing those words, and then coming up with my own and then editing it, that’s much easier. It’s less emotionally taxing. It’s less tiresome; it’s less fatiguing. I need to pay attention much less closely. I don’t get as deeply into the writing as I did before, and yet, I found a balance where I still feel very connected to the story, and I still feel it’s wholly mine.”

With the help of the program, she recently ramped up production yet again. She is now writing two series simultaneously, toggling between the witch detective and a new mystery-solving heroine, a 50-year-old divorced owner of an animal rescue who comes into possession of a magical platter that allows her to communicate with cats. It was an expansion she felt she had to make just to stay in place. With an increasing share of her profits going back to Amazon in the form of advertising, she needed to stand out amid increasing competition. Instead of six books a year, her revised spreadsheet forecasts 10.

Nevertheless, she understands the fears of her fellow authors. For Lepp and her peers, ebooks created an unexpected chance to vault mid-career into a dream job. Reader expectations and Amazon’s algorithms have demanded ever-faster output, and they’ve worked hard to keep up. AI may offer a lifeline now, but what happens when the programs get better — how much more acceleration can authors take? “There’s a concern that we just got our foot in the door; we just got the ability to do this,” she said. “I think everybody’s afraid because we cannot sustain a pace against a computer.”

The technology isn’t there yet. She thinks more fully automating fiction right now would produce novels that are too generic, channeled into the grooves of the most popular plots. But, based on the improvement she’s seen over the year she’s been using Sudowrite, she doesn’t doubt that it will get there eventually. It wouldn’t even have to go far. Readers, especially readers of genre fiction, like the familiar, she said, the same basic form with a slightly different twist or setting. It’s precisely the sort of thing AI should be able to handle. “I think that’s the real danger, that you can do that and then nothing’s original anymore. Everything’s just a copy of something else,” she said. “The problem is, that’s what readers like.”