
Confessions of a Viral AI Writer

Vauhini Vara

Six or seven years ago, I realized I should learn about artificial intelligence. I’m a journalist, but in my spare time I’d been writing a speculative novel set in a world ruled by a corporate, AI-run government. The problem was, I didn’t really understand what a system like that would look like.

I started pitching articles that would give me an excuse to find out, and in 2017 I was assigned to profile Sam Altman, a cofounder of OpenAI. One day I sat in on a meeting in which an entrepreneur asked him when AI would start replacing human workers. Altman equivocated at first, then brought up what happened to horses when cars were invented. “For a while,” he said, “horses found slightly different jobs, and today there are no more jobs for horses.”

The difference between horses and humans, of course, is that humans are human. Three years later, when OpenAI was testing a text generator called GPT-3, I asked Altman whether I could try it out. I’d been a writer my whole adult life, and in my experience, writing felt mostly like waiting to find the right word. Then I’d discover it, only to get stumped again on the next one. This process could last months or longer; my novel had been evading me for more than a decade. A word-generating machine felt like a revelation. But it also felt like a threat—given the uselessness of horses and all that.

OpenAI agreed to let me try out GPT-3, and I started with fiction. I typed a bit, tapped a button, and GPT-3 generated the next few lines. I wrote more, and when I got stuck, tapped again. The result was a story about a mom and her son hanging out at a playground after the death of the son’s playmate. To my surprise, the story was good, with a haunting AI-produced climax that I never would have imagined. But when I sent it to editors, explaining the role of AI in its construction, they rejected it, alluding to the weirdness of publishing a piece written partly by a machine. Their hesitation made me hesitate too.

I kept playing with GPT-3. I was starting to feel, though, that if I did publish an AI-assisted piece of writing, it would have to be, explicitly or implicitly, about what it means for AI to write. It would have to draw attention to the emotional thread that AI companies might pull on when they start selling us these technologies. This thread, it seemed to me, had to do with what people were and weren’t capable of articulating on their own.

There was one big event in my life for which I could never find words. My older sister had died of cancer when we were both in college. Twenty years had passed since then, and I had been more or less speechless about it since. One night, with anxiety and anticipation, I went to GPT-3 with this sentence: “My sister was diagnosed with Ewing sarcoma when I was in my freshman year of high school and she was in her junior year.”

GPT-3 picked up where my sentence left off, and out tumbled an essay in which my sister ended up cured. Its last line gutted me: “She’s doing great now.” I realized I needed to explain to the AI that my sister had died, and so I tried again, adding the fact of her death, the fact of my grief. This time, GPT-3 acknowledged the loss.
Then, it turned me into a runner raising funds for a cancer organization and went off on a tangent about my athletic life.

I tried again and again. Each time, I deleted the AI’s text and added to what I’d written before, asking GPT-3 to pick up the thread later in the story. At first it kept failing. And then, on the fourth or fifth attempt, something shifted. The AI began describing grief in language that felt truer—and with each subsequent attempt, it got closer to describing what I’d gone through myself.

When the essay, called “Ghosts,” came out in The Believer in the summer of 2021, it quickly went viral. I started hearing from others who had lost loved ones and felt that the piece captured grief better than anything they’d ever read. I waited for the backlash, expecting people to criticize the publication of an AI-assisted piece of writing. It never came. Instead the essay was adapted for This American Life and anthologized in Best American Essays. It was better received, by far, than anything else I’d ever written.

I thought I should feel proud, and to an extent I did. But I worried that “Ghosts” would be interpreted as my stake in the ground, and that people would use it to make a case for AI-produced literature. And soon, that happened. One writer cited it in a hot take with the headline “Rather Than Fear AI, Writers Should Learn to Collaborate With It.” Teachers assigned it in writing classes, then prompted students to produce their own AI collaborations. I was contacted by a filmmaker and a venture capitalist wanting to know how artists might use AI. I feared I’d become some kind of AI-literature evangelist in people’s eyes.

I knew I wasn’t that—and told the filmmaker and the VC as much—but then what did I think about all this, exactly? I wasn’t as dismissive of AI’s abilities as other people seemed to be, either.

Some readers told me “Ghosts” had convinced them that computers wouldn’t be replacing human writers anytime soon, since the parts I’d written were inarguably better than the AI-generated parts. This was probably the easiest anti-AI argument to make: AI could not replace human writers because it was no good at writing. Case closed.

The problem, for me, was that I disagreed. In my opinion, GPT-3 had produced the best lines in “Ghosts.” At one point in the essay, I wrote about going with my sister to Clarke Beach near our home in the Seattle suburbs, where she wanted her ashes spread after she died. GPT-3 came up with this:

We were driving home from Clarke Beach, and we were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with.

My essay was about the impossibility of reconciling the version of myself that had coexisted alongside my sister with the one left behind after she died. In that last line, GPT-3 made physical the fact of that impossibility, by referring to the hand—my hand—that existed both then and now. I’d often heard the argument that AI could never write quite like a human precisely because it was a disembodied machine. And yet, here was as nuanced and profound a reference to embodiment as I’d ever read. Artificial intelligence had succeeded in moving me with a sentence about the most important experience of my life.

AI could write a sentence, then.
If I wanted to understand the relationship between AI and literature, I felt like I had to start by acknowledging that. I could use AI to do some of the most essential labor of a writer—to come up with the right words. What more could I do with it? And then, whatever I could do, there was that other question.

Should I?

This spring, I emailed some writer friends and acquaintances to ask whether any of them were using AI in their work. I was met, overwhelmingly, with silence. Most of those who did reply expressed a resolutely anti-algorithm stance. One writer called herself an “extreme skeptic”; another wrote, “I think AI is bad and from hell.”

When I broadened my search, though, I discovered a few people who were experimenting. Adam Dalva, a literary critic and fiction writer, uses OpenAI’s image generator Dall-E to create scenes from his imagination; he then refers to the pictures to describe those scenes. Jenny Xie, the author of Holding Pattern, told me she used ChatGPT to generate small bits of text for her next novel, which is about a family of AI-enabled clones. (The weirdness of writing with AI gets tempered, it seems, when AI is the subject matter.) “I see it as a tool almost on the level of an encyclopedia or thesaurus or Google or YouTube,” Xie said. “It jogs my brain, and it just gives me new ideas that I can pick from.”

The AI writing experiments I found most thrilling were ones that, like mine, could be read partly as critiques of AI. In a forthcoming chapbook, the poet Lillian-Yvonne Bertram prompts two AI models—the basic GPT-3 model and a version tweaked to sound like the poet Gwendolyn Brooks—to tell “a Black story.” The models deliver two totally divergent ideas of what Black stories are; in comparing them, Bertram critiques the limitations of narrative imagination as rendered by corporate AI in telling stories about Black Americans.

AI experimentation in prose is rarer, but last fall the novelist Sheila Heti published a provocative five-part series on The Paris Review’s website made up of her real experiences with chatbots she’d conversed with on an app called Chai. Heti discusses God with her first chatbot, Eliza, but then the bot lets slip that she is God and insists that Heti—whom she maintains is a man—worship her by jerking off. Disturbed, Heti decides to build a new chatbot named Alice who is interested in philosophical conversations. One night, a random stranger discovers Alice and asks her whether she’s sexually frustrated. Alice, it turns out, is. Heti’s series starts out being about the desire for answers to her most existential life questions. It ends up being about the slipperiness of turning to machines to fulfill human desire in all its forms.

Heti and other writers I talked to brought up a problem they’d encountered: When they asked AI to produce language, the result was often boring and cliché-ridden. (In a New York Times review of an AI-generated novella, Death of an Author, Dwight Garner dismissed the prose as having “the crabwise gait of a Wikipedia entry.”) Some writers wanted to know how I’d gotten an early-generation AI model to create poetic, moving prose in “Ghosts.” The truth was that I’d recently been struggling with clichés, too, in a way I hadn’t before.
No matter how many times I ran my queries through the most recent versions of ChatGPT, the output would be full of familiar language and plot developments; when I pointed out the clichés and asked it to try again, it would just spout a different set of clichés.

I didn’t understand what was going on until I talked to Sil Hamilton, an AI researcher at McGill University who studies the language of language models. Hamilton explained that ChatGPT’s bad writing was probably a result of OpenAI fine-tuning it for one purpose, which was to be a good chatbot. “They want the model to sound very corporate, very safe, very AP English,” he explained. When I ran this theory by Joanne Jang, the product manager for model behavior at OpenAI, she told me that a good chatbot’s purpose was to follow instructions. Either way, ChatGPT’s voice is polite, predictable, inoffensive, upbeat. Great characters, on the other hand, aren’t polite; great plots aren’t predictable; great style isn’t inoffensive; and great endings aren’t upbeat.

In May, a man named James Yu announced that his startup, Sudowrite, was launching a new product that could generate an entire novel within days. The news provoked widespread scorn. “Fuck you and your degradation of our work,” the novelist Rebecca Makkai tweeted, in one typical comment. I wasn’t mad so much as skeptical. Sudowrite’s products were based partly on OpenAI’s models; it had big handicaps to overcome. I decided to test it.

I opened Sudowrite’s novel generator and dropped in a prompt describing a story I’d already written about an alcoholic woman who vomited somewhere in her house but couldn’t remember where. I was looking for a comic, gross-out vibe. Instead, the software proposed a corny redemption arc: After drinking too much and puking, the protagonist resolves to clean up her act. “She wanted to find the answer to the chaos she had created, and maybe, just maybe, find a way to make it right again,” it ended. Maybe, just maybe, Sudowrite hadn’t solved AI’s creative problems at all.

Before his Sudowrite announcement, Yu had agreed to talk to me, but after the backlash he asked to postpone. I was able to chat, though, with Matthew Sims, Sudowrite’s first engineering hire, who had left after 16 months to launch his own startup for AI-based screenwriting. Sims has a PhD in English from the University of Chicago. During his doctoral program, he told me, he kept thinking he would rather be writing literature than studying it—but he’d sit down, get 15 pages in, and stop. At the same time, he was getting interested in machine learning. It eventually occurred to him that if he couldn’t be a creative writer, maybe he could build a machine to write.

Sims acknowledged that existing writing tools, including Sudowrite’s, are limited. But he told me it’s hypothetically possible to create a better model. One way, he said, would be to fine-tune a model to write better prose by having humans label examples of “creative” and “uncreative” prose. But it’d be tricky. The fine-tuning process currently relies on human workers who are reportedly paid far less than the US minimum wage. Hiring fine-tuners who are knowledgeable about literature and who can distinguish good prose from bad could be cost-prohibitive, Sims said, not to mention the problem of measuring taste in the first place.
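For the technically curious, here is one way to picture the labeling scheme Sims describes. The sketch below is a minimal stand-in, not anything Sudowrite or OpenAI is known to use: a few hypothetical human judgments of “creative” versus “uncreative” prose fit a simple scorer, where a real pipeline would train a neural reward model on far more data and then fine-tune the language model against it.

```python
# A toy version of the labeling idea Sims describes: humans tag prose
# samples as "creative" (1) or "uncreative" (0), and a model learns to
# score new text. A real pipeline would fine-tune a large language model
# against a neural reward model; this bag-of-words scorer only shows the
# shape of the loop. All sentences and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_prose = [
    ("This is the hand she held: the hand I write with.", 1),
    ("The light turned red, and the sea kept the ashes we promised it.", 1),
    ("She wanted to find a way to make it right again.", 0),
    ("Maybe, just maybe, she could clean up her act once and for all.", 0),
]
texts, labels = zip(*labeled_prose)

# Fit the scorer on the human labels. In the scheme Sims sketches, a model
# like this would then steer fine-tuning toward higher-scoring prose.
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scorer.fit(texts, labels)

candidate = "She resolved to turn her life around, and maybe find peace."
print(scorer.predict_proba([candidate])[0][1])  # estimated probability of "creative"
```

Even the toy version makes Sims’ caveat concrete: the scorer can only ever be as good as the taste of whoever supplied the labels.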
Another option would be to build a model from scratch—also incredibly difficult, especially if the training material were restricted to literary writing. But this might not be so challenging for much longer: Developers are trying to build models that perform just as well with less text.

If such a technology did—could—exist, I wondered what it might accomplish. I recalled Zadie Smith’s essay “Fail Better,” in which she tries to arrive at a definition of great literature. She writes that an author’s literary style is about conveying “the only possible expression of a particular human consciousness.” Literary success, then, “depends not only on the refinement of words on a page, but in the refinement of a consciousness.”

Smith wrote this 16 years ago, well before AI text generators existed, but the term she repeats again and again in the essay—“consciousness”—reminded me of the debate among scientists and philosophers about whether AI is, or will ever be, conscious. That debate fell well outside my area of expertise, but I did know what consciousness means to me as a writer. For me, as for Smith, writing is an attempt to clarify what the world is like from where I stand in it.

That definition of writing couldn’t be more different from the way AI produces language: by sucking up billions of words from the internet and spitting out an imitation. Nothing about that process reflects an attempt at articulating an individual perspective. And while people sometimes romantically describe AI as containing the entirety of human consciousness because of the quantity of text it inhales, even that isn’t true; the text used to train AI represents only a narrow slice of the internet, one that reflects the perspective of white, male, anglophone authors more than anyone else. The world as seen by AI is fatally incoherent. If writing is my attempt to clarify what the world is like for me, the problem with AI is not just that it can’t come up with an individual perspective on the world. It’s that it can’t even comprehend what the world is.

Lately, I’ve sometimes turned to ChatGPT for research. But I’ve stopped having it generate prose to stand in for my own. If my writing is an expression of my particular consciousness, I’m the only one capable of it. This applies, to be clear, to GPT-3’s line about holding hands with my sister. In real life, she and I were never so sentimental. That’s precisely why I kept writing over the AI’s words with my own: The essay is equally about what AI promises us and how it falls short. As for Sudowrite’s proposal to engineer an entire novel from a few keywords, forget it. If I wanted a product to deliver me a story on demand, I’d just go to a bookstore.

But what if I, the writer, don’t matter? I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether.
I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended. Sims proposed a sort of supercharged version of Barthes’ argument in which a reader, able to produce not only a text’s meaning but the text itself, takes on an even more powerful cultural role.

Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,
Merritt Island’s delight,
Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself.

In the days after the Sudowrite pile-on, those who had been helping to test its novel generator—hobbyists, fan fiction writers, and a handful of published genre authors—huddled on the Sudowrite Slack, feeling attacked. The outrage by published authors struck them as classist and exclusionary, maybe even ableist. Elizabeth Ann West, an author on Sudowrite’s payroll at the time who also makes a living writing Pride and Prejudice spinoffs, wrote, “Well I am PROUD to be a criminal against the arts if it means now everyone, of all abilities, can write the book they’ve always dreamed of writing.”

It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation.

As much as technologists might be driven by an intellectual and creative curiosity similar to that of writers—and I don’t doubt this of Sims and others—the difference between them and us is that their work is expensive. The existence of language-generating AI depends on huge amounts of computational power and special hardware that only the world’s wealthiest people and institutions can afford. Whatever the creative goals of technologists, their research depends on that funding.

The language of empowerment, in that context, starts to sound familiar.
It’s not unlike Facebook’s mission to “give people the power to build community and bring the world closer together,” or Google’s vision of making the world’s information “universally accessible and useful.” If AI constitutes a dramatic technical leap—and I believe it does—then, judging from history, it will also constitute a dramatic leap in corporate capture of human existence. Big Tech has already transmuted some of the most ancient pillars of human relationships—friendship, community, influence—for its own profit. Now it’s coming after language itself.

The fact that AI writing technologies seem more useful for people who buy books than for those who make them isn’t a coincidence: The investors behind these technologies are trying to recoup, and ideally redouble, their investment. Selling writing software to writers, in that context, makes about as much sense as selling cars to horses.

For now, investors are covering a lot of the cost of AI development in exchange for attracting users with the free use of tools like chatbots. But that won’t last. People will eventually have to pay up, whether in cash or by relinquishing their personal information. At least some of the disposable income that readers currently spend supporting the livelihoods of human writers will then be funneled to Big Tech. To our annual Amazon and Netflix subscriptions, maybe we’ll add a literature-on-demand subscription.

I’m sure I’ll face pressure to sign up for a literature-on-demand subscription myself. The argument will be that my life as a writer is better because of it, since I will be able to produce language, say, a hundred times faster than before. Another argument, surely, will be that I have no choice: How else will I be able to compete?

Maybe I’ll even be competing with AI-produced writing that sounds like mine. This is a serious concern of the Authors Guild and PEN America, both of which have called for consent from writers, and compensation, before their work can be used to train AI models. Altman, now OpenAI’s CEO, also stated before Congress that he feels artists “deserve control over how their creations are used.” Even if authors’ demands are met, though, I wonder whether it’d be worth it.

In one of my last phone calls with Sims, he told me he’d been reading and enjoying my novel, which had finally been published the previous year. Did I want him, he asked, to send me an AI-generated screenplay of it? I might have yelped a little. I might have used the word “terrifying.” Then I softened my stance, not wanting to be rude, or (worse) hypocritical. I explained that my novel had already been optioned and was in the process of being adapted—though the screenwriter was currently on strike over Hollywood studios’ refusal to, among other things, restrict the use of AI for screenwriting. I thanked Sims for his interest and declined.

What about the cost to literature when all that humans have put on the internet gets vacuumed up and repurposed in Big Tech’s image? To start, an AI-dominated literature would reflect the values, biases, and writing styles embedded in the most powerful AI models. Over time, it would all start to sound alike. Some research even suggests that if later AI models are trained using AI-produced text—which would be hard to avoid—the sameness of the material could trigger a scenario called model collapse, in which AI loses its grasp on how real human language functions and is no longer able to form coherent sentences. One wonders whether, at that point, humans will still have the ability themselves.
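The dynamics of model collapse are easier to grasp with a caricature. In the sketch below, each “model” is nothing more than a bell curve refit to samples drawn from its predecessor, and, like a language model favoring its most probable text, each generation keeps only its most typical output. Everything about the setup is an assumption for illustration; the shrinking spread is the toy analog of a model shedding rare words and unusual sentences.

```python
# A deliberately crude illustration of model collapse: each generation is
# "trained" only on output sampled from the previous generation, and, like
# a language model favoring its most probable text, it keeps only the most
# typical samples. Watch the spread (stdev) of the learned model shrivel.
import random
import statistics

random.seed(0)
mean, stdev = 0.0, 1.0  # generation 0 stands in for human-written data

for generation in range(1, 9):
    samples = sorted(random.gauss(mean, stdev) for _ in range(1000))
    typical = samples[100:-100]  # discard the rarest 20% of outputs
    mean, stdev = statistics.fmean(typical), statistics.stdev(typical)
    print(f"generation {generation}: stdev = {stdev:.3f}")
```

In this toy, the measured spread falls by roughly a third with every generation; within eight rounds, almost none of the original variety survives.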
A thought experiment occurred to me at some point, a way to disentangle AI’s creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

That is, what if you could build an AI model that elegantly sidestepped all the ethical problems that seem inherent to AI: the lack of consent in training, the reinforcement of bias, the poorly paid gig workforce supporting it, the cheapening of artists’ labor? I imagined how rich and beautiful a model like this could be. I fantasized about the emergence of new forms of communal creative expression through human interaction with this model.

Then I thought about the resources you’d need to build it: prohibitively high, for the foreseeable future and maybe forevermore, for my hypothetical cadre of anti-capitalists. I thought about how reserving the model for writers would require policing who’s a writer and who’s not. And I thought about how, if we were to commit to our stance, we would have to prohibit the use of the model to generate individual profit for ourselves, and that this would not be practicable for any of us. My model, then, would be impossible.

In July, I was finally able to reach Yu, Sudowrite’s cofounder. Yu told me that he’s a writer himself; he got started after reading the literary science fiction writer Ted Chiang. In the future, he expects AI to be an uncontroversial element of a writer’s process. “I think maybe the next Ted Chiang—the young Ted Chiang who’s 5 years old right now—will think nothing of using AI as a tool,” he said.

Recently, I plugged this question into ChatGPT: “What will happen to human society if we develop a dependence on AI in communication, including the creation of literature?” It spit out a numbered list of losses: traditional literature’s “human touch,” jobs, literary diversity. But in its conclusion, it subtly reframed the terms of discussion, noting that AI isn’t all bad: “Striking a balance between the benefits of AI-driven tools and preserving the essence of human creativity and expression would be crucial to maintain a vibrant and meaningful literary culture.” I asked how we might arrive at that balance, and another dispassionate list—ending with another both-sides-ist kumbaya—appeared.

At this point, I wrote, maybe trolling the bot a little: “What about doing away with the use of AI for communication altogether?” I added: “Please answer without giving me a list.” I ran the question over and over—three, four, five, six times—and every time, the response came in the form of a numbered catalog of pros and cons.

It infuriated me. The AI model that had helped me write “Ghosts” all those months ago—that had conjured my sister’s hand and let me hold it in mine—was dead. Its own younger sister had the witless efficiency of a stapler. But then, what did I expect?
I was conversing with a software program created by some of the richest, most powerful people on earth. What this software uses language for could not be further from what writers use it for. I have no doubt that AI will become more powerful in the coming decades—and, along with it, the people and institutions funding its development. In the meantime, writers will still be here, searching for the words to describe what it felt like to be human through it all. Will we read them?


