How New Science Fiction Could Help Us Improve AI

For the past decade, a group called the Future of Life Institute has been campaigning for human welfare in public conversations around nuclear weapons, climate change, artificial intelligence and other evolving threats. The nonprofit organization aims to steer technological development away from the dystopian visions that so frequently haunt media. But when it comes to discussions about artificial intelligence, its team has had to face one especially persistent foe: the Terminator.

“When we first started talking about AI risk, every article that came out about our work had a Terminator in it,” says Emilia Javorsky, director of the institute’s Futures program. The Terminator film franchise’s specter of a powerful and antagonistic robot that is driven only by ruthless logic is hard to dispel. Ask people to imagine a powerful artificial intelligence, and they tend to think of the fictional archetype of a machine with a “Machiavellian soul,” Javorsky adds—even though actual AI systems inherently “have no malevolence, no human intent to them whatsoever.”

Recognizing the influence that popular narratives have on our collective perceptions, a growing number of AI and computer science experts now want to harness fiction to help imagine futures in which algorithms don’t destroy the planet. The arts and humanities, they argue, must play a role to ensure AI serves human goals. To that end, Nina Beguš, an AI researcher at the University of California, Berkeley, advocates for a new discipline that she calls the “artificial humanities.” In her upcoming book Artificial Humanities: A Fictional Perspective on Language in AI, she contends that the “responsibility of making these technologies is too big for the technologists to bear it alone.” The artificial humanities, she explains, would fuse science and the arts to leverage fiction and philosophy in the exploration of AI’s benevolent potential.

“The humanities simply have to be part of the conversation, or this new world advances without our input,” says cultural historian Catherine Clarke of the University of London, who has studied the intersection of literature and AI.

Entertainment strongly shapes people’s perceptions of AI, as a recent public opinion study by researchers at the University of Texas at Austin shows. These depictions, however, frequently ignore positive technological potential in favor of portraying our worst fears. “We need fictional works that consider machines for what they are and articulate what their intelligence and creativity could be,” Beguš says. And because fiction is “not obliged to mirror actual technological developments,” it can be a “public space for experimentation and reflection.”

Importantly, it also turns out that our entertainment-fueled negative impressions of AI can, in turn, influence how the technology performs in the real world; the stories we tell ourselves about AI prime us to use it in certain ways. Preconceptions that an AI chatbot will answer like a manipulative machine initiate a hostile feedback loop so that the bot acts as expected, according to a recent study by researchers at the Massachusetts Institute of Technology Media Lab. A user’s internalized fears can be self-fulfilling, seasoning an algorithm with adversarial ingredients. So it may be that if fiction trains us to expect the worst from AI, that’s exactly what we’ll get.

But if we treat AI models with some finesse, they will respond in kind. Clarke, along with Murray Shanahan of Imperial College London and Google DeepMind, recently sought to determine whether a text-generating AI could be coached to deliver human-quality prose. They provided the beginning of a story to a chatbot and used prompts of varying detail and complexity to ask it to complete the narrative. As their preprint reports, stories composed by an AI that was given crude prompts fell flat, but more elegant and creatively refined prompts led to more literary prose. This suggests that what we give to a generative AI is returned to us.
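The contrast the researchers describe can be sketched in code. The snippet below is a minimal illustration, not the study's actual materials: the story opening, the prompt texts and the `build_prompt` helper are all hypothetical, standing in for whatever the researchers fed their chatbot.

```python
# Illustrative sketch of the experiment described above: the same story
# opening paired with prompts of differing craft. All text here is an
# assumption for demonstration, not the researchers' actual prompts.

STORY_OPENING = (
    "The lighthouse keeper found the letter wedged beneath the door, "
    "its ink still wet despite the storm."
)

PROMPTS = {
    "crude": "Finish this story.",
    "refined": (
        "Continue this story in a literary register: sustain the uneasy, "
        "storm-lit mood, reveal the letter's contents obliquely rather "
        "than directly, and end on an image rather than an explanation."
    ),
}


def build_prompt(style: str) -> str:
    """Combine an instruction of the chosen style with the story opening."""
    return f"{PROMPTS[style]}\n\n{STORY_OPENING}"


if __name__ == "__main__":
    for style in PROMPTS:
        print(f"--- {style} prompt ---")
        print(build_prompt(style))
```

The point the study makes is that only the instruction changes between the two conditions; the "refined" version encodes the writer's own craft, and the model's continuation tends to mirror whichever level of care it receives.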

“Why do we always imagine science fiction to be a dystopia? Why can’t we imagine science fiction that gives us hope?” —Pat Pataranutaporn, M.I.T. Media Lab

If these patterns hold true for more intelligent forms of AI, we will need to instill scruples in them before we flip their “on” switches. The University of Oxford’s AI doomsayer Nick Bostrom has called this need “philosophy with a deadline.”

To pull more artists and thinkers into that discussion, the Future of Life Institute has sponsored multiple initiatives linking fiction writers and other creatives with technologists. “You can’t mitigate risks that you can’t imagine,” Javorsky says. “You also can’t build positive futures with technology and steer toward those if you’re not imagining them.” The institute’s Worldbuilding Competition, for example, brings together multidisciplinary teams to conceptualize various friendly-AI futures. Those imagined tomorrows include a world in which a centralized AI manages the equitable distribution of goods. A second scenario suggests a system of digital nations that are free of geographic bounds. In yet another, artificial governance programs advocate for peace. In a fourth, AI helps us achieve a more inclusive society.

Merely imagining such worlds, where growth and innovation no longer depend on conventional human labor, allows fiction writers and other thinkers to ask provocative questions, Javorsky says: “What does meaning look like? What does aspiration look like? How do we rethink human purpose and agency in a world of shared abundance?”

The Future of Life Institute has also joined forces with Hollywood, Health & Society and other groups to create the Blue Sky Scriptwriting Contest, which rewards writers for television scripts that depict fair and equitable applications of artificial intelligence.

“We’ve all seen lots of dystopian and postapocalyptic futures in popular entertainment,” says Hollywood, Health & Society’s program director Kate Langrall Folb. There are “very few depictions of a greener, safer, more just future.” The inaugural contest was held in 2022, with prizes awarded last year. In that competition, the winning entry was set in a town where AI equally serves the needs of all residents, who are shaken when a once-in-a-generation murder complicates their potential techno-utopia. In another, AI-powered advisers equipped with Indigenous wisdom support a more sustainable society. Another tells of an Earth where AI has moved all manufacturing and heavy infrastructure off-planet, regenerating the terrestrial ecosystems below.

To further inspire these lines of thinking, the Future of Life Institute is in the process of producing a free, publicly available “Worldbuilding” course to train participants in hope rather than doom when it comes to AI. And once a person has managed to escape the doom loop, Javorsky says, it can be difficult to know where to direct efforts at developing positive AI. To address this, the institute is developing detailed scenario maps that suggest where different trajectories and decision points could lead this technology over the long run. The intention is to bring these scenarios to creative, artistic people who will then flesh out these stories, pursuing the crossover between technology and creativity—and providing AI developers with ideas about where different courses of action may take us.

This moment desperately needs “the power of storytelling and the humanities,” Javorsky says, to steer people away from the Terminator and toward a future where they’d be excited to live alongside AI—in peace and felicity.

“We need to come up with a new story,” says Pat Pataranutaporn, a researcher at the M.I.T. Media Lab and a co-author of the study on AI user preconceptions. “Why do we always imagine science fiction to be a dystopia? Why can’t we imagine science fiction that gives us hope?”

Source: Scientific American