Not the End of the World

Listening to the news right now, you'd think the end of the world had come at last. Not because of climate change, or incurable pandemics, or an enraged President Putin pressing the button. This is something completely different. Humanity will wink out of existence because of a word processor on steroids. ChatGPT, an artificial intelligence program trained to generate conversational responses, emails, essays and the like, is our harbinger of doom.

Even the straitlaced end of the media spectrum is full of stories and opinion pieces claiming that ChatGPT puts us mere heartbeats away from the point where AI will develop its own plans and goals without any input from humans. From there, according to the normally sober Jonathan Freedland in the Guardian, ’tis but a short step to a scenario whereby “an AI bent on a goal to which the existence of humans had become an obstacle, or even an inconvenience, could set out to kill all by itself.” Humans will be wiped from the face of the Earth. AI will rule all.

To which I say: relax.  Read some science fiction.

First, let’s remember that software programs do not have hands. Trite as that may sound, a genius AI bent on the destruction of humanity can’t do much on its own. It could spread a ton of disinformation (something we’ve become very good at doing on our own, thank you very much), but it would still need a human being to turn the launch keys of a nuclear missile. It’s one of the reasons I’ve always found the Terminator movies fun but not frightening. Skynet could always have been stymied. Even assuming a real-world Skynet had access to the various killing machines in the movies, communications could be cut off, instructions corrupted. An individual terminator, like any drone, could be isolated, reprogrammed, or otherwise rendered harmless. Better yet, just pull the plug the moment Skynet acts up. The movies magically made that impossible, but in the real world? C’mon. For mechanisms powered by electricity, it’s not that hard.

Second, science fiction and science fiction writers have been living with AI for over a hundred years. We know a thing or two. Rossum’s Universal Robots, the Czech play that gave the world the word robot, was published in 1920. Isaac Asimov’s various robot stories date from the 1940s (for non-SF readers, Asimov’s I, Robot is as good a place to start your literary journey as any), and Iain M. Banks’s Culture series, in which machine intelligences run the galaxy and humans mostly kick back and have fun, ran from 1987 (Consider Phlebas) to 2012 (The Hydrogen Sonata). What a century of science fiction teaches us is that humans can live with artificial intelligence pretty easily. AI can be a tool, or a partner, or an independent entity that frees us up for other things. It is not an unavoidable omen of our own destruction, at least no more than nuclear weapons, or global warming, or a shocking complacency about natural-born pathogens. Humans adapt. It’s what we do.

That said, we do have to get from the panicky present to the cool SF future. Real artificial intelligence will arrive at some point and the non-SF community needs to get comfortable with that.

Asimov’s solution to the threat of AI was his famous Three Laws of Robotics: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. In Asimov’s universe, these parameters are built into the robot’s positronic brain. In the real world, too, laws and regulations will be a good part of the answer. When cars, slow and clumsy as they were, first rolled onto our roads, laws were passed forbidding them from travelling faster than walking pace. In fact, they had to be preceded by an actual human being walking along the highway with a red flag. Absurd now, but necessary then to calm the public panic brought about by a world-changing technology.

AI must also have its own “red flag” laws. Maybe computing power is limited, or kill switches are built in, or access to the outside world is somehow curtailed. Probably all of this and more. Humans are actually pretty good at the laws and regulations stuff. We just need to calm down and get on with it. AI has already helped discover brand new antibiotics and new treatments for brain tumors. Let’s put in some guardrails and allow AI to change the world for the better. Cool stuff is coming!

Now that we’ve taken a beat, consider this: it may not be coincidental that all these end-of-the-world news stories focus on software that generates words. Perhaps the fear behind the headlines is that next year’s news could be written by AI instead of a hard-bitten journalist. I say nothing about whether that is a good or bad thing, but a sensible early law might be one that forbids AI-generated content from passing itself off as human.

After all, do you really know who wrote this?