Our Final Invention

Artificial Intelligence and the End of the Human Era by James Barrat

Computers already make all sorts of decisions for you. With little or no human guidance, they deduce what books you would like to buy, trade your stocks and distribute electrical power. They do all this quickly and efficiently using a simple form of artificial intelligence. Now, imagine if computers controlled even more aspects of life and could truly think for themselves.

Barrat, a documentary filmmaker and author, chronicles his discussions with scientists and engineers who are developing ever more complex artificial intelligence, or AI. The goal of many in the field is to make a mechanical brain as intelligent — creative, flexible and capable of learning — as the human mind. But an increasing number of AI visionaries have misgivings.

Science fiction has long explored the implications of humanlike machines (think of Asimov’s I, Robot), but Barrat’s thoughtful treatment adds a dose of reality. Through his conversations with experts, he argues that the perils of AI can easily, even inevitably, outweigh its promise.

By mid-century — maybe within a decade, some researchers say — a computer may achieve human-scale artificial intelligence, an admittedly fuzzy milestone. (The Turing test provides one definition: a computer would pass the test by fooling humans into thinking it’s human.) AI could then quickly evolve to the point where it is thousands of times smarter than a human. But long before that, an AI robot or computer would become self-aware and would not be interested in remaining under human control, Barrat argues.

One AI researcher notes that self-aware, self-improving systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy. Some people hesitate to even acknowledge the possible perils of this situation, believing that computers programmed to be superintelligent can also be programmed to be “friendly.” But others, including Barrat, fear that humans and AI are headed toward a mortal struggle. Intelligence isn’t unpredictable merely some of the time or in special cases, he writes. “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”

Humans, he says, need to figure out now, at the early stages of AI’s creation, how to coexist with hyperintelligent machines. Otherwise, Barrat worries, we could end up with a planet — eventually a galaxy — populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators.
