Are robot wars cause for real concern?

Can we prevent a Terminator-style robot war from happening in the future?

This was not something that I used to worry about. And it’s not that I worry about it every day – I’d say robot wars are on the back burner of my brain. It’s just that the first time I watched Terminator, with its dreary yet action-packed vision of 2029, I thought the scenario was completely implausible. Now I kind of think it could happen. Because I’ve talked to futurists.

Both Ray Kurzweil and Nick Bostrom have made careers out of speculating about the future (Kurzweil is also a prolific inventor), and I’ve had the good fortune of speaking to them both. They spoke of the unlimited potential of future technology, which could allow us to drastically enhance our quality of life and lengthen our lifespans while finding sustainable solutions to the problems that plague us now. But they also warned of the dark side of rapid technological advancement.

“Technology has always been a double-edged sword. And that goes back to fire and stone tools,” said Kurzweil. “For example, these biological technologies that could cure disease and extend our longevity could also enable a bioterrorist to re-engineer a biological virus and turn it from a benign one into a deadly one.”

Bostrom said that nanotechnology (which he named as one of the two technologies most likely to destroy humanity), though touted as the solution to many scientific problems, could create surveillance systems that keep despotic governments in power, or make weapons of mass destruction more powerful than any the world has seen before. The other major risk to humanity, he said, is superintelligent machines: computers that equal or surpass human intelligence. Once we create a superintelligent machine, it will be able to improve and replicate itself, possibly leading to – that’s right – the Terminator.

Kind of makes you want to dig a strongly reinforced hole, doesn’t it? It won’t work. Trust me, I asked. The important question is, will we be smart enough to stop this before it starts?

Bostrom is a philosopher by training, and he believes that before we race to awesome new heights of technology, we need to think about the ethical and moral implications. “A superintelligent machine could achieve whatever outcome in the world it wants to produce,” he told me, “so it’s important that its goals are human-friendly. But we might fail to provide it with a goal structure that preserves friendliness to humanity.”

Kurzweil echoed this idea. “We have to develop defenses against abuse of the technology, and we have to create ethical guidelines so that responsible scientists won’t create accidental problems,” he said. “And I think if we do those things, it will have a very positive outcome.”

A commenter on one of Bostrom’s essays (incidentally, hosted on Kurzweil’s website) wrote, “Perhaps our power to invent and create will outstrip our power to predict and ameliorate. It wouldn’t be the first time. But we’ll never know unless we try.”

Sure, but we’ll never learn if we’re extinct. So I worry about robot wars in advance.

Posted July 27, 2009 in Human World by Lindsay Patterson
