
Can you teach robots right from wrong?

Tom Feilden | 09:31 UK time, Wednesday, 3 December 2008

"Look, I'm not stupid you know. They can't make things like that yet."

"Not yet. Not for about 40 years."

When I first saw the film The Terminator in 1984, this exchange between the heroine Sarah Connor and the time-travelling soldier Kyle Reese elicited something of a knowing laugh from the audience. Clearly the film-makers were having some fun with the concept of time travel.

Thankfully, the star of the film, a fully autonomous (and relentlessly homicidal) cyborg, is still the stuff of science fiction. But would anyone now argue with the timescale?

It sounds incredible, but the US military currently has more than 5,000 robots deployed in Iraq, many of them armed. While Predator and Reaper drones patrol the skies over Afghanistan equipped with Hellfire missiles, mine clearance and bomb disposal are already routinely performed by semi-autonomous robots.

It won't be long before supplies are delivered to the front line by robotic vehicles. More ominously, the US military recently took delivery of a ground-based robotic system that can fire everything from pepper spray to grenades and a heavy-calibre machine gun.

Although these are sophisticated machines, they are still only semi-autonomous. Even though a Predator or Reaper drone can fly itself for long periods during a mission and identify potential targets, the order to fire its missiles still comes from mission control in a bunker in Nevada.

But a huge amount of money is being invested in reducing even this level of human involvement. By 2010, it's estimated, the US defence research agency DARPA will have invested more than $4bn in "autonomous systems": robots that can decide for themselves who is an enemy combatant, and whether to kill.

Given that aim, attention is now turning to questions of ethics: Is it possible to design a robot that can tell right from wrong? A moral machine that would observe the laws of war?

In a recent report written for the US Army, one of the leading figures in the field, the Georgia Tech computer scientist Ronald Arkin, concludes it may be possible. While not "perfectly ethical on the battlefield", robots could "perform more ethically than human soldiers". That's because robots don't need to worry about protecting themselves, and their judgement isn't clouded by anger, frustration or a desire for vengeance.

It's a prospect the British robotics expert Professor Noel Sharkey finds, frankly, terrifying. Leaving aside the (massive) information processing problems associated with the confusing and rapidly changing conditions of combat, he says, do we really want cold calculating machines taking life or death decisions?

Emotion, he argues, and particularly compassion, is a crucial component in the decision to fire. Otherwise, he says, what we're left with is a glorified parking attendant, fastidiously and ruthlessly implementing "the rules".

As Kyle Reese says of the Terminator: "It's what it does. It's all it does. And it absolutely will not stop."
