Can you teach robots right from wrong?
"Look, I'm not stupid you know. They can't make things like that yet."
"Not yet. Not for about 40 years."
When I first saw the film The Terminator in 1984, this exchange between the heroine Sarah Connor and the time-travelling soldier Kyle Reese elicited something of a knowing laugh from the audience. Clearly the film-makers were having some fun with the concept of time travel.
Thankfully the star of the film, a fully autonomous (and relentlessly homicidal) cyborg, is still the stuff of science fiction, but would anyone now argue with the timescale?
It sounds incredible, but the US military currently has more than 5,000 robots deployed in Iraq, many of them armed. Predator and Reaper drones equipped with Hellfire missiles patrol the skies over Afghanistan, while mine clearance and bomb disposal are already routinely performed by semi-autonomous robots.
It won't be long before supplies are delivered to the front line by robotic vehicles, and, more ominously, the US military recently took delivery of a ground-based robotic system that can fire everything from pepper spray to grenades and a heavy-calibre machine gun.
Although these are sophisticated machines, they are still only semi-autonomous. Even though a Predator or Reaper drone can fly itself for long periods during a mission and identify potential targets, the order to fire its missiles still comes from mission control in a bunker in Nevada.
But a huge amount of money is being invested in reducing even this level of human involvement. By 2010, it's estimated, the US defence research agency DARPA will have invested more than $4bn in "autonomous systems": robots that can decide for themselves who is an enemy combatant, and whether to kill.
Given that aim, attention is now turning to questions of ethics: is it possible to design a robot that can tell right from wrong? A moral machine that would observe the laws of war?
In a recent report written for the US Army, one of the leading figures in the field, the Georgia Tech computer scientist Ronald Arkin, concludes it may be possible. While not "perfectly ethical on the battlefield", robots could "perform more ethically than human soldiers". That's because robots don't need to worry about protecting themselves, and their judgement isn't clouded by anger, frustration or a desire for vengeance.
It's a prospect the British robotics expert Professor Noel Sharkey finds, frankly, terrifying. Leaving aside the (massive) information processing problems associated with the confusing and rapidly changing conditions of combat, he says, do we really want cold, calculating machines taking life-or-death decisions?
Emotion, he argues (and particularly compassion), is a crucial component in the decision to fire. Otherwise, he says, what we're left with is a glorified parking attendant fastidiously and ruthlessly implementing "the rules".
As Kyle Reese says of the Terminator: "It's what it does. It's all it does. And it absolutely will not stop."

I'm Tom Feilden and I'm the science correspondent on the Today programme. This is where we can talk about the scientific issues we're covering on the programme.
Comment number 1.
At 15:49 3rd Dec 2008, Pot_Kettle wrote: Whoa, good piece.
Now I am truly scared for the future.
Comment number 2.
At 17:19 3rd Dec 2008, Me wrote: Splendid. The nation with the lowest morals when it comes to starting wars is now on the verge of having its own army of robots programmed to think the same way.
The end of the world for humans has just got very much closer.
Comment number 3.
At 17:21 3rd Dec 2008, dyrewolfe wrote: Reminds me of the Masters of Science Fiction episode "Watchbird", starring Sean "Samwise Gamgee" Astin.
https://www.mastersofscifi.com/site/masters_of_science_fiction/episodes/watchbird.html
Comment number 4.
At 19:54 3rd Dec 2008, Timecircle wrote: Who is going to be allowed to decide the criteria for right-ness and wrong-ness in any given situation? Anyone wise enough to be trusted with the task would say it was impossible. In comparison with that little problem, programming the robot will be easy.
Comment number 5.
At 01:59 4th Dec 2008, Dusty_Matter wrote: Right from wrong is not what this is really about. It's about killing people without getting your own hands bloody. It's about relieving your own moral conscience so that you don't have to watch the person whom you are having killed. You can now have someone killed as easily as blowing your nose, and you don't even have to feel bad about it. How moral.
Comment number 6.
At 21:42 26th Jan 2009, softbass wrote: Can you teach robots right from wrong?
Yes, if you can make them feel pain (e.g. guilt). That's not going to happen though!