Saturday, January 19, 2008

Can a Robot Commit War Crimes?

The question has already been broached, and it has led to a session called 'When Robots Commit War Crimes: Autonomous Weapons and Human Responsibility' at Stanford University's Technology in Wartime conference. The io9 blog, whose editor is also an organizer of the conference, describes the question like this:

Now that the military is using autonomous surveillance/combat robots created by iRobot, the company behind the Roomba robot vacuum, a strange question emerges: What do we do if a robot commits a war crime? This isn't idle speculation. An automated anti-aircraft cannon's friendly fire killed nine soldiers in South Africa last year, and computer scientists speculate that as more weapons (and aircraft) become robot-controlled, we'll need to develop new definitions of war crimes. In fact, the possibility of robot war crimes is the subject of a panel at an upcoming conference at Stanford.

One of the panelists has written about a study he's conducting to better understand the ethical questions raised by the use of robots in warfare:

If the military keeps moving forward at its current rapid pace towards the deployment of intelligent autonomous robots, we must ensure that these systems be deployed ethically, in a manner consistent with standing protocols and other ethical constraints that draw from cultural relativism (our own society’s or the world’s ethical perspectives), deontology (rights-based approaches), or within other related ethical frameworks.
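
In software terms, constraints like these suggest a veto layer sitting between the targeting system and the trigger, one that checks standing rules before any engagement is allowed. Here's a minimal Python sketch of that idea (entirely my own illustration, not anything from the study; the `Target` fields, the thresholds, and the `ethical_governor` function are all invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool
    near_protected_site: bool  # e.g., a hospital or school
    collateral_risk: float     # estimated probability of civilian harm

# Hypothetical deontological (rights-based) constraints: each one is an
# absolute prohibition, checked before any mission-utility calculation.
RULES = [
    ("noncombatant", lambda t: t.is_combatant),
    ("protected site", lambda t: not t.near_protected_site),
    ("collateral risk", lambda t: t.collateral_risk < 0.05),
]

def ethical_governor(target):
    """Veto the engagement if any standing constraint is violated."""
    for name, satisfied in RULES:
        if not satisfied(target):
            return False, f"vetoed: {name} constraint"
    return True, "permitted by standing constraints"

print(ethical_governor(Target(True, False, 0.01)))  # permitted
print(ethical_governor(Target(False, False, 0.0)))  # vetoed: noncombatant
```

The hard part, and presumably the point of the panel, is that the laws of war don't reduce to a handful of hand-coded rules, and when the veto layer gets it wrong, it's not obvious who stands trial.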

We need to avoid building something like this guy:

[embedded video]

Perhaps if all robots were programmed with Asimov's Three Laws...
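
For what it's worth, the Three Laws amount to a strict priority ordering, which is easy enough to caricature in code. A toy Python sketch (again, purely my own illustration; the `permitted` function and the action flags are invented for the example):

```python
# A toy encoding of Asimov's Three Laws as a strict priority ordering.
# Purely illustrative; not a real safety architecture.

def permitted(action):
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm. Absolute veto.
    if action.get("harms_human", False):
        return False
    # Second Law: a robot must obey human orders, except where they
    # conflict with the First Law (already vetoed above).
    if action.get("ordered_by_human", False):
        return True
    # Third Law: a robot must protect its own existence, as long as
    # that doesn't conflict with the First or Second Law.
    if action.get("endangers_self", False):
        return False
    return True

# An order trumps self-preservation (Law 2 outranks Law 3)...
print(permitted({"ordered_by_human": True, "endangers_self": True}))  # True
# ...but never a human life (Law 1 outranks Law 2).
print(permitted({"ordered_by_human": True, "harms_human": True}))     # False
```

The joke, of course, is that Asimov spent an entire career writing stories about how badly this scheme breaks down in practice.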
