Soapbox Post

January 28, 2010
Filed under War, Military

These days, they say, military personnel in Virginia or Nevada decide whether to launch missiles from Predator drones against specific targets thousands of miles away in Afghanistan and Pakistan, an extraordinary distancing of the fighter from the target.  The next technological step, and an incremental one at that, is war machines capable of deciding for themselves whether to launch a missile at a target.  Noel Sharkey, a professor of artificial intelligence in the UK, observes: “This is dangerous new territory for warfare, yet there are no new ethical codes or guidelines in place. I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination is terrifying.”  It seems that technological innovation is bringing us to an entirely new world of organized violence, one in which killing is totally insulated from any notion of human agency or moral accountability.


Either that, or nothing much has changed.  On page three of Nicholson Baker’s Human Smoke, a retelling of the run-up to World War II, one encounters this observation from clergyman Harry Emerson Fosdick, writing in 1917:  “War is now dropping bombs from aeroplanes and killing women and children in their beds; it is shooting by telephonic orders, at an unseen place miles away and slaughtering invisible men.”  


The idea of technological progress seems matched by a continual rediscovery of the sense that we are only now crossing into some new terrain of lost innocence.


About the Author:  Daniel Sarewitz is the co-director of CSPO.

Comments
Clark Miller
Feb 13, 2010 @ 12:07am
Having heard Arkin speak several times now and had the chance to put questions to him about his proposal, I have to say that he has an unshakeable faith in the ability of technology to function as designed in the middle of the battlefield, despite being dropped, banged into, electrocuted, shot, or otherwise encountering material realities that differ radically from the laboratory. And that's only if you buy his basic argument that the fundamental question is whether the robot can make better ethical decisions than current soldiers do. Since, in practice, we evaluate machine behavior against very different standards than we apply to our fellow human beings, I doubt that's the right criterion. In fact, I suspect we'll demand much higher performance from robots with the power to kill, and that will doom any chance that his robots will be considered the ethically superior option. On the other hand, they'll likely be far more efficient killers, and their destruction will count for far less than that of soldiers, so I have no doubt we'll be sending them by the thousands onto the battlefield.

By the way, if you want to see what happens to real technological systems without humans in the loop, so to speak, when you put them on the battlefield, read Paul Edwards's account of automated bombing in Vietnam in The Closed World. Basically, the systems screwed up close to 100% of the time.
Jamey Wetmore
Jan 28, 2010 @ 5:29pm
I completely agree that the development and deployment of robots in wartime introduces a number of new ethical questions that we don't know the answers to. But there is an interesting argument put forth by Ronald Arkin at Georgia Tech. He posits that humans behave rather badly in wartime and that perhaps robots could do a better job on the ground of following the rules of war, ultimately resulting in better confrontations. Of course, having robots do our fighting for us will certainly decrease our hesitation to declare war, which is a significant problem in and of itself. Regardless, Arkin's argument is intriguing and worth looking at. He wrote a good article summing it up in IEEE's Technology and Society Magazine, available free at: http://www.ieeessit.org/technology_and_society/free_sample_article.asp?ArticleID=15