These
days, they say, military personnel in Virginia or Nevada decide
whether to launch missiles from Predator drones against specific targets thousands of
miles away in Afghanistan and Pakistan, an extraordinary distancing of the
fighter from the target. The next
technological step, and an incremental one at that, is war machines
capable of deciding for themselves whether to launch a missile at a
target. Noel Sharkey, a professor of
artificial intelligence in the UK, observes: “This is dangerous new territory for
warfare, yet there are no new ethical codes or guidelines in place. I have
worked in artificial intelligence for decades, and the idea of a robot making
decisions about human termination is terrifying.” It seems that technological innovation is
bringing us to an entirely new world of organized violence, one where killing is
totally insulated from any notion of human agency or moral accountability.
Either
that, or nothing much has changed. On
page three of Nicholson Baker’s Human
Smoke, a retelling of the run-up to World War II, one encounters this
observation from clergyman Harry Emerson Fosdick, writing in 1917: “War is now dropping bombs from aeroplanes
and killing women and children in their beds; it is shooting by telephonic
orders, at an unseen place miles away and slaughtering invisible men.”
The idea
of technological progress seems matched by a continual rediscovery of the idea
that we are only now crossing into some new terrain of lost innocence.
About
the Author: Daniel Sarewitz is the co-director of
the Consortium for Science, Policy & Outcomes (CSPO).


By the way, if you want to see what happens when real technological systems without humans in the loop, so to speak, are put on the battlefield, read Paul Edwards's account of automated bombing in Vietnam in The Closed World. The systems failed close to 100 percent of the time.