Science fiction writers have known for decades that this day would arrive. The militarization of robotics has been advancing at a quickening pace, and artificial intelligence can now reason, react, and act decisively to seek and destroy targets based on pre-programmed parameters.
“Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas,” Professor Sharkey said at a meeting in London.
[From BBC NEWS | Technology | Call for debate on killer robots]
According to Noel Sharkey, a University of Sheffield professor of artificial intelligence and robotics, the main problems are that these military “drones” have trouble making “friend vs. foe” distinctions and cannot handle “proportionality”, the judgement of how much force is prudent and necessary to gain the required military advantage. Until recently, these issues were not even on the radar of the nations that field such weapons, and the result has been a certain amount of collateral damage. Think of U.S. military drone strikes in Pakistan.
Some scientists have suggested that Isaac Asimov’s Three Laws of Robotics be adapted from his writings to the current realities on Earth (a rough sketch of how the laws might be applied follows the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
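To make the priority ordering concrete, here is a minimal, purely illustrative Python sketch that treats the Three Laws as strictly ordered filters over candidate actions. The `Action` fields and the `choose` function are hypothetical names invented for this post; nothing here comes from Asimov’s fiction or from any real robotics system, and deciding whether an action actually “injures a human” is exactly the hard friend-vs.-foe problem the sketch glosses over.

```python
from dataclasses import dataclass

# Hypothetical model of a candidate action and its predicted consequences.
# All names are invented for illustration only.
@dataclass
class Action:
    description: str
    injures_human: bool        # would executing it injure a human being?
    allows_human_harm: bool    # would executing it allow a human to come to harm?
    obeys_human_order: bool    # was it ordered by a human operator?
    preserves_robot: bool      # does it keep the robot intact?


def choose(candidates: list[Action]) -> Action | None:
    """Pick an action by applying the Three Laws as strictly ordered filters."""
    # First Law: discard anything that injures a human or, through the
    # robot's inaction, allows a human to come to harm.
    lawful = [a for a in candidates
              if not a.injures_human and not a.allows_human_harm]
    if not lawful:
        return None  # no permissible action at all

    # Second Law: prefer actions that obey a human order, but only among
    # those already allowed by the First Law filter above.
    ordered = [a for a in lawful if a.obeys_human_order] or lawful

    # Third Law: among what remains, prefer self-preservation.
    safe = [a for a in ordered if a.preserves_robot] or ordered
    return safe[0]


# Example: a "seek and destroy" order never survives the First Law filter.
strike = Action("seek and destroy", injures_human=True,
                allows_human_harm=False, obeys_human_order=True,
                preserves_robot=True)
hold = Action("hold position", injures_human=False,
              allows_human_harm=False, obeys_human_order=False,
              preserves_robot=True)
print(choose([strike, hold]).description)  # -> "hold position"
```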
Of course, these laws preclude most military applications. Wartime and counter-insurgency uses would have to be seriously curtailed and reworked to fit within them. Perhaps a focus on military intelligence? Or would even that contravene the First and Second Laws by aiding and abetting humans bent on killing?
I guess the alternative is to change nothing in how we approach the militarization of artificial intelligence… then perhaps Battlestar Galactica and the Cylons will not be that far off.