• This House Supports a Ban on the Development of Autonomous AI-Controlled Weapons

    If you are interested in speaking for the Motion or moderating this debate, please contact event organizer Deborah through this website.

Is it ethical to allow autonomous AI-controlled weapons to independently decide when to take a human life? Experts convening at last month's annual meeting of the American Association for the Advancement of Science had a clear answer: a resounding no. Instead, they and many others are calling for a ban on the development of autonomous AI-controlled weapons – that is, weapons that, once activated, would be able to select and attack targets without ongoing human intervention. In the words of an EU resolution, such a ban would include "a preventive ban on research into defense products and technologies that are specifically designed to carry out lethal strikes without human control over engagement decisions."

Supporters of a ban argue that the use of autonomous AI weapons without meaningful and effective human control would undermine the right to life and create an accountability gap, since, once deployed, such weapons make their own determinations about the use of lethal force. "We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," says Mary Wareham of Human Rights Watch. Adds professor Peter Asaro: "The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death."

But not everybody is on board with fully denouncing the development and possible use of AI-controlled weapon systems. Several countries oppose an outright ban on the development of such weapons. Opponents of the ban argue that these weapons could actually reduce violations of humanitarian law in warfare.
Unlike humans, robots cannot feel fear, hunger, or fatigue; they cannot be driven by hate and revenge, or be too quick on the trigger when overwhelmed by uncertainty and a sense of threat in the heat of battle. AI-controlled weapons could delay the use of force until the last, most appropriate moment, once it has been established that the target and the attack are legitimate. Unlike current missile systems, such weapons could be programmed to cancel an attack at the last moment based on changing circumstances. They could even be programmed to refuse orders that violate international humanitarian law or to deactivate if they fall into the wrong hands. But these possibilities could never be explored and tested under a ban on the development of autonomous AI-controlled weapons.

So what do you think? Should the development of autonomous AI-controlled weapons be preemptively banned, given the threat they pose to human rights, international law, and civilian safety? Or do these weapons actually have the potential to make warfare both less feasible and less lethal – a potential that can only be explored without a ban?

Join us at the next SFDebate to explore and debate these and other questions. Note there is a $5 charge to attend to help defray costs.

Read more:
https://www.amnesty.org/en/latest/news/2019/01/public-opposition-to-killer-robots-grows-while-states-continue-to-drag-their-feet/
https://en.wikipedia.org/wiki/Lethal_autonomous_weapon
https://www.lawfareblog.com/too-early-ban-us-and-uk-positions-lethal-autonomous-weapons-systems
https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
https://www.aaas.org/news/killer-robots-pose-grave-threats-civilian-safety-and-ethical-norms
https://digital-commons.usnwc.edu/ils/vol90/iss1/1/