Robot Wars

At a meeting in Geneva on Dec. 13-17, 2021, the United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons systems - Terminator-type killer robots - and failed to place restrictive controls on the development of such lethal weaponry. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.

Proliferation may occur when the militaries developing autonomous weapons assume that they will be able to contain and control their use. But if the history of weapons technology has taught the world anything, it's this: weapons spread. Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. "Kalashnikov" autonomous weapons could get into the hands of people outside government control, including international and domestic terrorists.

Human operators already misidentify drone targets, as in the recent U.S. drone strike in Afghanistan. When selecting a target, will weaponized artificial intelligence be able to distinguish between hostile soldiers and children playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? Image recognition software used by Google has already misidentified Black people as gorillas. AI systems err, and when they err, their makers often don't know why and therefore don't know how to correct them.
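To see why such errors are so hard to correct, consider a minimal sketch in Python. The labels, weights and feature vector here are entirely made up for illustration and do not reflect any real targeting system; the point is only that a classifier of this kind reports a single confident label with nothing in its learned parameters that explains the choice.

```python
import numpy as np

rng = np.random.default_rng(0)

LABELS = ["armed combatant", "civilian", "child with toy"]

# Hypothetical "trained" weights: in a deployed system these would be
# millions of opaque parameters learned from data, not values anyone
# can inspect or explain by hand.
W = rng.normal(size=(3, 8))
b = rng.normal(size=3)

def classify(features):
    """Return the predicted label and the model's confidence in it."""
    logits = W @ features + b
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    i = int(np.argmax(probs))
    return LABELS[i], float(probs[i])

# Feature vector standing in for an ambiguous scene, e.g. a small figure
# holding a rifle-shaped object. The classifier must still pick one label.
features = rng.normal(size=8)
label, confidence = classify(features)
print(f"prediction: {label!r} at {confidence:.0%} confidence")
# The model emits one confident answer with no indication of whether the
# object is a weapon or a toy, and no human-readable reason for its choice.
```

Scaled up from these few dozen numbers to the millions of parameters in a real vision model, the same opacity is what leaves engineers unable to say why a particular image was mislabeled, or how to guarantee it won't happen again.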

Lastly, how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier supposedly at the touch-pad? The soldier's commanders who issued the instructions? The corporation that manufactured the weapon's software and hardware?