Experts worldwide are debating the implications of autonomous weapons systems for the international law of war, international security and arms control. Drones are already being used in armed conflicts. Given the rapid development of autonomous weapon systems and artificial intelligence, are we on the threshold of a completely new era of warfare?
If war-relevant digital systems are equipped with AI components, machines could in principle decide whether to kill people or even whether to start a war. Digital technologies not only pose ethical challenges to societies (e.g. surveillance, disinformation, equal access); they also shape the warfare of the future and may jeopardise arms control and disarmament.
The role of humans
For a productive debate at the level of science and international law, it is crucial to first agree on a common vocabulary and on ethical principles. In the case of autonomous weapon systems, the central question is the role of humans as decision-makers in armed conflicts. Fully autonomous weapon systems would function entirely without human supervision or control. Some systems already operate semi-autonomously, for example the active protection systems of tanks, which intercept incoming projectiles on their own. The use of lethal autonomous weapon systems not only raises questions of ethical responsibility, it also endangers stability and security at the global level: the consequences could include greater crisis instability, costly arms races and a lower threshold for the use of force. To ensure that AI algorithms reliably recognise non-combatants, the development of so-called “explainable” AI is crucial, making the AI’s decision-making processes comprehensible to humans. This should also prevent the AI from targeting marginalised groups as a result of, for example, insufficient training data.
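To make the idea of “explainable” AI a little more concrete, the following minimal sketch in Python uses purely synthetic data and hypothetical feature names (it is not connected to any real weapon system or to the systems discussed above). It shows one common post-hoc explainability check, permutation importance, which reveals which input features a trained classifier actually relies on:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic, purely illustrative data: only the first two features carry
# real signal; the third is an irrelevant proxy attribute.
n = 1000
X = rng.normal(size=(n, 3))
y = (1.5 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop means the model depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, mean in zip(["feature_0", "feature_1", "proxy_attribute"],
                      result.importances_mean):
    print(f"{name}: importance {mean:.3f}")

If such a check showed the model leaning on the proxy attribute, that would be a warning sign of exactly the kind of bias, for instance from insufficient training data, described above; real explainability requirements for weapon systems would of course have to go far beyond such toy diagnostics.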
Social debate
The question of what role robots and algorithms should play in armed conflicts is far from settled. On the contrary, topics such as artificial intelligence are only gradually entering public consciousness. Individual technological breakthroughs such as the development of ChatGPT can throw the dramatic pace of technological change into relief, yet they also tend to obscure part of the underlying expert debate. The societal consequences of digitalisation and automation therefore need to be addressed and discussed far more widely. For experts to be able to take meaningful decisions at all, there needs to be broad societal exchange and consensus-building across disciplinary boundaries.
Text: Hannes Vogel