The transition to Artificial Intelligence (AI)-enabled autonomous systems in the defense industry is considered an inevitable technological reality (Scharre, 2018). However, the development and fielding of Lethal Autonomous Weapon Systems (LAWS) that remove human control entirely pose serious challenges and risks to international law and to the ethical values of humanity (ICRC, 2021; Scharre, 2018). There are urgent calls within the international community for a global moratorium or the adoption of binding regulations on these systems (ICRC, 2023; Heyns, 2013).
I. The Autonomy Spectrum and Conceptual Distinction
In light of technological progress, it is argued that the focus of the debate should shift from "whether AI will be used" to the question of "how human judgment will be preserved" (Scharre, 2018). Fully autonomous systems (Human-out-of-the-loop) are stated to have the potential to trigger humanitarian crises at odds with the deliberative, human-centered character of International Humanitarian Law (IHL) (ICRC, 2021).
The critical distinction in legal and technical analyses is between automation (where a system executes a predefined function) and autonomy (where a system independently makes the decision to select targets and use force) (Scharre, 2018). While Lethal Autonomous Weapon Systems (LAWS) is the official term used within the United Nations (UN) framework (UNODA, 2023), the term "Killer Robots" is also commonly used by the media and civil society organizations (Human Rights Watch, 2020). The autonomy spectrum carries its highest risks at the level of Full Autonomy (Human-out-of-the-loop) (Scharre, 2016): once activated, these systems select targets and apply force without human authorization, supervision, or intervention (Scharre, 2018). This marks a paradigm shift that transfers critical decision-making tasks, such as target selection and the application of force, from humans to computers (Scharre, 2018).
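To make the spectrum concrete, the following minimal sketch (in Python, with hypothetical names; not drawn from any fielded system or official taxonomy) models where human authorization sits in each mode:

```python
from enum import Enum, auto

class ControlMode(Enum):
    """Levels of human control over engagement decisions (illustrative taxonomy)."""
    HUMAN_IN_THE_LOOP = auto()      # a human must authorize each engagement
    HUMAN_ON_THE_LOOP = auto()      # the system acts; a human supervises and may veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system selects and engages on its own

def may_engage(mode: ControlMode, human_authorized: bool, human_vetoed: bool) -> bool:
    """Return whether an engagement may proceed under the given control mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_authorized      # automation: the human makes the decision
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # supervised autonomy: the human can only interrupt
    return True                      # full autonomy: no human gate exists at all
```

The point of the sketch is the last branch: in the out-of-the-loop mode there is no conditional a human can influence, which is precisely the paradigm shift described above.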
II. Compliance Issues with International Humanitarian Law (IHL)
The core purpose of IHL is to strike a balance between military necessity and humanitarian exigencies (Scharre, 2018). Experts state that the use of fully autonomous systems challenges the law’s ability to maintain this balance.
Violation of the Principle of Distinction and Algorithmic Uncertainty
The Principle of Distinction is characterized as the "cornerstone" of IHL (Scharre, 2016) and requires all those involved in armed conflict to distinguish between combatants and military objectives on the one hand, and civilians and civilian objects on the other (Scharre, 2016). This obligation rests not on the weapon system, but on the human commander or operator who plans, decides upon, and carries out an attack (ICRC, 2019; Scharre, 2016).
However, AI systems relying on algorithmic pattern matching are noted to face significant difficulties in accurately identifying targets, especially in asymmetric warfare environments where the distinction between civilians and combatants is often blurred (Human Rights Watch, 2021). Cases such as the 2021 Kabul drone strike demonstrate the struggle of automated systems to distinguish between belligerents and unintended targets (Human Rights Watch, 2021).
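A simple base-rate calculation illustrates why pattern matching struggles precisely where the civilian–combatant distinction is blurred. The numbers below are invented for illustration, not drawn from any fielded system:

```python
# Illustrative base-rate calculation (invented numbers): even a highly
# accurate classifier misidentifies mostly civilians when combatants are
# a small fraction of the people it observes.

sensitivity = 0.95   # P(flagged | combatant)    -- assumed
specificity = 0.95   # P(not flagged | civilian) -- assumed
base_rate = 0.01     # P(combatant) in a mixed civilian environment -- assumed

p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_combatant_given_flag = sensitivity * base_rate / p_flagged

print(f"P(actually a combatant | flagged) = {p_combatant_given_flag:.2f}")
# ~0.16: roughly five of every six people flagged would be civilians.
```

Even with 95% sensitivity and specificity, most flagged individuals are civilians when combatants make up only 1% of the observed population; raising the decision threshold trades these false positives for missed targets, but no threshold removes the underlying uncertainty.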
Proportionality and the Need for Human Judgment
The Principle of Proportionality requires a complex moral judgment, comparing the expected military advantage with the anticipated collateral harm to civilians (Gill, 2019). It is asserted that such assessments necessitate the moral discretion that AI is not yet deemed to possess (Human Rights Committee, 2018).
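The difficulty can be stated formally. A machine-readable rendering of the rule in Additional Protocol I, Article 51(5)(b) would have to look something like the following (an illustrative formalization only; IHL itself defines no such function):

```latex
% Naive algorithmic rendering of the proportionality rule:
%   H = expected incidental civilian harm
%   A = anticipated concrete and direct military advantage
\[
  \text{attack prohibited} \iff H > k \cdot A
\]
% IHL supplies neither a common unit in which H and A could be measured
% nor any value of the threshold k: "excessive" is a contextual moral
% judgment, which is exactly the human discretion said to be at stake.
```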
The principle of military necessity permits measures that are actually necessary to achieve a legitimate military purpose and are not otherwise prohibited by IHL (Scharre, 2018). However, if LAWS inherently violate fundamental IHL principles such as distinction and accountability, then their use should be prohibited, regardless of their technological advantages (Scharre, 2018).
III. The Crisis of Humanity: Accountability and Ethical Gaps
The use of LAWS has the potential to create a deep legal “accountability gap” that increases the risk of impunity and undermines the foundation of international criminal law (ICL) (Human Rights Watch, 2015).
The Accountability Gap
International Criminal Law (ICL) focuses on holding individuals criminally responsible (Rome Statute, 1998) and historically has an anthropocentric orientation, operating on the assumption that crimes are committed by human agents capable of intent, knowledge, and control (Lieber, 1863).
When LAWS, with their capacity to select and attack targets, insert a non-human agent between human intent and the use of force, the traditional chain of responsibility is broken (Lieber, 1863). Because AI systems make decisions through opaque “black box” processes (CISA/DARPA, 2023), legal hurdles arise in holding operators and commanders criminally liable for the unpredictable actions of the machine (Human Rights Watch, 2015). Furthermore, attributing criminal intent (mens rea) for a war crime to programmers is also deemed difficult, as their activities generally take place in peacetime (Krishnan, 2020).
Impact on Human Dignity
The delegation of life-and-death decisions to machines is stated to constitute a fundamental assault on the core ethical values of humanity, diminishing both the moral agency of the users and the human dignity of those against whom force is used (ICRC, 2019). Human dignity features in ethical, legal, and political discourse as a foundational commitment to human value (Etzioni, 2017). The protection of an individual's basic rights is inextricably linked to the protection of their dignity (European Parliament, 2020).
The International Committee of the Red Cross (ICRC) notes that there are ethical concerns that being targeted and killed by an emotionless algorithm fundamentally denies the recognition of the victim’s individual worth and humanity (ICRC, 2021). The UN Human Rights Committee stated that “the development of autonomous weapons systems lacking in human compassion and judgment raises difficult legal and ethical questions concerning the right to life” (Human Rights Committee, 2018).
IV. Proponent Arguments and Counter-Arguments
Proponents of LAWS argue that machines could offer a more rational, precise, and consequently more ethical form of warfare by eliminating human flaws such as fatigue, anger, or cognitive limitations (Sparrow, 2007).
These advocates contend that the acceptable error rate for a machine should be compared against objective metrics of the known error rates for humans under similar conditions (Sparrow, 2007). However, counter-arguments highlight that algorithmic errors (e.g., systemic errors in pattern matching) have the potential to cause mass, unpredictable harm, and that even the claimed precision of LAWS amplifies civilian risks due to the intrinsic accountability deficit (Boulanin & Verbruggen, 2017).
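The structural difference between the two kinds of error can be made concrete with a toy simulation (invented parameters; the point is the shape of the risk, not the specific numbers). Human operators err more or less independently of one another, whereas a fleet running one shared model can fail all at once when a common flaw is triggered:

```python
# Toy simulation (invented parameters): identical average error rates can
# carry very different tail risk. Independent human errors spread out; a
# shared systematic flaw makes an entire fleet fail together.
import random

random.seed(0)
N_SYSTEMS, TRIALS, ERR = 100, 10_000, 0.01

worst_human = worst_machine = 0
for _ in range(TRIALS):
    # Humans: each of 100 operators errs independently with p = 0.01.
    human_errors = sum(random.random() < ERR for _ in range(N_SYSTEMS))
    # Machines: one shared model; with p = 0.01 a common flaw trips all 100 at once.
    machine_errors = N_SYSTEMS if random.random() < ERR else 0
    worst_human = max(worst_human, human_errors)
    worst_machine = max(worst_machine, machine_errors)

print(worst_human, worst_machine)  # typically ~7 vs 100: same mean, far heavier tail
```

Both populations make one error per trial on average, but the machine fleet's failures arrive in a single correlated burst, which is the "mass, unpredictable harm" the counter-argument points to.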
Furthermore, the deployment of LAWS introduces the risk of war becoming “riskless,” as the human cost, and thus the domestic political backlash for the state initiating the conflict, is reduced (Horowitz, 2021). Analysts state that lowering the political cost reduces the threshold for initiating conflict and increases the likelihood of wars of aggression (Horowitz, 2021).
V. Geopolitical Instability and the Risk of an Arms Race
The uncontrolled development and proliferation of fully autonomous weapon systems pose a more imminent risk to international stability than speculative AI risks such as superintelligence (Horowitz, 2021).
The deployment of autonomous weapons raises the specter of an AI-powered arms race (Horowitz & Scharre, 2015). The militarization of AI introduces risks to international stability stemming not so much from the technical flaws of the systems, but from the fact that AI increases the speed and ease of initiating conflict (Horowitz, 2018).
The governance of LAWS remains unsettled due to differing national views. Despite multilateral discussions within the UN Convention on Certain Conventional Weapons (CCW) ongoing since 2013 (UNODA, 2023), major military powers such as Russia continue to oppose legally binding instruments governing development and use (United States Department of Defense, 2016). This political resistance creates a governance gap that permits the acceleration of LAWS proliferation (Human Rights Watch, 2015).
VI. Calls and Proposals for a Regulatory Framework
Given the slow pace of the diplomatic process, the urgency of action is growing. The UN Secretary-General has reiterated his call to conclude, by 2026, a legally binding instrument to prohibit and regulate autonomous weapon systems (United Nations, 2023).
The International Committee of the Red Cross (ICRC) stresses that states must adopt new, legally binding rules that prohibit unpredictable autonomous weapons and those designed or used to apply force against persons (ICRC, 2023). These calls often propose a dual approach (see the sketch after the list):
- 1. Total Prohibition: The development and use of fully autonomous weapons (Human-out-of-the-Loop) that lack meaningful human control over target selection and engagement must be fully prohibited (Human Rights Watch, 2015).
- 2. Strict Restriction: All other autonomous systems (Human-in-the-Loop and Human-on-the-Loop) must be subject to strict restrictions and transparency requirements (ICRC, 2023).
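As a way of seeing how the two prongs combine into a single decision rule, the following sketch encodes the proposals above (hypothetical attribute names; a schematic reading of the ICRC and HRW positions, not an official classification scheme):

```python
from dataclasses import dataclass

@dataclass
class WeaponSystem:
    # Hypothetical attributes distilled from the proposals above.
    human_control: str     # "in_the_loop", "on_the_loop", or "out_of_the_loop"
    targets_persons: bool  # designed or used to apply force against persons
    predictable: bool      # effects can be reasonably anticipated by the user

def proposed_status(w: WeaponSystem) -> str:
    """Classify a system under the dual approach (illustrative only)."""
    if w.human_control == "out_of_the_loop" or w.targets_persons or not w.predictable:
        return "prohibited"
    return "restricted"  # permitted only under strict limits and transparency rules

# Example: a supervised, anti-materiel, predictable system falls under restriction.
print(proposed_status(WeaponSystem("on_the_loop", False, True)))  # "restricted"
```

Under this reading, prohibition is triggered by any one of the three conditions, while everything else falls into the restricted-but-permitted category.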
Experts state that the stance taken against this technology is fundamentally a matter of protecting the core values of IHL. Delegating the decision to kill to an emotionless algorithm represents a retreat from IHL’s decades-long efforts toward “humanizing” warfare.
It is stressed that the uncontrolled development of these systems increases the risks of an accountability gap, violations of the principle of distinction, and the loss of human judgment, raising serious doubts about the compatibility of full autonomy with international law. The priority in the diplomatic process is emphasized as concluding negotiations on a binding instrument that prohibits fully autonomous systems and imposes strict restrictions on all other autonomous systems (ICRC, 2021).
References
- Asaro, P. M. (2012). On the Moral Status of Military Robots. International Review of the Red Cross.
- Boulanin, V., & Verbruggen, M. (2017). Exploring the Implications of Autonomy and Artificial Intelligence in Weapon Systems. Stockholm International Peace Research Institute (SIPRI).
- CISA/DARPA. (2023). Software Understanding Gap (SUG) Report. Cybersecurity and Infrastructure Security Agency.
- Cummings, M. L. (2017). Artificial Intelligence and the Future of Warfare. U.S. Naval Institute.
- Dennett, D. C. (2017). From Bacteria to Bach and Back: The Evolution of Minds. W. W. Norton & Company.
- Etzioni, A. (2017). Privacy in a Cyber Age: Policy and Practice. Palgrave Macmillan.
- European Parliament. (2020). Ethical aspects of Artificial Intelligence, robotics and related technologies.
- Gilli, A., & Gilli, M. (2018). The Diffusion of AI in the Military: A Case of Bipolar Convergence?. International Security.
- Gill, T. D. (2019). Autonomous Weapon Systems and the Principle of Proportionality. Journal of Conflict & Security Law.
- Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions. UN Human Rights Council.
- Horowitz, M. C. (2018). The Promise and Peril of Military Applications of Artificial Intelligence. Belfer Center for Science and International Affairs.
- Horowitz, M. C. (2021). Artificial Intelligence and the Future of Geopolitics. Council on Foreign Relations.
- Horowitz, M. C., & Scharre, P. (2015). An Arms Race in Autonomous Weapons?. Center for a New American Security (CNAS).
- Human Rights Committee. (2018). General comment No. 36 on article 6 of the International Covenant on Civil and Political Rights, on the right to life. UN Doc. CCPR/C/GC/36.
- Human Rights Watch. (2015). Mind the Gap: Lack of Accountability for Killer Robots. HRW.
- Human Rights Watch. (2020). Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons. HRW.
- Human Rights Watch. (2021). A New Treaty for Killer Robots: Why it’s Time to Ban Fully Autonomous Weapons. HRW.
- International Committee of the Red Cross (ICRC). (2019). Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control. ICRC.
- International Committee of the Red Cross (ICRC). (2021). New technologies of warfare: Autonomous weapon systems. ICRC.
- International Committee of the Red Cross (ICRC). (2023). Autonomous Weapon Systems: New and Updated ICRC Position. ICRC.
- Krishnan, A. (2020). Killer Apps: The Dark Side of AI. The MIT Press.
- Lieber, F. (1863). General Orders No. 100: The Lieber Code. U.S. War Department.
- Roff, H. M. (2019). The Geopolitics of Artificial Intelligence. The National Interest.
- Roff, H. M., & Dote, A. (2021). The Case for a Ban on Killer Robots. The Bulletin of the Atomic Scientists.
- Rome Statute of the International Criminal Court. (1998). United Nations Diplomatic Conference of Plenipotentiaries on the Establishment of an International Criminal Court.
- Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson Education.
- Scharre, P. (2016). Autonomous Weapons and the Laws of War. Center for a New American Security (CNAS).
- Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.
- Scharre, P. (2018). The New Killer Apps: The Race to Develop AI Weapons. Foreign Affairs.
- Sharkey, N. (2007). Glamorising the Kill: The Moral and Ethical Implications of Autonomous Robo-soldiers. Journal of Military Ethics.
- Sharkey, N. (2012). Killing Made Easy: Ethics and the Design of Autonomous Weapons. IEEE Technology and Society Magazine.
- Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy.
- United Nations. (2023). Secretary-General’s remarks at the Security Council open debate on artificial intelligence. UN.
- United Nations Office for Disarmament Affairs (UNODA). (2023). Lethal Autonomous Weapons Systems (LAWS). UNODA.
- United States Department of Defense. (2016). Department of Defense Directive 3000.09: Autonomy in Weapon Systems.