In recent years, artificial intelligence (AI) has revolutionised numerous sectors, including the military. Its military applications range from surveillance and logistics to autonomous drones performing reconnaissance missions or targeted attacks, automated defence systems that detect and neutralise threats without human intervention, and even fully autonomous weapons on the battlefield. The introduction of fully autonomous systems, also known as lethal autonomous weapon systems (LAWS) or, more simply, killer robots, gives rise to unprecedented moral dilemmas. In his book ‘Army of None’ (Scharre, 2018), Paul Scharre defines autonomy with reference to three elements: the tasks performed, the relationship between the system and the user, and the complexity of the system’s decision-making process. These aspects are crucial to understanding the ethical challenges posed by AI in the military sector (Limata, 2023).
Killer robots are capable of identifying, selecting, and attacking targets without any direct human intervention, a task that is difficult even for human soldiers. They must distinguish military personnel from civilians, medical personnel from combatants, and civilians who support the enemy from those who do not. If this is difficult for a human being, it is even more difficult for an AI system. These systems run on sensors and algorithms, i.e. sequences of mathematical instructions that turn real-world data into binary code that computers can process. Because they are based on mathematical models, autonomous systems cannot understand the intentions behind human behaviour, especially in dynamic and changing contexts such as war. This is easily understood if we consider the plausible interpretations of a simple gesture such as raising one’s hands: it can be perceived as a sign of surrender, an attempt to attract attention, or even an attack. The ability to make such distinctions is so sophisticated that it is difficult to translate into mathematical models; it requires a deep understanding of context, culture, and human intention.
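To make this limitation concrete, consider a minimal sketch of how such a system might map sensor readings to a label. Everything below is invented for illustration (the features, rules, and labels describe no real weapon system); it simply shows that a rule-based model must collapse an ambiguous gesture into a single output, with no access to context or intent.

```python
# Purely illustrative sketch: a toy rule-based "gesture classifier" of the
# kind a sensor-driven system might contain. All feature names, rules, and
# labels are invented for illustration, not drawn from any real system.

def classify_gesture(arms_raised: bool, holding_object: bool,
                     moving_towards_sensor: bool) -> str:
    """Map a handful of sensor-derived features to a single label."""
    if arms_raised and not holding_object:
        return "surrender"         # could equally be a call for attention
    if arms_raised and holding_object:
        return "potential threat"  # or a civilian carrying a tool
    if moving_towards_sensor:
        return "approach"          # hostile or friendly? the rules cannot say
    return "unknown"

# The same physical gesture always yields the same label: the model has no
# notion of context, culture, or intention, which is exactly the limitation
# discussed above.
print(classify_gesture(arms_raised=True, holding_object=False,
                       moving_towards_sensor=True))  # -> "surrender"
```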

Another important issue is the role of human beings in the life-and-death decisions made by these systems. Killer robots are referred to as out-of-the-loop systems because they autonomously decide to engage and neutralise identified targets without the possibility of human intervention. This raises questions about adherence to the basic principles of international humanitarian law. One of these basic principles is the “proportionality of the attack”: the expected military advantage should be balanced against the expected collateral damage to civilians and civilian property. This principle may seem straightforward, but applying it involves three distinct processes: distinguishing between legitimate and non-legitimate targets, assessing the attack in terms of concrete and direct military advantage, and assessing collateral damage and civilian casualties. A military attack is therefore proportional when the target is legitimate and when the collateral damage and civilian casualties are not excessive compared to the expected military advantage. Humanitarian principles use terms such as “excessive”, which do not translate easily into mathematical models. For that reason, compliance with humanitarian principles such as proportionality requires an assessment that is deeply anchored in the context in which the decisions are being made; it requires a sophisticated human assessment and decision-making process.
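As a purely hypothetical sketch, this is what a naive attempt to encode the proportionality test might look like. The function, the numeric scores, and the threshold are all invented placeholders; the point is that the word “excessive” has to become an arbitrary constant somewhere, which is precisely where a mathematical model fails to capture human judgement.

```python
# Deliberately naive sketch of "encoding" proportionality. The scores and
# the EXCESSIVE_RATIO constant are invented placeholders: "excessive" has
# no natural numerical definition, so any fixed threshold is an arbitrary
# stand-in for a contextual human judgement.

EXCESSIVE_RATIO = 0.5  # hypothetical: what ratio of harm to advantage counts as "excessive"?

def attack_is_proportional(target_is_legitimate: bool,
                           expected_military_advantage: float,
                           expected_collateral_damage: float) -> bool:
    """Crude translation of the three assessments described above."""
    # Step 1: distinguish legitimate from non-legitimate targets.
    if not target_is_legitimate:
        return False
    # Step 2: the military advantage must be concrete and direct.
    if expected_military_advantage <= 0:
        return False
    # Step 3: is the expected collateral damage "excessive"? Reducing this
    # question to a fixed ratio is exactly where the model breaks down.
    return (expected_collateral_damage / expected_military_advantage) < EXCESSIVE_RATIO

print(attack_is_proportional(True, 10.0, 3.0))  # True under these invented numbers
```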
Another source of ethical issues is the complexity of these systems’ decision-making processes. Killer robots operate on predetermined algorithms and sensory data that allow them to perform their tasks completely autonomously. In a war context, where situations change rapidly and human life is at stake, the ability to adhere to humanitarian principles and distinguish between legitimate and non-legitimate targets requires sophisticated, contextual understanding. This process of assessment is not just a matter of technological precision but also of human sensitivity to the consequences of military decisions. Moreover, these systems are designed away from the battlefield and, as such, may not be flexible or adaptable enough to cope with the variables and uncertainties typical of real-world military operations. This could lead to erroneous decisions with disastrous consequences for human lives and for military missions.
The autonomous nature of killer robots is made possible by AI, and it raises a range of ethical issues. However, it is important to consider that these systems aim to save soldiers’ lives by allowing them to follow operations remotely instead of deploying to the battlefield. As a result, we are witnessing a transformation of the very concept of military ethics, from the ethics of combat to the ethics of execution (Chamayou, 2014). The soldier is no longer directly involved in combat but is tasked with monitoring and managing machines that perform lethal actions. The use of these systems dehumanises the act of killing: it is not the soldier who kills but a machine that has been programmed to do so. This dehumanisation has both positive and negative aspects. Many soldiers return from the battlefield with psychological problems due to the nature of war and combat; post-traumatic stress disorder and drug addiction are common among returning soldiers. These problems are also the result of seeing “the other”, “the enemy”, die right in front of them. Removing the soldier from the battlefield creates physical and psychological distance from the enemy, which is beneficial for soldiers’ mental health but problematic for accountability and transparency in lethal decisions. When a machine makes life-and-death decisions, who is responsible? If an autonomous weapon makes a mistake and kills civilians, who is to blame? These questions, still unanswered, pose complex legal and moral challenges. The absence of a human figure directly involved in the decision-making process could make it difficult to assign responsibility for war crimes, complicating the application of international humanitarian law.

In addition, the autonomous nature of AI-based weapons systems could incentivise nations to engage in a technological arms race, increasing the risk of large-scale automated conflicts that reflect a shift from the ethics of combat to the colder, more detached ethics of executing military operations. The speed and efficiency of such systems could lower the barriers to using military force, making it easier for states to go to war without fully considering the human and diplomatic consequences involved.
In summary, artificial intelligence has the potential to radically transform the battlefield. It can offer significant advantages for the safety of soldiers and operational efficiency. However, the use of fully autonomous weapons systems such as killer robots raises serious ethical and legal issues that require detailed consideration. It is crucial that the international community establishes strict regulations to ensure that these technologies are used responsibly and humanely. The introduction of autonomous weapons could not only change the nature of conflicts but also affect global stability and peace. For this reason, it is essential to promote thorough ethical debate and create appropriate legal frameworks to regulate the development and use of these technologies. Only with transparent oversight and strict regulation can we ensure that artificial intelligence is an ally rather than a threat to global security. The challenge is great, but with international cooperation and a shared ethical commitment, it is possible to steer technological development towards a safer future for all.
Bibliography
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. New York: Norton.
Chamayou, G. (2014). Teoria del drone. Principi filosofici del diritto di uccidere [A theory of the drone: Philosophical principles of the right to kill] (M. Tari, Trans.). Rome: DeriveApprodi.
Limata, T. (2023). Decision-making in killer robots is not bias free. Journal of Military Ethics, 22(2), 118-128.

