
Weaponizing AI: The moral dilemma of lethal autonomous robots

As militaries embrace robotic killers, the debate over their ethical use heats up.

“Technology is neither good nor bad; nor is it neutral.” This is perhaps the most famous of the six laws proposed by the American historian Melvin Kranzberg. In 1986, he articulated an idea that is challenging to grasp even today: the relationship between technology and its broader impacts. Kranzberg emphasized that technological development often has far-reaching environmental, social, and human consequences that extend well beyond its immediate goals. In other words, the same technology can yield entirely different results in different contexts. Robots, for instance, can help humanity by relieving people of tedious daily tasks, like loading dirty dishes into a dishwasher, but they can also be used in warfare, where they can protect or harm on a large scale. The value of robots always depends on the context in which they are deployed.
Chinese robot dog carrying a weapon. (Screenshot: YouTube)
This was vividly demonstrated during the "Golden Dragon 2024" exercise in May, when the Chinese military unveiled its new killer robot to the world. The robot, resembling a mechanical dog skeleton, was equipped with an automatic weapon and a small guidance camera on its front. In the video released by the Chinese army, the robot leads a unit of soldiers toward a building, seemingly without human guidance. The Chinese military is not alone in this endeavor. As early as 2020, the US Air Force showcased robotic dogs as part of its advanced battle management system, the product of an autonomous weapons development program that the US funded with $18 billion over four years.

The Chinese robotic dog was developed by Unitree Robotics, a company that in October 2022 had signed an open letter, alongside other leading firms such as Boston Dynamics and Agility Robotics, pledging not to weaponize its products. The letter followed an exhibition by Ghost Robotics of an autonomous armed dog robot. "We believe that adding weapons to remotely operated or autonomous robots that can navigate to previously inaccessible areas, where people live and work, raises new risks of harm and serious ethical issues," the companies stated, committing "not to add weapon technology" themselves or "support others in doing so." A year and a half later, Unitree has gone back on that promise.
Over the past 20 years, numerous non-profit organizations have emerged to combat what they term "killer robots," officially known as Lethal Autonomous Weapons Systems (LAWS). These groups have long warned that robots capable of making autonomous decisions to select and destroy targets would further dehumanize warfare, equipping combatants with weapons that cause greater damage at lower costs, ultimately exacerbating conflicts. "The real question is not about the two extremes—full autonomy or no autonomy—but rather what degree of autonomy we are willing to grant machines, as we already give them some level of autonomy," says Dr. Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence. "We’re just haggling over the price—what degree of autonomy we’re willing to give the machine."
Currently, robots deployed on battlefields mostly operate with a human in the decision-making loop. However, militaries and private companies are working to develop fully autonomous systems by combining artificial intelligence and robotics. Countries such as Russia, China, Australia, Israel, and many others are advancing in this area, either by developing new weapons or by integrating civilian-developed systems into existing military tools. "This is the Oppenheimer moment of our generation," said Austrian Foreign Minister Alexander Schallenberg at a major conference on autonomous weapons held in Vienna in April, referencing Robert Oppenheimer, the father of the atomic bomb. Schallenberg and many others view this as a moral decision point on the third revolution in warfare, after gunpowder and nuclear arms: the autonomous weapon.
Even robots built to kill have potential advantages. "In defensive situations, they can save lives," explains Etzioni, suggesting that autonomous robots could be used for tasks like clearing mines. Some argue that using autonomous robots might even be a more ethical approach to warfare: robots do not fight for their own survival, and so are not driven by fear, prejudice, or hatred, potentially avoiding unnecessary killing.
Opposition to these weapons generally falls into two categories: technological limitations, and legal and ethical objections. On the technical side, a robot empowered to decide who lives and who dies may struggle with the practical judgments the laws of war require, such as proportionality, necessity, and distinction. On the legal side, there is the question of responsibility: if a robot commits a war crime, who is held accountable? The robot, the soldier who deployed it, the commanding officer, or the corporation that created it?
Dr. Tal Mimran, an expert in international law at the Hebrew University, highlights that the problem is not just that artificial intelligence makes mistakes, but that when it does, humans may not understand why a decision was made. The algorithm guiding the robot or system is often a "black box." "The IDF claims that their AI system, 'Habsora' ('The Gospel'), which generates targets, provides an output that shows the intelligence researcher what information was used," Mimran notes. "But we know from parallel systems—for example, the generalization system in Israel—that when the police tried to explain its operation before a constitutional committee, they could not do so."
There is overwhelming agreement that this issue must be addressed. However, the longer action is delayed, the harder it becomes to manage. A total ban, as advocated by many in academia, seems impractical: countries like the US, China, the UK, and Russia argue that a ban is too extreme and premature. What remains is regulation. "A complete stop or cessation is not really feasible," Etzioni concludes, "but we must work towards better defining the laws of war in relation to this issue."
First published: 08:36, 26.08.24