‘Killer Robots,’ AI & The Future Of Warfare

Representational Image: Public domain
Mass-producing autonomous weapons could trigger “flash wars,” rapidly escalate conflicts and exacerbate the unpredictability of AI weapons systems.

Mohsen Fakhrizadeh lived in the shadows. He was a nondescript professor of nuclear physics at Imam Hussein University in Tehran. He was dour and humourless, with almost no public appearances, either in academia or the media. In the West, however, his name consistently featured in highly classified documents of the International Atomic Energy Agency. Every confidential file related to Iran’s nuclear activities bore his signature: Project AMAD, Iran’s nuclear weapons programme; S.P.N.D., the Organisation of Defence Innovation and Research; and the Green Salt Project, which was associated with uranium production and enrichment.

For the CIA, he was an enigmatic figure. Untraceable and unreachable, he was beyond recruitment as an intelligence asset. And within the boardrooms of Mossad, he was public enemy number one, long regarded as the chief architect of Iran’s covert nuclear weapons programme. For the Israeli regime, then, signing Fakhrizadeh’s death warrant was an inevitable outcome.

On November 27, 2020, at the height of the coronavirus lockdowns, with roads deserted, Dr Mohsen Fakhrizadeh was behind the wheel on his ritual weekend route to his villa in Absard when he was assassinated by a satellite-controlled firearm mounted on state-of-the-art robotics. Rituals are the best weapons of intelligence agencies.

The gun was hidden in an abandoned Nissan pickup truck loaded with wooden logs. Moments later, the vehicle exploded in self-destruct mode. His wife, sitting ten inches away in the passenger seat, escaped unscathed. No on-ground assassins were reported at the scene of the crime. Despite being escorted by a convoy of three vehicles and eleven commandos, the most heavily guarded man in Iran was dead. The sensational assassination was one of the most technically complex and high-risk operations ever conducted by the Israeli spy agency. In the history of targeted killings, it heralded the dawn of remotely activated semi-autonomous weapons, advanced sensor detection, satellite connectivity and algorithmic profiling for pinpoint precision.

The AI was trained on large datasets, which helped capture a wide range of nuanced details, including the movement and speed of Fakhrizadeh’s car, as well as the relative speeds and distances of the accompanying convoy vehicles. It also accounted for the 1.6-second signal delay in satellite communication between the Belgian-made FN-MAG machine gun and the clandestine command centre thousands of miles away in Israel. Advanced facial recognition technology enabled precise identification of the target, and the system even calculated the weapon’s recoil time, integrating all of these variables to achieve flawless execution. The seamless combination of metal components, mathematical algorithms and cutting-edge electronics created a lethal weapon that operated with clinical precision and left no evidentiary trail.
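To grasp why the delay compensation mattered, consider the basic kinematics: during a 1.6-second lag, even a slow-moving car travels tens of metres. Here is a minimal back-of-the-envelope sketch in Python, assuming an illustrative vehicle speed; the operation’s actual parameters, beyond the reported delay, have never been disclosed.

```python
def lead_offset_metres(target_speed_kmph: float, delay_seconds: float) -> float:
    """Distance a target travels between a fire command being issued
    and the shot arriving, given a fixed control-loop delay."""
    speed_mps = target_speed_kmph * 1000 / 3600  # convert km/h to m/s
    return speed_mps * delay_seconds

# Illustrative numbers only: a car at 60 km/h covers roughly 27 metres
# during a 1.6-second satellite-link delay.
print(f"{lead_offset_metres(60, 1.6):.1f} m")  # -> 26.7 m
```

Any system firing through such a lag must predict where the target will be, not where it is, which is why the reporting emphasised the modelling of the car’s speed and movement.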

The development of Lethal Autonomous Weapons Systems (LAWS), also known as killer robots, marks a shift from semi-autonomous weapons to fully AI-driven systems capable of identifying and killing human targets without human intervention. Unlike remotely operated drones, LAWS function independently and can appear as armed drones, vehicles, missile systems, or sentry turrets. They use sensors such as facial recognition and radar, along with machine and deep learning algorithms, to locate and eliminate designated targets in real-world environments.

The rapid advances in autonomous weapons were enabled by technological advances in hardware, from infrared, sonar and electro-optical systems to synthetic aperture radar, artificial intelligence and robotics. Their applications run the gamut from intelligence, surveillance and reconnaissance (ISR), navigation and detection to the more benign tasks of conflict forecasting, wargaming and supply-chain logistics. When autonomous weapons systems (AWS) and AI-enabled decision support systems (AI-DSS) encroach into the perilous territory of targeted killings, they raise profound ethical, legal and humanitarian concerns.

UN Secretary-General António Guterres has called for lethal autonomous weapons to be prohibited under international law, describing them as “politically unacceptable” and “morally repugnant”. He has vehemently emphasised that “we cannot delegate life-or-death decisions to machines.”

Autonomous weapons have made their mark in the Russia-Ukraine War and in the Israeli offensive against Palestinians after Hamas’ October 7th massacre. The Ukrainian military has deployed long-range drones against Russian oil refineries to decimate the financial infrastructure funding the invasion. On March 13, 2024, the Rosneft-owned Ryazan refinery was hit multiple times by precision-guided drones. In the same month, another drone struck the Nizhnekamsk oil refinery, incidentally one of the five largest in Russia, located 1,100 kilometres from the border in the Tatarstan region. Aided by artificial intelligence, these drones have advanced capabilities to circumvent jammers.

Experts attribute the versatility of these drones to “machine vision”: their chips use machine learning to identify terrain and execute precise, pre-programmed strikes. They also operate autonomously, independent of satellite communication, making them more resilient and effective in contested environments.
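Publicly documented versions of this idea are often described as scene matching: correlating the onboard camera’s view against a stored reference map to fix the aircraft’s position without GPS or a satellite link. Below is a minimal sketch of the concept using OpenCV template matching; every name here is illustrative, and fielded systems are undisclosed and far more sophisticated.

```python
import cv2
import numpy as np

def locate_in_reference(frame_gray: np.ndarray, reference_map: np.ndarray):
    """Find the (x, y) offset of a camera frame within a larger stored
    reference image, plus a normalised match score (higher is better)."""
    result = cv2.matchTemplate(reference_map, frame_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score

# Illustrative usage: match one grayscale camera frame against a
# pre-loaded satellite mosaic of the expected flight corridor.
# reference = cv2.imread("reference_map.png", cv2.IMREAD_GRAYSCALE)
# frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
# position, confidence = locate_in_reference(frame, reference)
```

Because the matching runs entirely on the onboard chip against pre-loaded imagery, there is no radio link to jam, which is the resilience the experts describe.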

Ukraine has made significant strides in leveraging autonomous weapon technologies to enhance its military capabilities. Notably, Ukrainian forces have effectively utilised loitering munitions such as the Switchblade 300 and Switchblade 600, manufactured by Arlington-based AeroVironment (AV). These systems are distinguished by their advanced aerial reconnaissance and navigation features, enabling precise targeting and situational awareness on the battlefield.

In addition to loitering munitions, Ukraine has widely deployed the Turkish-made Bayraktar TB2 drone. Recognised as a versatile multipurpose platform, the Bayraktar TB2 carries an onboard laser-designation sensor for accurate target acquisition and a payload of four smart munitions. This unmanned aerial combat system excels in intelligence, surveillance and reconnaissance (ISR) operations and supports armed attack missions. The Bayraktar TB2 has played a critical role in several recent conflicts, including operations against Kurdish and Syrian forces in the Middle East. It was also employed by the Azerbaijani military during the 2020 Nagorno-Karabakh War to target Russian-made air defence systems and tanks.

Conversely, Russia has advanced its deployment of autonomous weapons through the KUB-BLA loitering munition. Developed by Zala Aero, a subsidiary of the Kalashnikov Group, the KUB-BLA incorporates sophisticated Artificial Intelligence Visual Identification (AIVI) technology. This AI-driven system enables real-time identification and classification of targets, enhancing the effectiveness of Russian operations in the ongoing conflict with Ukraine. As a kamikaze drone, the KUB-BLA is primarily used to destroy small targets on land and at sea.

Yossi Sariel, the erstwhile commander of the elite Unit 8200 of the Israeli Defence Forces (IDF), published a book in 2021 titled “The Human Machine Team”. In its pages, Sariel advocates the creation of a “special machine” capable of circumventing the time constraints imposed by analysing massive volumes of data during active conflict. His vision centres on systems that can swiftly process information to generate thousands of potential targets in real time amid ongoing warfare.

Such weapons already exist in the Israeli arsenal, and they have been mercilessly deployed to kill innocent Palestinian civilians and wipe out their homes and livelihoods in one of the most gruesome and remorseless genocides of the 21st century. One prominent example is the system codenamed “Lavender,” an AI-driven targeting platform introduced in the early days of Israel’s retaliatory military offensive. Lavender is specifically designed to identify and mark alleged terrorist operatives belonging to Hamas and Palestinian Islamic Jihad (PIJ) within the Gaza Strip.

Using advanced AI targeting, Lavender compiled a list of 37,000 Palestinians as suspected operatives. Based on that output, Israel embarked on a murderous rampage, mostly killing individuals when they were at home with their families, disengaged from any military activity. The system was reported to be only about 90 per cent accurate in its identifications; across a list of 37,000, that implies roughly 3,700 people wrongly marked. It nonetheless received authorisation to guide lethal strikes. The result was a murderous bombardment of Gaza: residential homes, hospitals and civilian neighbourhoods pulverised, a slew of assassinations, and thousands of innocent civilians butchered, including women and children unconnected to Hamas.

This massacre, justified under the pretext of self-defence, has been criticised as a flagrant violation of International Humanitarian Law (IHL), specifically the Martens Clause. The deployment of such autonomous systems supplanted human judgment in determining who should live and who should die, raising grave ethical and legal concerns. Amoral machines and morally repugnant dehumanisation reduced human lives to mere data points.

Other automated systems, such as “Where’s Daddy?”, were deployed to geolocate the mobile phones of targeted individuals and eliminate them through airstrikes when they entered flagged locations, mostly their homes. Israel introduced yet another system, “The Gospel,” tasked with compiling lists of non-human targets (military infrastructure, tunnels, family homes) to be bombed wherever authorities believed terrorists were hiding. If Lavender focused on targeting people, The Gospel extended that targeting to physical structures, further expanding the scope of autonomous targeting technologies in modern warfare.

The use of autonomous weapons in modern warfare raises concerns about compliance with international human rights law, particularly regarding the right to life and decisions around launching or aborting attacks. Palmer Luckey, the founder of Anduril, a Silicon Valley defence company that manufactures lethal autonomous attack drones, unmanned fighter jets, and submersibles, has secured defence contracts with the Pentagon. He argues that AI systems can better distinguish targets than conventional landmines, which cannot, for example, differentiate between a school bus and a Russian armoured military vehicle. 

Palmer Luckey argues that major US defence contractors—Boeing, Northrop Grumman, Raytheon Technologies, General Dynamics, and Lockheed Martin—use outdated and overpriced business models that are unsustainable in the face of the growing demands of future warfare. He claims these firms drain taxpayer money by assuming fewer risks in the value chain and relying on DoD funding, unlike companies like Anduril, which invest in their own R&D and manufacturing.

His goal is to make the US the world’s exclusive gun store, with a minimal physical footprint for its soldiers on international missions. He rejects putting a pilot in every combat aircraft, instead favouring a hundred autonomous jets remotely overseen by a single pilot. His priorities are reducing US casualties and countering Russia and China in autonomous warfare technologies and products. He views autonomous weapons as the language of deterrence, one that could foster global peace. His company’s mission statement is to “save western civilisation,” a phrase that quietly conflates western civilisation with human civilisation.

The irony of such a bigoted outlook is evident in the way it diminishes the value of lives other than those of US citizens, subordinating them to the country’s hegemonic geopolitical objectives and myopic foreign policies. This perspective raises critical questions: Is there a tacit acceptance of collateral damage and the mass killings of non-US soldiers and civilians? The ethical and moral quandaries associated with autonomous weapons remain profoundly troubling and are likely to persist without resolution.

A key concern with AI-driven weapon systems is their ability to distinguish between peaceful and hostile gatherings accurately, as errors could endanger civilians exercising their right to assemble. Deploying these systems requires extensive mass surveillance, posing significant privacy risks by potentially collecting vast amounts of personal data.

Another pressing issue is the potential for algorithmic biases to be introduced during the programming and training of these models. Choices made by developers, whether related to nationality, race, gender or even dress code, could inadvertently influence the behaviour and decisions of autonomous systems deployed in the field. The risk of these biases producing discriminatory outcomes is a critical concern for the ethical deployment of such technologies, as the sketch below illustrates. Legal challenges also arise from the “accountability gap”: when an autonomous weapon makes an opaque, “black box” determination, it is unclear whether the programmers, the developers or the commanders who deploy it should answer for the outcome.
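The bias concern can be made concrete with a simple audit: compare a classifier’s false-positive rate across two groups. In the hypothetical sketch below (all data synthetic, all numbers illustrative), a model with respectable-looking overall accuracy misfires three times as often on one group.

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of true negatives that the model wrongly flags as positive."""
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean())

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 10_000)   # ground truth: 0 = harmless, 1 = hostile
group = rng.integers(0, 2, 10_000)    # a protected attribute (synthetic)

# Simulate a model whose errors are skewed: it misfires on 15% of
# group-1 samples but only 5% of group-0 samples.
y_pred = y_true.copy()
flip = rng.random(10_000) < np.where(group == 1, 0.15, 0.05)
y_pred[flip] = 1 - y_pred[flip]

for g in (0, 1):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.3f}")
```

In a weapons context, a skewed false-positive rate is not a statistical footnote; it means one population is systematically more likely to be wrongly marked as a target.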

The potential for autonomous weapons to be mass-produced with substantial cost efficiencies could trigger “flash wars” and rapid escalations of conflict, exacerbating the unpredictability of AI weapons systems. The widespread adoption of AI weapons could also constrain ordinary, non-military AI research through travel restrictions, publication censorship and security clearances, reminiscent of the Cold War’s impact on nuclear physics, rocketry and academic freedom. The resulting environment may stifle innovation and hinder the open exchange of ideas within the broader AI research community. An even greater threat is the possibility of this technology falling into the hands of rogue non-state actors and terrorist groups, jeopardising national sovereignty and global security.

As the world enters another “Oppenheimer Moment,” concerted efforts and multilateral treaties are warranted to prevent the design, development and deployment of autonomous weapons systems bereft of human intervention. The autonomous AI arms race has begun. The unpredictability of these lethal weapons needs to be juxtaposed with the predictability of the human penchant for self-destruction. This lose-lose combination sets the stage for mutually assured destruction (MAD) with no human agency, zero accountability and subversion of the right to remedy. Regrettably, it will be the defining paradigm of autonomous weapons and future warfare.

—30—

