DEFENCE INDUSTRY | FOREIGN AFFAIRS | TECHNOLOGY

AI in conflict: Keeping humanity in control

By R Anil Kumar

AI in war is already here, shaping conflicts and challenging the rules of war. Without effective regulation, it risks eroding humanitarian protection and destabilising peace. With proper governance, it could be harnessed to strengthen compliance with the laws of war and reduce human suffering

New York, October 8, 2025

Image courtesy: UN

A debate moves from fiction to reality

For decades, popular culture has imagined the perils of AI-driven machines influencing decisions of war, from Colossus: The Forbin Project (1970), which portrayed a supercomputer seizing nuclear command, to the Terminator series (from 1984), which gave the world “Skynet”, a defence system that becomes self-aware and exterminates humanity.

At the eightieth session of the United Nations General Assembly (UNGA80), such scenarios no longer seemed far removed from fiction.

Artificial intelligence (AI) is already present in today’s conflicts, and world leaders used this year’s High-Level Week to debate how to keep the rules of war intact in the face of rapid technological change.

“Humanity’s fate cannot be left to an algorithm,” Secretary-General António Guterres warned at a Security Council open debate on AI and international peace and security on 24 September.

“Humans must always retain authority over life-and-death decisions.”

How AI is already being used

The debates in New York acknowledged that the deployment of AI in conflicts is no longer hypothetical: several conflicts over the past year have seen documented military use of AI.

The Secretary-General’s June 2025 report entitled “Artificial intelligence in the military domain and its implications for international peace and security” lists several ways AI is already shaping military operations across various domains.

Target analysis: AI tools have been used to generate strike recommendations. While they significantly increase speed and allow commanders to process more data, they raise concerns about proportionality and human oversight, especially if humans defer excessively to automated outputs.

Identification of individuals: Some systems maintain databases linking people to armed groups, risking misidentification and undermining the principle of distinction if data are biased or incomplete.

Autonomous navigation: AI-enabled uncrewed systems have been documented guiding final approaches even under electronic interference. This improves accuracy, but it shifts critical judgments away from human operators, raising questions of meaningful human control.

Defensive systems: Several governments have announced AI-driven air defences that can autonomously detect, track and intercept threats. While such systems may save lives, they also risk unintended escalation in fast, machine-to-machine exchanges.

Ground robotics: AI-assisted robots have been deployed for reconnaissance, logistics and even combat roles, raising questions about accountability and liability when mistakes occur in complex environments.

Each of these applications illustrates the central challenge: from a military standpoint, AI can enhance efficiency, but without rigorous safeguards it risks undermining international humanitarian law. As the Secretary-General stressed, human control and judgment must be preserved in every use of force.
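The “human control in every use of force” requirement can be pictured as an approval gate: an AI system may score and recommend, but nothing executes without an explicit human decision. The sketch below is purely illustrative and assumes invented names, thresholds and a hypothetical `StrikeRecommendation` type; it does not describe any real system discussed in the debates:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StrikeRecommendation:
    target_id: str
    confidence: float          # model confidence in the identification
    est_civilian_risk: float   # estimated probability of civilian harm

def human_in_the_loop(rec: StrikeRecommendation,
                      operator_approves: Callable[[StrikeRecommendation], bool]) -> str:
    """The AI may only recommend; execution needs an explicit human decision."""
    # Precaution gate: low-confidence or high-risk outputs never reach execution
    if rec.confidence < 0.9 or rec.est_civilian_risk > 0.05:
        return "rejected: fails precaution threshold"
    # Meaningful human control: a person must actively authorise
    if not operator_approves(rec):
        return "rejected: human operator declined"
    return f"engaged {rec.target_id}"
```

A real system would involve far richer legal review, but the shape of the safeguard is the same: the algorithm’s output is an input to a human decision, never a substitute for it.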

Risks that cannot be ignored

Leaders repeatedly warned of the dangers of allowing AI to outpace international humanitarian law. AI-enabled weapons may struggle to uphold the principles of distinction, proportionality and precaution.

Complex battlefields already test human judgment in distinguishing between combatants and civilians; for machines, the challenge is even greater, particularly in urban settings where civilians and fighters often intermingle.

Decision-making by algorithms can also be opaque and unpredictable, complicating accountability and increasing the risk of disproportionate or indiscriminate attacks.

The United Nations points to three broad categories of risk:

Technological: AI is only as reliable as its data. As the United Nations Institute for Disarmament Research (UNIDIR) warns: “If an AI system has not encountered a certain scenario in training data, it may respond unpredictably in the real world … biased algorithms might misidentify civilians as combatants.”

Security: AI could speed up conflict dynamics. The United Nations Office for Disarmament Affairs (UNODA) has cautioned against “flash wars”, in which algorithmic escalation intensifies a crisis before humans can intervene.

The proliferation of dual-use AI to non-state actors further expands the threat.

Legal and ethical: Accountability is blurred. International law holds states and individuals responsible, but, as the Secretary-General’s June 2025 report notes, AI may “obfuscate the linearity of this process.”
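UNIDIR’s warning about scenarios absent from training data corresponds to a familiar engineering safeguard: a model that abstains and defers to a human whenever an input lies far from anything it was trained on. A toy illustration, with data, labels and threshold invented for the example:

```python
import math

# Toy training set: (feature vector, label) pairs the model has "seen"
TRAINING = [((0.10, 0.20), "vehicle"), ((0.15, 0.25), "vehicle"),
            ((0.90, 0.80), "launcher"), ((0.85, 0.75), "launcher")]

def classify_with_abstention(x, threshold=0.3):
    """Nearest-neighbour classifier that abstains on unfamiliar inputs.

    If the input is farther than `threshold` from every training example,
    the model has no reliable basis for an answer and defers to a human.
    """
    dist, label = min((math.dist(x, t), lbl) for t, lbl in TRAINING)
    if dist > threshold:
        return "abstain"
    return label
```

The point is not the toy classifier but the failure mode it guards against: without the abstention branch, the function would confidently return a label even for inputs unlike anything it has ever seen.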

Addressing the General Assembly on 24 September, Ukrainian President Volodymyr Zelenskyy cautioned: “It is only a matter of time before drones are fighting drones, attacking critical infrastructure and targeting people fully autonomous – all by themselves … no human involved.”

The concern was echoed by António Costa, President of the European Council: “Most dangerous, the development of lethal autonomous weapons systems threatens to remove human accountability from decisions of life and death. The risks are real: miscalculation, escalation, and proliferation. We must act before the tipping points become irreversible.”

The Secretary-General reiterated his call for a ban on lethal autonomous weapons systems operating without human control, with a view to concluding a legally binding instrument by next year. He added that “until nuclear weapons are eliminated, any decision on their use must rest with humans — not machines.”

Mr Guterres had his own stark warning on these new unconventional hybrid threats, citing “AI-enabled cyber attacks” that “can disrupt or destroy critical infrastructure in minutes.” And the “ability to fabricate and manipulate audio and video threatens information integrity, fuels polarization, and can trigger diplomatic crises.”

Together, these warnings underscore a common theme of High-level Week: unchecked AI risks increasing civilian casualties, accelerating crises, lowering the threshold for conflict, and leaving humans less able to intervene before an escalation spirals out of control.

Opportunities alongside dangers

Delegates at the UN’s 80th General Assembly also recognised that AI could, if responsibly used, support humanitarian protection rather than undermine it. The United Nations Institute for Disarmament Research (UNIDIR) highlighted in a July report how AI applications could help militaries better uphold the principles of distinction, proportionality and precaution.

Command and control: decision-support tools can help commanders integrate proportionality assessments into planning, potentially reducing civilian harm.

Intelligence and surveillance: AI systems can enhance situational awareness by analysing vast data streams, enabling faster detection of violations or risks.

Logistics and training: predictive maintenance can prevent equipment failures, while realistic simulations can better prepare forces for complex environments.

Non-lethal support: AI can be utilised to enhance medical diagnostics for deployed personnel and to strengthen supply chain management, ensuring that humanitarian considerations are not overlooked.

These examples show that AI is not inherently destabilising; instead, outcomes depend on whether systems are designed and deployed responsibly.

As UNIDIR notes, “If developed, deployed and used responsibly, AI could increase operational effectiveness while offering new ways to mitigate risks and reduce harm.”

The challenge, therefore, is not only to draw red lines but also to foster responsible innovation that strengthens adherence to humanitarian law.

From principles to practice

High-level Week confirmed that momentum is growing towards international regulation. Within the framework of the Convention on Certain Conventional Weapons, governments continue to examine prohibitions and restrictions.

A two-tier approach is emerging: prohibit systems that cannot comply with International Humanitarian Law, and strictly regulate others to ensure meaningful human control, transparency and accountability.

The General Assembly’s resolution adopted in December 2024 had already affirmed that international law applies throughout the life cycle of military AI.

The Secretary-General’s June 2025 follow-up report compiled inputs from Member States, civil society and technical experts, reflecting broad concern that unchecked AI could destabilise global security and erode humanitarian protection.

“The question,” Guterres told the high-level meeting launching the Global Dialogue on AI Governance on 25 September, “is whether we will govern this transformation together — or let it govern us.”

A system-wide UN response

At this year’s General Assembly, speakers underlined the need for a coordinated UN system response. The Secretary-General set out four urgent priorities for governments and the Security Council:

First, ensure human control over the use of force, banning lethal autonomous weapons that operate without it.

Second, build coherent global regulatory frameworks that require legal reviews, human accountability, safeguards, and transparency, especially in conflict settings.

Third, protect information integrity, countering deepfakes and AI-driven disinformation that could inflame crises or obstruct humanitarian action.

Fourth, close the AI capacity gap by investing in skills, data diversity, computing power and safety infrastructure so all countries can apply effective safeguards.

These measures, he stressed, are essential to prevent AI from destabilising peace and security while ensuring it is used to serve humanity.

Supporting these priorities are a series of existing UN system initiatives:

The United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence sets global standards for transparency and fairness.

The Office for Disarmament Affairs and the Group of Governmental Experts on Lethal Autonomous Weapons continue technical and legal work on prohibitions and restrictions.

UNIDIR provides analysis and roadmaps to guide States on responsible military AI governance.

The Global Dialogue on AI Governance was launched at UNGA80 as the first universal platform to shape international norms.

An Independent International Scientific Panel on AI is being established to deliver early warning, evidence and advice.

The Secretary-General has proposed innovative financing and a prospective Global Fund for AI Capacity Development to close global divides.

Together, these aim to ensure that AI strengthens peace and humanitarian protection, rather than undermining them.

Keeping humanity in the loop

The message from New York was clear: AI in war is not a future problem. It is already here, shaping conflicts and challenging the rules of war.

Without effective regulation, AI risks eroding humanitarian protection and destabilising peace. With proper governance, it could be harnessed to strengthen compliance with the laws of war and reduce human suffering.

“The question is not whether AI will influence international peace and security, but how we will shape that influence,” the Secretary-General said.
