AI in conflict: Keeping humanity in control
By R Anil Kumar
It is already here, shaping conflicts and challenging the rules of war. Without effective regulation, AI risks eroding humanitarian protection and destabilising peace. With proper governance, it could be harnessed to strengthen compliance with the laws of war and reduce human suffering
New York. October 8, 2025
A debate moves from fiction to reality
For decades, popular culture has imagined the perils of AI-driven machines influencing decisions of war, from Colossus: The Forbin Project (1970), which portrayed a supercomputer seizing nuclear command, to the Terminator series (from 1984), which gave the world “Skynet”, a defence system that becomes self-aware and exterminates humanity.
At the eightieth session of the United Nations General Assembly (UNGA80), such scenarios no longer seemed far removed from fiction.
Artificial intelligence (AI) is already present in today’s conflicts, and world leaders used this year’s High-Level Week to debate how to keep the rules of war intact in the face of rapid technological change.
“Humanity’s fate cannot be left to an algorithm,” Secretary-General António Guterres warned at a Security Council open debate on AI and international peace and security on 24 September.
“Humans must always retain authority over life-and-death decisions.”
How AI is already being used
The debates in New York acknowledged that the deployment of AI in conflicts is no longer hypothetical: several conflicts over the past year have seen documented military use of AI.
The Secretary-General’s June 2025 report entitled “Artificial intelligence in the military domain and its implications for international peace and security” lists several ways AI is already shaping military operations across various domains.
Target analysis: AI tools have been used to generate strike recommendations. While they significantly increase speed and allow commanders to process more data, they raise concerns about proportionality and human oversight, especially if humans defer excessively to automated outputs.
Identification of individuals: Some systems maintain databases linking people to armed groups, risking misidentification and undermining the principle of distinction if data are biased or incomplete.
Autonomous navigation: AI-enabled uncrewed systems have been documented guiding final approaches even under electronic interference. This improves accuracy, but it shifts critical judgments away from human operators, raising questions of meaningful human control.
Defensive systems: Several governments have announced AI-driven air defences that can autonomously detect, track and intercept threats. While such systems may save lives, they also risk unintended escalation in fast, machine-to-machine exchanges.
Ground robotics: AI-assisted robots have been deployed for reconnaissance, logistics and even combat roles, raising questions about accountability and liability when mistakes occur in complex environments.
Each of these applications illustrates the central challenge: from a military standpoint, AI can enhance efficiency, but without rigorous safeguards, it risks undermining international humanitarian law. As the Secretary-General stressed, “Human control and judgment must be preserved in every use of force.”
Risks that cannot be ignored
Leaders repeatedly warned of the dangers of allowing AI to outpace international humanitarian law. AI-enabled weapons may struggle to uphold the principles of distinction, proportionality and precaution.
Complex battlefields already test human judgment in distinguishing between combatants and civilians; for machines, the challenge is even greater, particularly in urban settings where civilians and fighters often intermingle.
Decision-making by algorithms can also be opaque and unpredictable, complicating accountability and increasing the risk of disproportionate or indiscriminate attacks.
The United Nations points to three broad categories of risk:
Technological: AI is only as reliable as its data. As the United Nations Institute for Disarmament Research (UNIDIR) warns: “If an AI system has not encountered a certain scenario in training data, it may respond unpredictably in the real world … biased algorithms might misidentify civilians as combatants.”
Security: AI could speed up conflict dynamics. The United Nations Office for Disarmament Affairs (UNODA) has cautioned against “flash wars”, in which algorithmic escalation intensifies a crisis before humans can intervene.
The proliferation of dual-use AI to non-state actors further expands the threat.
Legal and ethical: Accountability is blurred. International law holds states and individuals responsible, but, as the Secretary-General’s June 2025 report notes, AI may “obfuscate the linearity of this process.”
Addressing the General Assembly on 24 September, Ukrainian President Volodymyr Zelenskyy cautioned: “It is only a matter of time before drones are fighting drones, attacking critical infrastructure and targeting people fully autonomous – all by themselves … no human involved.”
That concern was echoed by António Costa, President of the European Council: “Most dangerous, the development of lethal autonomous weapons systems threatens to remove human accountability from decisions of life and death. The risks are real: miscalculation, escalation, and proliferation. We must act before the tipping points become irreversible.”
The Secretary-General reiterated his call for a ban on lethal autonomous weapons systems operating without human control, with a view to concluding a legally binding instrument by next year. He added that “until nuclear weapons are eliminated, any decision on their use must rest with humans — not machines.”
Mr Guterres had his own stark warning on these new unconventional hybrid threats, citing “AI-enabled cyber attacks” that “can disrupt or destroy critical infrastructure in minutes.” And the “ability to fabricate and manipulate audio and video threatens information integrity, fuels polarization, and can trigger diplomatic crises.”
Together, these warnings underscore a common theme of High-level Week: unchecked AI risks increasing civilian casualties, accelerating crises, lowering the threshold for conflict, and leaving humans less able to intervene before an escalation spirals out of control.
Opportunities alongside dangers
Delegates at the UN’s 80th General Assembly also recognised that AI could, if responsibly used, support humanitarian protection rather than undermine it. The United Nations Institute for Disarmament Research (UNIDIR) highlights in a July report how AI applications could help militaries better uphold the principles of distinction, proportionality and precaution.
Command and control: decision-support tools can help commanders integrate proportionality assessments into planning, potentially reducing civilian harm.
Intelligence and surveillance: AI systems can enhance situational awareness by analysing vast data streams, enabling faster detection of violations or risks.
Logistics and training: predictive maintenance can prevent equipment failures, while realistic simulations can better prepare forces for complex environments.
Non-lethal support: AI can be utilised to enhance medical diagnostics for deployed personnel and to strengthen supply chain management, ensuring that humanitarian considerations are not overlooked.
These examples show that AI is not inherently destabilising; instead, outcomes depend on whether systems are designed and deployed responsibly.
As UNIDIR notes, “If developed, deployed and used responsibly, AI could increase operational effectiveness while offering new ways to mitigate risks and reduce harm.”
The challenge, therefore, is not only to draw red lines but also to foster responsible innovation that strengthens adherence to humanitarian law.
From principles to practice
High-level Week confirmed that momentum is growing towards international regulation. Within the framework of the Convention on Certain Conventional Weapons, governments continue to examine prohibitions and restrictions.
A two-tier approach is emerging: prohibit systems that cannot comply with international humanitarian law, and strictly regulate others to ensure meaningful human control, transparency and accountability.
A General Assembly resolution adopted in December 2024 had already affirmed that international law applies throughout the life cycle of military AI.
The Secretary-General’s June 2025 follow-up report compiled inputs from Member States, civil society and technical experts, reflecting broad concern that unchecked AI could destabilise global security and erode humanitarian protection.
“The question,” Guterres told the high-level meeting launching the Global Dialogue on AI Governance on 25 September, “is whether we will govern this transformation together — or let it govern us.”
A system-wide UN response
At this year’s General Assembly, speakers underlined the need for a coordinated UN system response. The Secretary-General set out four urgent priorities for governments and the Security Council:
First, ensure human control over the use of force, banning lethal autonomous weapons that operate without it.
Second, build coherent global regulatory frameworks that require legal reviews, human accountability, safeguards, and transparency, especially in conflict settings.
Third, protect information integrity, countering deepfakes and AI-driven disinformation that could inflame crises or obstruct humanitarian action.
Fourth, close the AI capacity gap by investing in skills, data diversity, computing power and safety infrastructure so all countries can apply effective safeguards.
These measures, he stressed, are essential to prevent AI from destabilising peace and security while ensuring it is used to serve humanity.
Supporting these priorities are a series of existing UN system initiatives:
The United Nations Educational, Scientific and Cultural Organization’s (UNESCO) Recommendation on the Ethics of Artificial Intelligence sets global standards for transparency and fairness.
The Office for Disarmament Affairs and the Group of Governmental Experts on Lethal Autonomous Weapons continue technical and legal work on prohibitions and restrictions.
UNIDIR provides analysis and roadmaps to guide States on responsible military AI governance.
The Global Dialogue on AI Governance was launched at UNGA80 as the first universal platform to shape international norms.
An Independent International Scientific Panel on AI is being established to deliver early warning, evidence and advice.
The Secretary-General has proposed innovative financing and a prospective Global Fund for AI Capacity Development to close global divides.
Together, these aim to ensure that AI strengthens peace and humanitarian protection, rather than undermining them.
Keeping humanity in the loop
The message from New York was clear: AI in war is not a future problem. It is already here, shaping conflicts and challenging the rules of war.
Without effective regulation, AI risks eroding humanitarian protection and destabilising peace. With proper governance, it could be harnessed to strengthen compliance with the laws of war and reduce human suffering.
“The question is not whether AI will influence international peace and security, but how we will shape that influence,” the Secretary-General said.