The Rise of Artificial Intelligence in Modern Warfare

The integration of artificial intelligence into military operations is transforming the nature of warfare, raising significant ethical, legal, and strategic concerns. Recent developments highlight the urgency for international consensus on regulating AI-driven weaponry.
Autonomous weapon systems, capable of selecting and engaging targets without human intervention, are increasingly deployed in conflict zones. Despite their growing presence, there is no binding international legal framework governing their use. Since 2014, discussions under the Convention on Certain Conventional Weapons (CCW) have sought to address this gap, but progress remains slow. Key nations, including the United States, Russia, China, and India, prefer national guidelines over global treaties, hindering the establishment of universal standards. This lack of consensus raises concerns about potential human rights violations and the onset of an AI arms race.
The rising prominence of AI in military strategy was starkly illustrated during the April–May 2025 standoff between India and Pakistan. Following a deadly terrorist attack in Pahalgam, which claimed the lives of over two dozen civilians, tensions escalated rapidly. The Indian government attributed the assault to groups operating from Pakistani territory, leading to a series of retaliatory actions including surgical strikes, diplomatic expulsions, and the temporary suspension of key bilateral agreements such as the Indus Waters Treaty.
What made this confrontation notably different from past escalations was the role of emerging technologies. According to defense analysts, both sides employed AI-driven surveillance systems to monitor cross-border movements in real time. Automated threat detection tools were reportedly used to track drone activity, while algorithmic targeting support systems were said to have helped Indian forces pinpoint suspected militant hideouts with unprecedented speed and accuracy.
As hostilities subsided following international mediation, observers noted that the use of AI in this regional conflict was likely a preview of how future military engagements, even between conventional rivals, could evolve with technological integration.
The international community is increasingly vocal about the need to regulate AI in military contexts. In 2017, an open letter coordinated by the Future of Life Institute, signed by 126 leaders of AI and robotics companies, urged the United Nations to prevent an arms race involving autonomous weapons. Despite such appeals, a comprehensive international legal framework remains elusive. The United Nations has set a 2026 deadline for establishing clear regulations, but achieving consensus among major powers continues to be a challenge.
As AI continues to reshape military capabilities, the imperative for robust ethical guidelines and legal frameworks becomes more pressing. Ensuring human accountability, transparency, and adherence to international humanitarian law is crucial to prevent misuse and protect civilian lives. The global community must collaborate to address these challenges and establish norms that govern the responsible use of AI in warfare.