TLDR: India is actively pursuing ethical and strategic leadership in the application of AI within its military, with the Defence Research and Development Organisation (DRDO) introducing the ETAI (Evaluating Trustworthiness in AI) framework in 2024. This framework focuses on reliability, safety, transparency, fairness, and privacy. While institutional bodies like the Defence AI Council (DAIC) and Defence AI Project Agency (DAIPA) have been established, the nation faces the crucial challenge of transitioning from theoretical principles to enforceable practices. India aims to not only strengthen its domestic structures but also emerge as a normative leader for the Global South in responsible military AI governance.
India is making significant strides towards establishing itself as a global leader in the ethical and strategic application of Artificial Intelligence (AI) within its military. This ambitious path is underscored by a commitment to integrate cutting-edge technology with principled governance, as detailed in recent analyses by experts like Zain Pandit and Aashna Nahar of JSA Advocates and Solicitors.
At the core of India’s approach is the ETAI (Evaluating Trustworthiness in AI) framework, introduced by the Defence Research and Development Organisation (DRDO) in 2024. This framework is built upon five fundamental pillars: reliability, safety, transparency, fairness, and privacy. The objective of ETAI is to ensure that military personnel, decision-makers, and the public can place unwavering trust in the AI technologies deployed for national security. Reliability demands accurate system performance even in chaotic battlefield scenarios, while safety focuses on guardrails against unintended consequences, particularly with autonomous systems.
To reinforce the ETAI framework, India has established key decision-making bodies, including the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA). These entities are tasked with guiding projects, coordinating research, and providing direction in a rapidly evolving technological landscape. However, current assessments indicate that these bodies primarily function as conveners, bringing stakeholders together without the power to compel compliance, highlighting a critical gap between principle and practice.
Experts suggest that India can draw valuable lessons from international counterparts, such as the United States Department of Defense (DoD). The DoD’s Responsible AI principles—responsibility, equity, traceability, reliability, and governability—are translated into clear operational practices, with oversight resting with the Chief Digital and AI Office. This ensures accountability through impact assessments and trustworthiness evaluations, fostering a culture where AI systems are not opaque ‘black boxes’ but reliable tools with transparent decision-making processes.
For India, the crucial next step involves augmenting its existing framework with enforceable mechanisms. This includes mandating traceability in every deployed system, making human override a non-negotiable aspect, and, most importantly, creating a statutory Defence AI Regulatory Authority. Such an authority would possess the powers to certify, investigate, and enforce compliance, thereby giving substance to the ethical frameworks already in place. Without robust enforcement, India risks having strong ideas but weak execution.
Beyond domestic reforms, India sees a significant opportunity to lead globally, particularly for the Global South. Many developing nations aspire to integrate AI into their defence strategies but lack the resources to design governance frameworks from scratch. India is uniquely positioned to provide templates that balance innovation with restraint, offering a model that combines advanced technology with ethical considerations. This leadership role would involve advocating for binding agreements in international forums such as the UN Convention on Certain Conventional Weapons (CCW) to regulate Lethal Autonomous Weapons Systems (LAWS), ensuring compliance with International Humanitarian Law.
Further investment is also needed in human capacity: training and upskilling the defence and technical workforce to use AI responsibly, blending technical skill with ethical awareness. Ethical and risk-impact assessments should become mandatory for all defence AI procurement. Additionally, India must invest in infrastructure for stress-testing AI systems. Initiatives like AIRAWAT, which already provides cloud computing power and datasets for civilian AI, could be adapted into a defence-specific platform to simulate battlefield pressures and test AI systems under extreme conditions. This comprehensive approach, combining ambition with concrete reforms, is vital if India is to ensure that ethical safeguards are not merely discussion points but foundational elements of its strategic strength in defence modernization.