The Ministry of Electronics and Information Technology in India unveiled the AI Governance Guidelines on November 5, 2025, outlining a regulatory framework for leveraging artificial intelligence in the country’s automotive sector. As the industry undergoes a significant transformation from traditional mechanics to a data-centric ecosystem, the guidelines aim to ensure that AI systems in autonomous vehicles (AVs) and Advanced Driver-Assistance Systems (ADAS) deliver on their promises of safety and efficiency. The core challenge lies in safeguarding human safety, preserving accountability, and protecting data privacy amid the industry’s growing reliance on AI.
These guidelines introduce ‘Entity-Based’ and ‘Activity-Based’ regulations, focusing on three critical areas: Vehicle Safety and Liability, Manufacturing Efficiency, and Ethical Data Ecosystems. As India embraces these advanced technologies, the integration of AI into driving functions raises significant questions about safety and accountability, directly challenging the existing legal framework established by the Motor Vehicles Act of 1988. This outdated legislation primarily addresses human error, leaving a legal gap for AI-driven systems where human control is minimal or absent.
One of the key implications of the guidelines is the shift in liability from human drivers to manufacturers, software providers, and original equipment manufacturers (OEMs) as the automation level increases, particularly at SAE Level 3 and above. This transition aligns with global trends in product liability, where accidents stemming from poorly designed or malfunctioning software can implicate not only the manufacturer but also developers involved in deploying the AI system. The guidelines mandate a graded liability system, promoting transparency about the roles and responsibilities of various stakeholders in the AI value chain.
Furthermore, the ethical dilemmas of AI decision-making come to the forefront. AVs and advanced ADAS may face unavoidable crash scenarios, often likened to the “trolley problem,” in which algorithms must prioritize lives according to pre-set programming criteria. To build public trust and support accident investigations, the guidelines insist on transparency in decision-making algorithms, compelling manufacturers to provide clear explanations of the choices made by AI systems. Additionally, the ‘Fairness & Equity’ principle requires manufacturers to conduct bias testing to address potential disparities in AI performance across the demographics and road environments specific to India.
The guidelines also emphasize the importance of operational safety and compliance with international standards as essential components for the adoption of automotive AI. The Bureau of Indian Standards (BIS) and the Telecommunication Engineering Centre (TEC) are tasked with developing necessary safety standards and certifications for AVs and ADAS. The establishment of the AI Safety Institute (AISI) will play a crucial role in enforcing standardized safety guidelines and conducting rigorous testing before deployment, ensuring that governance is informed by scientific evidence.
In addition to focusing on safety and ethics, the guidelines promote broader digital transformation within the Indian automotive sector. Sustainability initiatives encourage manufacturers to use AI to optimize resource usage, aligning with environmental goals. The framework also addresses privacy concerns surrounding connected vehicles, reinforcing the need for strict data protection measures in compliance with the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, which is yet to come into full force. Moreover, recognizing the industry’s current shortage of AI talent, the guidelines support government-backed programs aimed at reskilling the workforce and nurturing future experts in ADAS and AV safety.
Ultimately, the AI Governance Guidelines signify a collaborative effort toward a responsible and ethically sound automotive industry in India. As the sector evolves, the guidelines underscore the necessity for technological advancements to be measured against their impact on human safety and ethical integrity, ensuring a balance between innovation and accountability in the age of artificial intelligence.
See also
Europe’s AI Ethics Platforms Market Forecast to Soar to $45.3 Billion by 2035 Amid Regulatory Shift
India’s Privacy Law: Calls for Real-Time Accountability as AI Data Demands Shift
UK Government Launches AI Growth Lab to Accelerate Adoption Amid Regulation Hurdles
Florida Lawmakers Advance AI Bill of Rights Amid National Regulation Debate
Trump Proposes Executive Order to Block State AI Regulations Amid Colorado Law Delays