Federal safety regulators have escalated their investigation into Tesla’s Full Self-Driving (FSD) software, citing at least 80 instances in which the system violated traffic laws by running red lights or veering into the wrong lane. The National Highway Traffic Safety Administration (NHTSA) disclosed the expanded inquiry in a formal letter to the automaker, noting a 60% increase in documented violations since October. The surge raises serious concerns about the safety of Tesla’s advanced driver assistance technology, which is now deployed in millions of vehicles.
The timing of the escalation poses a significant challenge for Tesla. NHTSA’s announcement coincided with remarks from CEO Elon Musk on the social media platform X, where he claimed that the latest version of FSD would let drivers text while the system operates, a practice that is illegal in most states. NHTSA officials have declined to comment on Musk’s assertion, even as scrutiny of the company’s safety protocols intensifies.
The documented violations paint a troubling picture for Tesla. The FSD system has been implicated in running red lights and drifting into opposing lanes, with incidents reported through 62 customer complaints, 14 reports submitted by Tesla itself, and four identified by media sources. That total is a notable increase from the roughly 50 violations on record when NHTSA opened its inquiry in October, underscoring the growing scope of the issue.
The expanded investigation goes beyond merely tallying violations. NHTSA’s Office of Defects Investigation is now examining whether Tesla’s software can “accurately detect and appropriately respond to traffic signals, signs, and lane markings.” Moreover, regulators are scrutinizing whether the system offers adequate warnings to drivers in the event of a malfunction. These questions are particularly pressing given the widespread geographic distribution of the reported incidents.
The October investigation initially focused on a specific intersection in Joppa, Maryland, where Tesla asserted the problem had been fixed. The new violations, however, have emerged across a range of locations, pointing to a systemic issue rather than an isolated one. Tesla has not disclosed the sites of the recent incidents, and the company tends to heavily redact the safety reports it submits to federal regulators, limiting transparency.
This marks the second significant federal probe into FSD’s safety record. NHTSA launched a separate investigation in October 2024 examining how the software performs in low-visibility conditions such as fog and sun glare. Running the two inquiries in parallel suggests that regulators are taking a comprehensive approach to evaluating Tesla’s most advanced driver assistance technology, particularly as its deployment expands.
As Tesla continues rolling out FSD to a broader market, the implications of these investigations could have far-reaching effects on public perception and regulatory frameworks for autonomous driving technologies. The outcome of NHTSA’s inquiries may not only influence Tesla’s operational strategies but also set a precedent for how similar technologies are regulated in the future, as the automotive industry grapples with the challenges of integrating advanced AI systems into daily driving.