NHTSA Opens Probe into Tesla FSD System in 2.9 Million Vehicles

In October 2025, the U.S. National Highway Traffic Safety Administration (NHTSA) announced it was launching an investigation into 2.88 million Tesla vehicles equipped with the Full Self-Driving (FSD) system, following dozens of reports alleging that FSD performed driving maneuvers that violated traffic laws, such as running red lights or making dangerous lane changes.

This development is a potential inflection point for Tesla’s autonomy ambitions. For Tesla owners, prospective buyers, and stakeholders in the U.S. and Europe, the probe raises critical questions: How safe is FSD in real-world driving? What liability might owners face? Could this escalate to a mass recall or regulatory restrictions? And how will Tesla respond to restore trust?

In this article, we’ll dive deep into the technical, regulatory, user-centric, and market dimensions of this probe. We aim to provide a balanced, well-substantiated analysis so that readers can better understand the stakes, risks, and implications of Tesla’s FSD future.


Chapter 1: What Is FSD — Technology, Promises, and Limits

1.1 FSD vs Autopilot: What’s the Difference?

Tesla’s driver assistance and automation stack has evolved over time. Its earlier Autopilot (and Enhanced Autopilot) suite provides assisted lane-centering, adaptive cruise control, and some more advanced maneuvers (in limited conditions). But Tesla markets FSD (Full Self-Driving) as a more ambitious layer intended to navigate city streets, handle intersections, change lanes, and drive to destinations with supervision.

Importantly, FSD is not true autonomy (i.e., it is not SAE Level 4 or 5); Tesla labels it as a “supervised” system, meaning that the driver must remain attentive, ready to intervene, and legally responsible. Despite marketing that sometimes suggests autonomy, in practice FSD is a sophisticated Level 2 / driver-assist system with boundaries and fallbacks.

From a technical standpoint, FSD combines:

  • Cameras (Tesla relies heavily on a vision-only approach, with no lidar)

  • Neural-network perception modules (recognizing lanes, traffic lights, signs, obstacles)

  • Motion planning and trajectory prediction

  • Control modules (steering, acceleration, braking)

  • Safety fallback logic to disengage or alert when margin is insufficient

Over the years, Tesla has incrementally improved FSD via over-the-air (OTA) software updates, expanding its operational design domain (ODD) to more complex scenarios, but always accompanied by caveats and disclaimers.
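
Read together, the list above amounts to a classic perception–planning–control loop with a supervisory fallback. The minimal Python sketch below illustrates that flow; every name, threshold, and interface in it is an assumption made for illustration, since Tesla's internal architecture is not public.

```python
# Illustrative sketch of a camera-only assisted-driving loop.
# All names, thresholds, and interfaces are hypothetical; Tesla's
# internal architecture is not public.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str           # e.g. "traffic_light_red", "vehicle", "lane_line"
    confidence: float    # 0.0 .. 1.0

@dataclass
class Plan:
    steering: float      # normalized steering command
    accel: float         # m/s^2, negative values brake
    confidence: float    # planner's confidence in the chosen trajectory

MIN_PERCEPTION_CONF = 0.6   # assumed safety margins, purely illustrative
MIN_PLAN_CONF = 0.7

def perceive(camera_frames) -> List[Detection]:
    """Neural-network perception: lanes, lights, signs, obstacles."""
    raise NotImplementedError   # stands in for the vision models

def plan(detections: List[Detection]) -> Plan:
    """Motion planning / trajectory prediction over the detected scene."""
    raise NotImplementedError

def apply_controls(steering: float, accel: float) -> None:
    """Actuation layer: steering, acceleration, braking."""
    raise NotImplementedError

def control_step(camera_frames, alert_driver: Callable[[str], None]) -> None:
    detections = perceive(camera_frames)

    # Safety fallback: if perception is unsure, hand control back to the driver.
    if any(d.confidence < MIN_PERCEPTION_CONF for d in detections):
        alert_driver("Low perception confidence - take over")
        return

    trajectory = plan(detections)
    if trajectory.confidence < MIN_PLAN_CONF:
        alert_driver("Planner uncertain - take over")
        return

    apply_controls(trajectory.steering, trajectory.accel)
```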

1.2 Tesla’s Positioning and Marketing of FSD

FSD occupies a central role in Tesla’s long-term vision. Elon Musk often frames it as transformative—both for consumer convenience and for potential robotaxi / ride-hailing business models, in which owners might one day generate revenue by letting their vehicle operate in a Tesla-run ride-hailing fleet.

This aspirational narrative works in Tesla’s favor for branding and valuation. FSD also reinforces a premium tech image. However, tension arises when marketing language blurs the boundary between “driver-assist” and “autonomy,” inflating expectations and inviting regulatory scrutiny.

Tesla’s rollout of FSD has been gradual: first closed early-access testing, then a broader beta group, with features enabled incrementally under constraints. Many features are “opt-in,” and Tesla continually publishes release notes, but transparency has often been limited (e.g. little disclosure of internal test data or crash logs).

1.3 Technical Challenges and Limitations

Despite progress, FSD faces several structural challenges:

  1. Edge cases and rare events
    Real-world driving includes immense variability—unusual lighting, road debris, ambiguous signage, temporary construction, unpredictable human behavior. Neural nets can struggle with cases outside their training distribution.

  2. Perception under adverse conditions
    Rain, fog, glare, shadows, snow, and low-light conditions degrade camera performance. Detecting subtle objects, pedestrians, or emergency vehicles in extreme conditions is especially hard.

  3. Complex decision making / rule interpretation
    Executing safe lane changes, merging, unprotected left turns, or negotiating with human drivers involves predicting intentions and legal judgment calls. Even humans make mistakes in ambiguous scenarios.

  4. Latency, compute constraints, and safety margins
    The system must perceive, predict, and act quickly and reliably within tight compute budgets. Overly cautious behavior (e.g. hard braking unnecessarily) can frustrate occupants; overly aggressive behavior risks safety.

  5. Fallbacks / disengagements
    The system must gracefully hand back control to the driver when it cannot safely continue. Ensuring the driver is attentive and capable of taking over is nontrivial (distracted drivers, delayed reactions).

  6. Legal / operating domain constraints
    FSD must comply with local traffic laws (which differ by city, state, country). Differences in signage, signaling conventions, lane marking standards, and enforcement practices can complicate deployment across jurisdictions.

Because of these limitations and potential failures, regulators and safety advocates emphasize cautious deployment, auditing, and clear liability frameworks.
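
Point 5 above, the handback to the driver, is deceptively hard. The sketch below uses assumed timings and function names (nothing here reflects Tesla's implementation) to show why: the system has to escalate alerts within a short budget and still needs a safe fallback, such as a minimal-risk stop, if the driver never responds.

```python
# Hypothetical takeover-request escalation; not Tesla's actual logic.
import time
from typing import Callable

TAKEOVER_GRACE_S = 4.0                               # assumed response budget
ESCALATION = ["visual_alert", "audible_alert", "seat_vibration"]

def issue_alert(kind: str) -> None:
    print(f"ALERT: {kind}")

def request_takeover(driver_is_hands_on: Callable[[], bool],
                     minimal_risk_stop: Callable[[], None]) -> str:
    """Escalate alerts; if the driver never responds, execute a safe fallback."""
    deadline = time.monotonic() + TAKEOVER_GRACE_S
    step = 0
    while time.monotonic() < deadline:
        if driver_is_hands_on():
            return "driver_in_control"
        if step < len(ESCALATION):          # escalate through more intrusive alerts
            issue_alert(ESCALATION[step])
            step += 1
        time.sleep(0.1)
    minimal_risk_stop()                     # e.g. slow to a stop rather than drop out
    return "minimal_risk_stop"
```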


Chapter 2: The NHTSA Investigation — Scope, Allegations, and Process

2.1 Allegations & Triggers

The NHTSA probe is a Preliminary Evaluation (PE) into 2.88 million Tesla vehicles that support FSD, triggered by 58 incident reports, including 14 crashes and 23 injuries.

Among the most serious claims:

  • Some Teslas using FSD are alleged to have driven through red lights, in some cases causing collisions. Six of these incidents reportedly occurred at the same intersection in Joppa, Maryland, hinting at a potentially repeatable software error or environmental trigger.

  • Other reports involve unsafe lane changes (moving into opposing lanes or crossing double yellows) in ways that violate traffic laws.

  • There are also complaints about interactions with railroad crossings or left-turn behaviors that may not conform to standard rules.

  • The scope is broad: NHTSA seeks to “assess the scope, frequency, and potential safety consequences of FSD executing driving maneuvers that constitute traffic safety violations.” 

Tesla has reportedly already pushed a software update addressing behavior at one of the problematic intersections, but the agency is likely to probe deeper.

2.2 Affected Vehicles & Mapping

Though the figure quoted is 2.88 million, in practice that covers nearly all Tesla vehicles sold in the U.S. (and possibly some export models) that are capable of receiving FSD software. The probe is not restricted to a single model: Model S, Model 3, Model X, and Model Y variants with FSD are presumably included.

The key question is whether only a subset of vehicles (e.g. those with a certain software version, hardware revision, or operating in certain geographies) are implicated, or whether the problem is more systemic.

2.3 Investigation Process: From PE to Recall

The typical NHTSA defect investigation pathway:

  1. Preliminary Evaluation (PE)
    This is an initial assessment to collect data, assess plausibility, and decide whether it should escalate to the next stage. The current probe is at this level.

  2. Engineering Analysis (EA)
    If the PE finds evidence of a safety defect, NHTSA may escalate to a formal engineering analysis. This involves deeper technical review, requiring automakers to furnish internal data, logs, simulation models, etc.

  3. Recall / Corrective Action
    If a defect is confirmed, NHTSA may require a recall or corrective fix, which for software-based systems often means an OTA update or patch.

  4. Ongoing Monitoring & Compliance
    Post-remedy, NHTSA monitors whether the fix resolves the problems or if new issues emerge.

Given that Tesla’s systems are software-based, the path to recall might lean toward mandatory software updates. But if the defect is fundamental to the algorithm or sensor hardware, recall demands could become more burdensome.
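
For readers who prefer code to prose, the four-stage pathway can be summarized as a simple ordered progression. This is only a mnemonic for the publicly described NHTSA process, not an official model, and the stage names below are ours.

```python
# Mnemonic for the defect-investigation pathway sketched above.
from enum import IntEnum

class Stage(IntEnum):
    PRELIMINARY_EVALUATION = 1   # where the current FSD probe sits
    ENGINEERING_ANALYSIS = 2     # deeper technical review, internal data demanded
    RECALL_OR_REMEDY = 3         # for software systems, often an OTA patch
    MONITORING = 4               # verify that the remedy actually works

def next_stage(current: Stage, defect_evidence_found: bool) -> Stage:
    """Escalate one stage only when the evidence supports it; otherwise stay put (or close)."""
    if defect_evidence_found and current < Stage.MONITORING:
        return Stage(current + 1)
    return current
```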

2.4 Regulatory Stakes & Precedents

This is not Tesla’s first encounter with NHTSA scrutiny over autonomy/driver assist:

  • In December 2023, Tesla recalled over 2 million vehicles after NHTSA found its Autopilot driver-engagement safeguards insufficient; the remedy was an OTA update adding more prominent warnings and stricter Autosteer engagement checks.

  • In late 2024, NHTSA opened a PE into 2.4 million Tesla vehicles regarding FSD in low-visibility conditions after multiple crashes, including one pedestrian fatality. 

  • Tesla is also under additional investigation for delayed crash reporting: under the 2021 Standing General Order (SGO 2021-01), automakers must report crashes involving advanced driver-assistance or automated-driving systems within 1–5 days, depending on severity. Tesla, however, has allegedly delayed reports by months.

  • In January 2025, NHTSA launched a PE into the “Actually Smart Summon” remote driving feature over safety concerns. 

These prior interactions establish precedent that NHTSA treats Tesla’s autonomy claims with skepticism and is willing to demand transparency and remediation.


Chapter 3: Tesla’s Past Regulatory / Safety Controversies — Lessons Learned

3.1 Historical Crash Cases & Patterns

Over the past decade, dozens of crashes have been linked (directly or indirectly) to Tesla’s Autopilot / FSD systems. According to compiled lists:

  • Hundreds of non-fatal incidents involving Autopilot have been recorded. 

  • As of 2024, at least 59 fatal crashes involving Tesla driver-assist systems have been documented, many of them connected to Autopilot misuse or misperception by the system.

  • Because Tesla often redacts or withholds crash data, independent analyses and media investigations have filled gaps, pointing to patterns: failure to detect cross-traffic, poor performance in low light, and overreliance on visual cues. 

  • Notably, multiple incidents occurred near emergency vehicles (police, ambulance). A research paper (PaniCar) showed that the activation of emergency vehicle lighting can produce flare artifacts that confuse object detectors, reducing their confidence and potentially causing them to miss or misinterpret objects. 
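
The PaniCar finding is easier to picture with a toy example: most detection pipelines discard objects whose confidence falls below a cutoff, so anything that systematically depresses scores (such as flare from strobing emergency lights) can make real objects disappear from the planner's view. The sketch below is a generic illustration of that thresholding step with invented numbers, not a reproduction of the paper's method or of Tesla's stack.

```python
# Generic post-processing step found in most object-detection pipelines.
# Values are invented; the point is that a systematic confidence drop
# (e.g. from flare artifacts) pushes real objects below the cutoff.

CONFIDENCE_CUTOFF = 0.5

def keep_detections(raw_detections, flare_penalty=0.0):
    """Filter detections; flare_penalty models a systematic score drop."""
    return [
        (label, score - flare_penalty)
        for label, score in raw_detections
        if score - flare_penalty >= CONFIDENCE_CUTOFF
    ]

frames = [("ambulance", 0.62), ("pedestrian", 0.58), ("car", 0.91)]
print(keep_detections(frames))                     # all three objects kept
print(keep_detections(frames, flare_penalty=0.2))  # ambulance and pedestrian dropped
```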

These precedents indicate that limitations in perception and edge-case handling have been recurring threats.

3.2 Tesla’s Typical Responses: Software Updates & Disclaimers

Tesla’s strategy in prior safety incidents has often been:

  • Issue OTA updates: e.g. tighten constraints, adjust sensitivity, add warnings, reduce autonomy in ambiguous zones

  • Fine-tune driver monitoring: more stringent alerts, steering torque checks, visual / auditory warnings

  • Disclaim responsibility: emphasize that drivers must remain attentive and that FSD is a supplemental aid

  • Partial silence / limited disclosure: often giving only minimal public detail on root causes or internal logs

  • Selective deployment or rollback of features in problematic regions

This approach mitigates reputational damage and is cost-effective (software patch vs hardware recall). But if a defect is systemic or severe, regulators may not accept incremental fixes.

3.3 Criticism of Reporting Transparency

Tesla has long been criticized for opacity in crash data disclosures:

  • Under SGO 2021-01, Tesla is required to report crashes involving advanced driver-assist systems within 1–5 days. But NHTSA now alleges that Tesla delayed many such reports by months. 

  • Tesla claims that internal data collection errors caused delays, and it has “fixed” the issue, but NHTSA is auditing compliance. 

  • Tesla has asked NHTSA to withhold (i.e. redact) some responses regarding its Robotaxi service rollout, citing proprietary interests. 

  • More broadly, Tesla’s reluctance to release full logs, training data, failure cases, and projections constrains external validation and scrutiny.

These practices have contributed to regulatory wariness and public skepticism.
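
To make the reporting-window criticism concrete, the arithmetic is trivial, which is partly why multi-month delays stand out. The sketch below treats the five-day outer bound cited in this article as the deadline; that simplification is ours, since the actual SGO 2021-01 deadlines vary by crash type and reporting channel.

```python
# Rough illustration of the SGO reporting-window arithmetic.
# The 5-day deadline is the outer bound cited in this article, used
# here as a simplifying assumption.
from datetime import date

REPORT_DEADLINE_DAYS = 5

def report_is_late(crash: date, reported: date) -> bool:
    return (reported - crash).days > REPORT_DEADLINE_DAYS

print(report_is_late(date(2025, 3, 1), date(2025, 3, 4)))   # False: within the window
print(report_is_late(date(2025, 3, 1), date(2025, 7, 15)))  # True: months late
```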

3.4 Lessons & Implications

Key takeaways from Tesla’s history:

  • Software-based fixes are often the first (and preferred) remedy path, but they may not fully address deeper algorithmic deficiencies

  • Recurrent incidents in similar scenarios (e.g. same intersection) suggest systemic bugs or insufficient generalization

  • Lack of transparency erodes trust and draws regulatory demands for greater oversight

  • Edge-case failures, perception under stress, and corner-case behavior remain core vulnerabilities

Those lessons raise the stakes for the current probe: if behaviors violating traffic laws are confirmed, Tesla’s typical responses may not satisfy regulators or public expectations.


Chapter 4: Implications for Tesla Owners / Buyers

4.1 Safety and Confidence Considerations

For current Tesla owners who have paid for FSD or used it actively, this probe may erode confidence. Some may worry:

  • Will FSD be disabled or restricted pending remediation?

  • Might Tesla push updates that make the system markedly more conservative (e.g. slower, more hesitant driving) to stay safe, reducing usability?

  • In disputed incidents, will Tesla disclaim responsibility or restrict features?

Owners will watch closely for official communications, update schedules, and whether local authorities impose usage constraints.

4.2 Liability, Insurance, and Legal Risk

One of the most ambiguous areas is legal accountability. If a crash occurs while FSD is active:

  • Could the driver be held liable (for failing to intervene)?

  • Or could Tesla bear some responsibility (for software defect)?

  • Insurance policies may adjust: insurers might raise premiums or impose clauses for vehicles with “autonomous” features.

  • In Europe, liability frameworks differ: EU directives, national laws, and recent proposals around automated-driving liability could influence outcomes.

Tesla owners in both the U.S. and Europe need to understand their local laws, the terms of their purchase agreements, and how their use of FSD could influence claims.

4.3 Resale Value, Warranty, and Upgrades

  • Resale / trade-in value: negative headlines and regulatory risk could depress demand for cars with FSD licenses. Buyers may discount such valuations.

  • Warranty / service terms: Tesla might restrict or modify servicing policies or disclaim coverage for FSD-related incidents.

  • Upgradability: Owners may wonder whether FSD features will remain upgradeable or whether Tesla will limit new updates for existing hardware.

Prospective buyers may hesitate to pay for FSD until the uncertainty clears.

4.4 Recommendations for Owners / Buyers

Here are some actionable guidelines:

  1. Stay current with Tesla’s communications and software updates — read release notes attentively.

  2. Use FSD conservatively in complex environments (urban intersections, poor visibility) until more confidence is restored.

  3. Maintain driver vigilance — always be ready to intervene; avoid overreliance or complacency.

  4. Document abnormal behavior — if FSD behaves unexpectedly, log the time, location, video, and report to Tesla and local regulators.

  5. Consult insurance / legal professionals — understand how your policy treats automated systems and the jurisdiction’s liability framework.

  6. Delay FSD purchase decisions (if not already enabled) until more clarity emerges.

These steps help manage risk exposure.


Chapter 5: Impact on Tesla Business & Market

5.1 Market & Investor Reaction

The announcement of the NHTSA probe triggered a modest decline in Tesla’s stock, as investors digested the possibility of regulatory friction, recalls, or reputational damage.

Because FSD is central to Tesla’s narrative of future growth (e.g. robotaxi, recurring software revenue), scrutiny that devalues that narrative can disproportionately affect perceptions of upside. Analysts may revise future earnings models downward, especially regarding autonomy and ride-hailing prospects.

5.2 Strategic Autonomy / Robotaxi Business at Risk

Tesla has pinned much of its long-term upside on scaling autonomous ride-hailing services (Robotaxi). But this probe poses headwinds:

  • It raises doubt about whether FSD is reliable or safe enough to serve as the backbone of robotaxi networks

  • Regulators may delay or restrict deployment until validated

  • If systemic defects are found, Tesla might need to redesign or roll back features

  • This probe may embolden rivals (Waymo, Cruise, others) to capture regulatory or public trust advantages

Indeed, NHTSA is also demanding clarifications on how Tesla plans to deploy Robotaxi services (e.g. in Austin), especially regarding how much overlap there is with current FSD systems. 

5.3 Competitive Landscape & Leverage

Tesla’s autonomy narrative is a differentiator. If this narrative is shaken, competitive EV makers and autonomy startups might gain the upper hand in branding and risk perception.

On the flip side, Tesla could respond by:

  • Doubling investment in safety, testing and validation

  • Offer greater transparency: release crash logs and invite third-party audits

  • Partner with regulators to validate in controlled settings

  • Provide assurance to buyers via extended safety guarantees

How Tesla navigates this moment will influence whether it reinforces or weakens its lead.

5.4 Possible Tesla Responses & Paths Forward

Some possible responses Tesla might take:

  1. Cooperate fully and transparently: share internal data, logs, and test results to help assuage regulatory concerns

  2. Roll back specific software modules implicated in incidents

  3. Fast OTA patches / constrained updates to limit risky behaviors until further validation

  4. Reassure customers via communication campaigns, extended liability or safety guarantees

  5. Pause expansion of new features or robotaxi in U.S. / Europe until clearance

  6. Push regulatory engagement / lobbying to shape autonomy oversight frameworks

Tesla’s ability to balance innovation against liability will be tested more severely than ever.


Conclusion

This NHTSA probe is not just another regulatory headwind—it strikes at a foundational pillar of Tesla’s narrative: that its FSD system is safe, scalable, and a pathway to autonomy-led business models. The size of the probe, the nature of allegations (running red lights, dangerous lane changes), and the precedent of prior incidents elevate the stakes significantly.

For Tesla owners, prospective buyers, and stakeholders in Europe and the U.S., the probe signals a moment to reassess assumptions, adopt caution, and demand clarity. While Tesla may well remediate the issues via software updates or targeted fixes, the manner and speed of its response will be as consequential as the technical fixes themselves.

As the investigation unfolds, key indicators to watch are:

  • Whether the probe is escalated to engineering analysis

  • Tesla’s degree of cooperation and openness

  • Whether a recall or forced patch is mandated

  • Changes in user confidence, uptake of FSD, and resale dynamics

  • The effect on Tesla’s robotaxi ambitions and stock valuation

At its heart, this moment forces a reckoning: can Tesla back its ambitious claims with robust transparency, safety, and regulatory alignment? Or will the autonomy dream collide with the unforgiving realities of real-world driving and public oversight?


FAQ

Q1: Could this investigation lead to FSD being disabled or fully recalled?
Yes, if NHTSA finds the driving behaviors pose a safety defect, the probe may escalate to engineering analysis, and then require a recall or mandatory software patch. That could force Tesla to disable or restrict certain FSD functionalities until resolved.

Q2: If I own a Tesla with FSD, am I at legal risk in case of a crash?
Potentially. Liability may depend on jurisdiction, local laws, the degree of driver intervention, and whether the crash can be shown to stem from software fault. Insurance firms may also scrutinize FSD usage. You should consult legal / insurance advisors.

Q3: Should I cancel or delay buying FSD now?
It might be prudent to wait for more clarity. The uncertainty around future software updates, liability, and resale value makes paying for FSD today riskier than before the probe.

Q4: Are other Tesla features also being probed?
Yes. Tesla is under investigation for delayed crash reporting (SGO compliance) and for remote driving (“Actually Smart Summon”) features. 

Q5: Does this probe only affect U.S. owners or also Europe?
The NHTSA probe is U.S.-centric. But reputational and technical risks (e.g. negative press, software constraints) can spill over to Europe. Also, European regulators may take cues or launch their own reviews.
