Tesla Autopilot Was Uniquely Risky—and May Still Be

A federal report published today found that Tesla’s Autopilot system was involved in at least 13 fatal crashes in which drivers misused the system in ways the automaker should have foreseen—and done more to prevent. Not only that, but the report called out Tesla as an “industry outlier” because its driver assistance features lacked some of the basic precautions taken by its competitors. Now regulators are questioning whether an Autopilot update meant to fix those basic design flaws and prevent fatal incidents has gone far enough.

These fatal crashes killed 14 people and injured 49, according to data collected and published by the National Highway Traffic Safety Administration, the federal road-safety regulator in the US.

At least half of the 109 “frontal plane” crashes closely examined by government engineers, those in which a Tesla crashed into a vehicle or obstacle directly in its path, involved hazards visible five seconds or more before impact. That was enough time, the engineers concluded, for an attentive driver to avoid the crash or at least lessen its severity.

In one such crash, a March 2023 incident in North Carolina, a Model Y traveling at highway speed struck a teenager while he was exiting a school bus. The teen was airlifted to a hospital to treat his serious injuries. The NHTSA concluded that “both the bus and the pedestrian would have been visible to an attentive driver and allowed the driver to avoid or minimize the severity of this crash.”

Government engineers wrote that, throughout their investigation, they “observed a trend of avoidable crashes involving hazards that would have been visible to an attentive driver.”

Tesla, which disbanded its public affairs department in 2021, did not respond to a request for comment.

Damningly, the report called Tesla “an industry outlier” in its approach to automated driving systems. Unlike other automotive companies, the report says, Tesla let Autopilot operate in situations it wasn’t designed for, and failed to pair it with a driver engagement system that required its users to pay attention to the road.

Regulators concluded that even the Autopilot product name was a problem, encouraging drivers to rely on the system rather than collaborate with it. Automotive competitors often use “assist,” “sense,” or “team” language, the report stated, specifically because these systems aren’t designed to fully drive themselves.

Last year, California state regulators accused Tesla of falsely advertising its Autopilot and Full Self-Driving systems, alleging that Tesla misled consumers into believing the cars could drive themselves. In a filing, Tesla said that the state’s failure to object to the Autopilot branding for years constituted an implicit approval of the carmaker’s advertising strategy.

The NHTSA’s investigation also concluded that, compared to competitors’ products, Autopilot was resistant when drivers tried to steer their vehicles themselves—a design, the agency wrote in its summary of an almost two-year investigation into Autopilot, that discourages drivers from participating in the work of driving.

A New Autopilot Probe

These crashes occurred before Tesla recalled and updated its Autopilot software via an over-the-air update earlier this year. But along with closing this investigation, regulators have also opened a fresh probe into whether the Tesla updates, pushed in February, did enough to prevent drivers from misusing Autopilot, from misunderstanding when the feature was actually in use, or from using it in places where it is not designed to operate.

The review comes after a Washington state driver last week said his Tesla Model S was on Autopilot—while he was using his phone—when the vehicle struck and killed a motorcyclist.
