Did Tesla do enough to fix Autopilot flaw following recall? NHTSA opens probe
The National Highway Traffic Safety Administration is looking into whether Tesla did enough to fix issues with its Autopilot technology in response to a mass recall late last year after multiple crashes involving the assisted-driving system.
The federal agency is opening a query to “evaluate the adequacy” of Tesla’s December recall of more than 2 million electric vehicles, reportedly the largest in the company’s history, according to a filing disclosed Friday.
At the time, the Austin-based automaker said the recall would consist of an over-the-air software update applied to Tesla Model 3, Model S, Model X and Model Y vehicles manufactured in certain years, including some dating back to 2012.
The vehicles were also set to receive “additional controls and alerts” prompting drivers to pay attention when using Autopilot, including by keeping both hands on the steering wheel and watching the road, Tesla announced in December — the same day Virginia authorities revealed that Autopilot was in use when Pablo Teodoro III, 57, fatally crashed his Tesla into a tractor-trailer.
The NHTSA said Friday that it’s concerned about whether Tesla’s remedy was sufficient, in part because of crashes that have occurred since the company deployed the software update, Bloomberg earlier reported.
The agency’s latest investigation “will consider why these updates were not a part of the recall or otherwise determined to remedy a defect that poses an unreasonable safety risk,” according to the filing.
The renewed probe is a setback for Tesla chief Elon Musk, who has been the biggest public booster of the idea of the car doing the driving, first promising immediate “full self-driving” as early as 2014 — the same year the billionaire teased his company’s “new safety and autopilot hardware” built into every Model S.
On Tesla’s website, Autopilot is touted for using “advanced sensor coverage” in which eight cameras and “powerful vision processing provide 360 degrees of visibility and up to 250 meters (820 feet) of range.”
“Autopilot enables your car to steer, accelerate and brake automatically within its lane,” Tesla adds.
Even so, “current Autopilot features require active driver supervision and do not make the vehicle autonomous,” the company warns.
As Tesla continues to face challenges with Autopilot, Musk has set his sights on unveiling a driverless robotaxi called the CyberCab by August — a high-tech car he said five years ago would be ready by 2020.
During an earnings call this week, he downplayed the difficulty of getting the green light from regulators for the vehicle, which will ultimately be part of a ride-sharing network like Uber, except without drivers, Bloomberg reported.
“I actually do not think that there will be significant regulatory barriers, provided there is conclusive data that the autonomous car is safer than a human-driven car,” Musk said.
Representatives for Tesla did not immediately respond to The Post’s request for comment.
Google-parent Alphabet already has a fleet of driverless Waymo robotaxis on the streets of Los Angeles, San Francisco and Phoenix, with plans to soon expand into Tesla’s hometown of Austin, Texas.
However, the expansion of autonomous vehicles hasn’t been without its hurdles.
General Motors’ Cruise, for instance, is under multiple federal probes after one of its robotaxis dragged a pedestrian who had been struck by another car.
And last year, safety concerns surrounding self-driving cars intensified when an online DMV report said a Waymo car “was engaged in autonomous mode” during a deadly accident involving a small dog, “which did not survive.”
As a result, cars without drivers won’t find themselves in New York City anytime soon.
Mayor Eric Adams last month signed off on allowing several companies to deploy the autonomous vehicles — but required that human drivers be present.
Adams said the new autonomous vehicle program would be an example of “responsible innovation.”