How safe are systems like Tesla’s Autopilot?

Every three months, Tesla publishes a safety report showing the number of miles between accidents when drivers use the company’s driver-assistance system, Autopilot, and the number of miles between accidents when they do not.

These figures always show fewer accidents with Autopilot, a collection of technologies that can steer, brake and accelerate Tesla vehicles on their own.

But the numbers are misleading. Autopilot is used mainly for highway driving, which is generally safer than driving on city streets, according to the Department of Transportation. There may be fewer accidents with Autopilot simply because it is typically used in safer conditions.

Tesla has not provided data that would allow a comparison of Autopilot’s safety on the same kinds of roads. Neither have other automakers that offer similar systems.

Autopilot has been used on public roads since 2015. General Motors introduced Super Cruise in 2017, and Ford Motor brought out BlueCruise last year. But publicly available data that reliably measures the safety of these technologies is scarce. American drivers, whether they use these systems or share the road with them, are effectively guinea pigs in an experiment whose results have yet to be revealed.

Automakers and technology companies are adding more and more features that they say improve safety, but these claims are difficult to verify. Meanwhile, fatalities on the country’s highways and streets have climbed in recent years, reaching a 16-year high in 2021. Any additional safety provided by technological advances does not appear to be offsetting poor decisions by drivers behind the wheel.

“There is a lack of data that would give the public confidence that these systems, as they are deployed, will live up to their expected safety benefits,” said J. Christian Gerdes, a professor of mechanical engineering and co-director of Stanford University’s Center for Automotive Research. He was the first chief innovation officer of the Department of Transportation.

GM collaborated with the University of Michigan on a study examining the potential safety benefits of Super Cruise but concluded that there was not enough data to determine whether the system reduced crashes.

A year ago, the National Highway Traffic Safety Administration, the government’s auto safety regulator, ordered companies to report potentially serious accidents involving advanced driver-assistance systems along the lines of Autopilot within a day of learning about them. The order said the agency would make the reports public, but it has not yet done so.

The safety agency declined to comment on what information it has collected so far but said in a statement that the data would be released “in the near future.”

Tesla and its CEO, Elon Musk, did not respond to requests for comment. GM said it had reported two incidents related to Super Cruise to the NHTSA: one in 2018 and one in 2020. Ford declined to comment.

The agency’s data is unlikely to provide a complete picture of the situation, but it could encourage lawmakers and drivers to take a closer look at these technologies and ultimately change the way they are marketed and regulated.

“To solve a problem, you first have to understand it,” said Bryant Walker Smith, an associate professor in the law and engineering schools at the University of South Carolina who specializes in emerging transportation technologies. “This is a way to get more truth as a basis for research, regulations and other actions.”

Despite its capabilities, Autopilot does not relieve the driver of responsibility. Tesla tells drivers to stay alert and be ready to take control of the car at all times. The same is true of BlueCruise and Super Cruise.

But many experts worry that these systems, because they let drivers cede active control of the car, could lull them into thinking their cars are driving themselves. Then, when the technology malfunctions or cannot handle a situation on its own, drivers may be unprepared to take control quickly enough.

Older technologies, such as automatic emergency braking and lane departure warning, have long provided safety nets for drivers by slowing or stopping the car or warning drivers when they drift out of their lane. But newer driver-assistance systems flip that arrangement, making the driver the safety net for the technology.

Safety experts are particularly concerned about Autopilot because of the way it is marketed. For years, Mr. Musk has said the company’s cars were on the verge of true autonomy, able to drive themselves in practically any situation. The system’s name also implies a degree of automation that the technology has not yet achieved.

That can breed driver complacency. Autopilot has played a role in many fatal crashes, in some cases because drivers were not prepared to take control of the car.

Mr. Musk has long promoted Autopilot as a way of improving safety, and Tesla’s quarterly safety reports seem to back him up. But a recent study by the Virginia Transportation Research Council, an arm of the Virginia Department of Transportation, shows that these reports are not what they seem.

“We know that cars using Autopilot are crashing less often than when Autopilot is not used,” said Noah Goodall, a researcher at the council who studies safety and operational issues for autonomous vehicles. “But are they being driven in the same way, on the same roads, at the same time of day, by the same drivers?”

Analyzing police and insurance data, the Insurance Institute for Highway Safety, a nonprofit research organization funded by the insurance industry, has found that older technologies such as automatic emergency braking and lane departure warning have improved safety. But the organization says studies have not yet shown that driver-assistance systems provide similar benefits.

Part of the problem is that police and insurance data do not always indicate whether these systems were used at the time of an accident.

The federal auto safety agency has ordered companies to provide data on crashes in which driver-assistance technologies were in use within 30 seconds of impact. This could give a broader picture of how these systems are performing.

But even with that data, safety experts say, it will be difficult to determine whether using these systems is safer than turning them off in the same situations.

The Alliance for Automotive Innovation, a trade group for car companies, has warned that the federal safety agency’s data could be misunderstood or misrepresented. Some independent experts express similar concerns.

“My major concern is that we will have accurate data on accidents involving these technologies but no comparable data on accidents involving conventional cars,” said Matthew Wansley, a professor at the Cardozo School of Law in New York who specializes in emerging automotive technologies and was previously general counsel at an autonomous vehicle start-up called nuTonomy. “These systems could end up looking far less safe than they really are.”

For this and other reasons, automakers may be reluctant to share some data with the agency. Under its order, companies can ask the agency to withhold certain data on the grounds that it would reveal business secrets.

The agency is also collecting crash data on automated driving systems, more advanced technologies that aim to remove drivers from cars entirely. These systems are often referred to as “self-driving cars.”

For the most part, this technology is still being tested in a relatively small number of cars with drivers behind the wheel as a backup. Waymo, a company owned by Google’s parent, Alphabet, operates a driverless service in Phoenix neighborhoods, and similar services are planned in cities like San Francisco and Miami.

Companies are already reporting crashes involving automated driving systems in some states. Data from the federal safety agency, which will cover the entire country, should provide additional insight in this area as well.

But the immediate concern is the safety of Autopilot and other driver assistance systems, which are installed in hundreds of thousands of vehicles.

“The question is, is Autopilot increasing or decreasing the crash rate?” Mr. Wansley said. “We may not get a complete answer, but we will get some useful information.”
