Real AV Safety Requires a Relevant NHTSA

On AV safety, what exactly is the National Highway Traffic Safety Administration (NHTSA) up to?

As the Trump administration was winding down last year, NHTSA issued its advance notice of proposed rulemaking (ANPRM), seeking public comment on the safety of self-driving cars. That comment period just ended, at the dawn of a new administration, on April 1st.

The agency’s mission, described in the ANPRM, is “the potential development of a framework of principles to govern the safe behavior of automated driving systems (ADS) in the future.” Translation: NHTSA is writing new rules to determine how vehicles should behave and perform when, in the future, they are no longer driven by humans.

In principle, once an autonomous vehicle (AV) takes charge of the driving, responsibility for safety falls on the AV makers, who can no longer blame humans for errors on the road.

The implicit assumption is that the machine has become smarter, and thus can drive more safely, than the human driver.

Given the monumental changes that this assumption will bring to driving rules and traffic regulations, is NHTSA ready to manage its implications?

Observers remain skeptical. Many worry that NHTSA might over-regulate ADS technologies that are still in development and stifle their potential. Others dread a regulatory AV framework with few teeth, ineffective at ensuring acceptable safety for people inside and outside the vehicle.

By the time NHTSA’s ANPRM deadline rolled around, more than 690 people and institutions had filed comments. Phil Koopman, co-founder of Edge Case Research, was among them. Koopman is an autonomous vehicle safety expert and a lead author of UL 4600.

EE Times contacted him in hopes of finding out what exactly NHTSA is up to, what specific advice he offered NHTSA, and — most important — his analysis of what it might take for NHTSA to get AV safety right.

Why now?
“NHTSA is in a tough position,” observed Koopman. On one hand, you have a rapidly developing technology. On the other, it’s going to take a long time to create regulations.

In the tech and auto industries in the United States, anything that smacks of “regulation” is anathema. Even though AV companies are supposedly in the business of protecting people’s lives, they prefer, in the name of protecting their own IP, to reveal very little about how they’re ensuring the safety of their self-driving cars.

Meanwhile, AV companies are already using public roads to test their vehicles.

A close look reveals that these AV developers feel little to no pressure from state or federal regulators to demonstrate that their test vehicles are safe before launching their robocar trials in the real world. They don’t disclose what they are testing or how trials are proceeding. Some companies do issue annual safety reports, but usually in the form of marketing materials with little technical detail. Their safety claims are mostly based on metrics such as miles driven and disengagement incidents.

Safety standards are here!
European nations such as Germany and France are accelerating work on their safety regulations for autonomous vehicles. Is the United States playing catch-up?

Maybe so, Koopman said. But he sees the real impetus behind regulators’ push for an AV safety framework as the availability of industry standards. “Look, 370 days ago, there were no specific autonomy safety standards [such as UL 4600].” The industry today already has a number of standards, including ISO 26262, ISO 21448 and ANSI/UL 4600, as well as safety-relevant security rules.

If there are standards, why wouldn’t NHTSA use them? The last thing Koopman wants to see is regulators “start writing a bunch of ADS safety rules” from scratch. “NHTSA can build upon the industry’s own technical consensus rather than having to impose something they came up with themselves.”

Questions on conformance
In Koopman’s view, however, NHTSA’s adoption of the industry’s initiatives isn’t enough. He wants to see NHTSA asking industry players to conform to the safety standards they themselves developed.

In industries such as food, drugs and aviation, compliance is not optional. The FDA and FAA demand conformance to safety standards. Not so in the automotive industry.

Koopman offered sharp criticism of autonomous vehicle developers in his ANPRM response:

It’s difficult to understand how the ADS industry, which justifies its need for regulatory breathing room by promising to make things safer, can at the same time fail to follow industry consensus safety standards for applicable aspects of their vehicles.

Transparency
It’s easy to demand that NHTSA require more transparency from the AV industry. But what exactly should NHTSA ask AV developers to be transparent about?

For starters, they could release VSSAs.

Under NHTSA’s Automated Driving Systems 2.0 guidance, issued in September 2017, the agency asked AV makers to disclose a “Voluntary Safety Self-Assessment” (VSSA) for their self-driving cars.

A VSSA is a good idea. It’s the AV supplier’s first step in sharing with the world its own assessment of its robocars.

However, it turns out that VSSA submissions remain spotty. Worse, many VSSA documents lack any technical substance, Koopman observed. He believes NHTSA should encourage every company putting a vehicle on public roads to release a VSSA and disclose the relevant safety case.

Equally important is an explanation of how each metric improves safety.

The California Department of Motor Vehicles (DMV) requires AV suppliers to submit disengagement numbers for their road tests. Koopman has long insisted that disengagement is a faulty metric because it subtly encourages test operators to minimize their interventions, which can be used to create an inaccurately rosy impression of vehicle safety, or worse, can lead to unsafe testing.

In Koopman’s opinion, AV developers who are serious about building a safer AV should use disengagement data as a metric to improve the technology, not to tout victory in a safety contest. Every incident, mishap and near miss during testing should be recorded, because “it is critical to identify and fix the root cause of all safety problems beyond addressing any superficial symptoms,” Koopman has written.
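To make the arithmetic concrete, here is a minimal, hypothetical sketch; the fleets and figures below are invented for illustration and do not describe any real company. It shows how the headline miles-per-disengagement number rewards fewer interventions rather than safer driving.

```python
# Hypothetical illustration: miles per disengagement rewards fewer
# interventions, not safer driving. All figures are invented.

def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    """California-style headline metric: a higher number looks 'safer'."""
    return miles_driven / max(disengagements, 1)

# Fleet A: safety drivers intervene whenever they are unsure.
fleet_a = miles_per_disengagement(miles_driven=100_000, disengagements=200)

# Fleet B: identical software, but drivers are coached to intervene
# only when a crash looks imminent.
fleet_b = miles_per_disengagement(miles_driven=100_000, disengagements=20)

print(f"Fleet A: {fleet_a:,.0f} miles per disengagement")   # 500
print(f"Fleet B: {fleet_b:,.0f} miles per disengagement")   # 5,000
# Fleet B looks 10x "better" on the horse-race metric even though the
# underlying automation is exactly the same -- and its testing may be riskier.
```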

If disengagement is not the right metric, what should NHTSA use?

Koopman suggests Safety Performance Indicators (SPIs). He explained, “SPI has to do with whether your assumptions and claims about your safety case are really true.” Given that every AV company out there testing uses a different set of sensors and a different E/E architecture, designed for different applications and different Operational Design Domains (ODDs), no two AV suppliers’ safety arguments are the same.
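As a rough illustration of the concept, the sketch below treats an SPI as a monitored claim: the safety case asserts something about the system, and field data either supports the assumption or trips a threshold that forces the developer to revisit the safety case. The indicators and thresholds are invented for illustration and are not drawn from UL 4600 or from any company’s actual safety case.

```python
# Hypothetical sketch of Safety Performance Indicators (SPIs): each one
# monitors whether an assumption in the safety case still holds in the field.
# The claims, observed values and thresholds here are invented examples.

from dataclasses import dataclass

@dataclass
class SPI:
    claim: str             # assumption made in the safety case
    observed: float        # value measured from field/test data
    threshold: float       # limit beyond which the claim is in doubt
    higher_is_worse: bool = True

    def violated(self) -> bool:
        if self.higher_is_worse:
            return self.observed > self.threshold
        return self.observed < self.threshold

spis = [
    SPI("Pedestrians within the ODD are detected at a rate of at least 98%",
        observed=0.961, threshold=0.98, higher_is_worse=False),
    SPI("Hard-braking events occur less than once per 1,000 miles",
        observed=0.7, threshold=1.0),
]

for spi in spis:
    status = "RE-EXAMINE SAFETY CASE" if spi.violated() else "holding"
    print(f"{spi.claim}: {status}")
```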

Koopman is fine with individual safety cases all being different.

When each safety case varies so much, what’s the point of forcing companies to comply with metrics that are made up to look like an apples-to-apples comparison? Artificially devised uniform metrics would only do more harm than good, because they would inevitably compel AV companies to game the system in order to win the “horse race.”

Speaking of the California DMV requirement for disengagement numbers, Koopman said, “Why is California measuring the horse race when they should be measuring the safety of pedestrians and other road users?”

Koopman maintains that each AV supplier should absolutely make its own safety case. State and federal regulators should be judging the quality of those claims, rather than making up their own metrics and imposing them on AV vendors.

The flip side of Koopman’s argument is, “If these AV companies have fancy technology and they want permission to operate their vehicles on public roads, why not put the burden on them to explain why they are safe and what numbers will prove it?” He added, “If they don’t know what numbers prove their vehicles are safe, what are they doing on public roads?”

Driver’s Test for AVs?
One question that often arises regarding NHTSA’s role in ensuring the safety of self-driving cars is whether regulators should give AVs a driving test, similar to the one humans must pass to earn a driver’s license.

It’s a popular idea that seems to make sense in the eyes of consumers.

Koopman noted, “A ‘driving test’ can help ensure a minimum level of competence before vehicles are let on the road. This is what today’s Federal Motor Vehicle Safety Standards (FMVSS) does.” He cautiously added, “This, however, does not address software-based system safety.”

Why?

“The pitfall is trying to claim that passing the test means the vehicle will have acceptably safe operation. This is not true of FMVSS, and it will not be true of any driving test,” he noted.

Koopman sees the driving test as a minimum requirement. More will need to be done to ensure acceptable safety. He is calling for a “positive trust balance” that includes “testing, good engineering, robust field feedback and a healthy safety culture.”

Put more bluntly, he wrote, “NHTSA should not spend massive resources attempting to define a comprehensive Automated Driving System’s ‘driver test.’”

Collaborate on safety, don’t compete on it
“Generally, more sharing on safety is better,” Koopman stated.

He sees NHTSA in a unique position to foster cooperation — similar to what the airline industry has been doing. “The industry-wide safety is achieved by involving cooperation among all stakeholders.”

The starting point is a “shared repository of potential hazards to be addressed when relevant to an ADS-equipped vehicle’s ODD.” Koopman describes this as “shared lessons learned,” an idea he often refers to as #didyouthinkofthat?

Asked to compare this “repository” to the “Safety Pool” jointly developed by the World Economic Forum and Deepen AI, Koopman said they are different, but he noted that many different ways to share information are all likely to have merit. “It’s important to include both operation safety (e.g. scenario databases) and software system safety (e.g. Functional Safety issues, Safety of the Intended Functionality – SOTIF – issues).”
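One way to picture such a repository is sketched below; the schema, entries and tags are purely speculative and are not how Safety Pool or any existing database is organized. The idea is that hazard entries are tagged by ODD attributes and by whether they concern operational scenarios or software system safety, so a developer can pull every “did you think of that?” lesson relevant to where and how its vehicle operates.

```python
# Speculative sketch of a shared hazard repository keyed by ODD relevance.
# The entries and tags are invented examples, not real database records.

from dataclasses import dataclass, field

@dataclass
class HazardEntry:
    description: str
    category: str                      # operational scenario vs. software system safety
    odd_tags: set = field(default_factory=set)

REPOSITORY = [
    HazardEntry("Pedestrian partially occluded by a parked delivery van",
                category="operational scenario",
                odd_tags={"urban", "daytime", "parked-vehicles"}),
    HazardEntry("Perception confidence drops in heavy rain (SOTIF limitation)",
                category="software system safety",
                odd_tags={"rain", "highway", "urban"}),
]

def relevant_hazards(vehicle_odd: set) -> list:
    """Return lessons learned whose tags overlap with a vehicle's declared ODD."""
    return [h for h in REPOSITORY if h.odd_tags & vehicle_odd]

for h in relevant_hazards({"urban", "daytime"}):
    print(f"[{h.category}] {h.description}")
```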

Don’t forget human operators
Ultimately, self-driving cars will not involve humans in the driving task. However, driver/operator attention is an integral component of lower-level automation systems.

Koopman wants NHTSA to ensure that “the division of tasks between human operators and automated vehicles results in acceptable safety.”

In his NHTSA ANPRM responses, Koopman stressed, “NHTSA should encourage the industry to develop standards for measuring driver engagement in the context of driver monitoring systems and their effectiveness in naturalistic driving situations.”

In particular, Koopman referred to comments made by the National Transportation Safety Board (NTSB), stating that “NHTSA should address all outstanding NTSB recommendations, especially in the area of driver engagement.”

NHTSA needs software skills
NHTSA’s limited resources, in both budget and talent, are well known. Koopman argued that NHTSA should significantly strengthen its staffing in computer-based system skills, especially in the area of software.

As vehicles turn increasingly into computers-on-wheels, “there is simply no way to understand whether a vehicle is acceptably safe without understanding computer technology.”

Koopman also pointed out, “Currently, NHTSA reports routinely do not rule in computer-based system defects, and especially software, when considering potential root causes of mishaps.”

He said that the handwriting is on the wall: “If NHTSA wants to remain relevant to actual safety outcomes, the agency must significantly develop more capabilities in the area of safety critical software.”

 

 
