Alphabet’s Waymo Says Its Tech Would Avoid Fatal Human Crashes

By Ed Ludlow | March 9, 2021

The autonomous-car artificial intelligence from Alphabet Inc.’s Waymo avoided or mitigated the collision in most of a set of virtually recreated fatal crashes, according to a white paper the company published Monday.

The simulations were based on 72 fatal crashes that occurred between 2008 and 2017 in Chandler, Arizona, where Waymo currently operates a small-scale autonomous ride-hail service based on its “Driver” sensors and software. They included 20 incidents involving a pedestrian or cyclist.

“We believe we have an opportunity to improve road safety by replacing the human driver with the Waymo Driver,” Trent Victor, Waymo’s director of safety research and best practices, said in a blog post. “This study helps validate that belief.”

The Driver system failed to avoid or mitigate simulated accidents only when the autonomous car was struck from behind, according to the study. While the paper isn’t an independent assessment, this is the first time an autonomous-vehicle company has shared data showing how its system might perform in real-world fatal crashes, Waymo said in a blog post.

Waymo says it published the study for the benefit of the public, rather than regulators specifically. However, the company said in October it wanted to revive discussions around shared industry safety standards and legislative support for self-driving technology. The National Highway Traffic Safety Administration has also recognized simulation as a key tool in developing autonomous technology.

The Google parent’s self-driving unit is one of a number of well-funded companies racing to commercialize autonomous vehicle technology and pitching safety as a key benefit. General Motors Co.’s Cruise and Amazon.com Inc.’s Zoox are also working on robotaxi fleets using their own proprietary autonomous cars and conducting hundreds of thousands of miles of testing on public roads each year. But Waymo is seen as a front-runner, in part because of the small pilot services it already operates in Arizona.

Waymo conceded in the white paper that simulating human-caused collisions doesn’t in itself prove that autonomous vehicles are ready to handle every scenario that could cause an accident. In particular, the paper cited the potential for human drivers to misinterpret the actions of an autonomous car, or to react differently in a potential crash with an AV than in one with a human-driven car.

The simulations tested Waymo’s technology in two roles: as the “initiator” of an accident and, where two vehicles were involved, as the “responder,” the car reacting to the other driver’s actions.

In truth, the technology didn’t do much out of the ordinary. In the 52 simulations where Waymo’s technology was placed in the role of the crash initiator, it avoided a collision entirely by simply following the rules of the road: yielding appropriately to traffic, observing traffic signals and obeying the speed limit.

In one case, the human in the real-world crash had run a red light. In the simulation, the Waymo Driver didn’t.
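For readers curious how this kind of counterfactual testing works in principle, the idea is to replay a reconstructed crash with one of the original drivers swapped out for a simulated automated driver and record whether the outcome changes. The sketch below is a hypothetical illustration of that idea only; the class names, roles, thresholds and decision logic are assumptions for explanation and do not reflect Waymo’s actual simulation tools or methodology.

```python
# Hypothetical sketch of counterfactual crash-reconstruction testing.
# Names, fields and thresholds are illustrative assumptions, not Waymo's code.
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    INITIATOR = "initiator"   # road user whose actions triggered the original crash
    RESPONDER = "responder"   # road user reacting to the initiator's actions


class Outcome(Enum):
    AVOIDED = "avoided"       # no collision in the replayed scenario
    MITIGATED = "mitigated"   # collision still occurs, but at lower severity
    UNCHANGED = "unchanged"   # outcome essentially the same as the original crash


@dataclass
class ReconstructedCrash:
    crash_id: str
    impact_speed_mph: float     # closing speed at impact in the original crash
    struck_from_behind: bool    # was the subject vehicle rear-ended?


def simulate_with_av(crash: ReconstructedCrash, role: Role) -> Outcome:
    """Replay one reconstructed crash with an automated driver in `role`.

    Toy decision logic standing in for a full physics and behavior simulation:
    an automated driver that obeys signals and speed limits avoids crashes it
    would otherwise have initiated, while rear-end strikes remain largely
    outside its control.
    """
    if role is Role.INITIATOR:
        # Following the rules of the road removes the triggering violation.
        return Outcome.AVOIDED
    if crash.struck_from_behind:
        # Being hit from behind leaves the responder little room to act.
        return Outcome.UNCHANGED
    # Reacting and braking earlier than the human driver reduces impact speed.
    return Outcome.MITIGATED if crash.impact_speed_mph > 25 else Outcome.AVOIDED


if __name__ == "__main__":
    crashes = [
        ReconstructedCrash("C-001", 45.0, struck_from_behind=False),
        ReconstructedCrash("C-002", 30.0, struck_from_behind=True),
    ]
    for crash in crashes:
        for role in Role:
            print(crash.crash_id, role.value, simulate_with_av(crash, role).value)
```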

“Transparency is critical to foster trust with the public in light of a few cases where capabilities were exaggerated,” BloombergNEF analyst Alejandro Zamorano-Cadavid said. “These hindsight tests are a good piece for evaluating the Waymo Driver, and it will be good to see other companies publish results on how their systems performed on the same situations.”

About the photo: The hubcap of a Waymo autonomous Jaguar I-Pace electric vehicle (EV) is seen during an event in New York, U.S., on Tuesday, March 27, 2018. Waymo is teaming up with Jaguar Land Rover on autonomous vehicles, its second major automaker partnership and a big boost for the nascent technology, which has come under scrutiny recently. Photographer: Mark Kauzlarich/Bloomberg
