Most people have probably never even seen a self-driving car, but that could soon change.
A House subcommittee voted late Wednesday to allow up to 100,000 self-driving automobiles onto American roadways.
These new robo-cars won’t have to meet existing safety standards for human-driven automobiles, but manufacturers will have to petition the National Highway Traffic Safety Administration, the federal agency tasked with reducing vehicle-related crashes, for an exemption, explained Bryant Walker Smith, a law professor and self-driving car expert affiliated with Stanford University. That means automakers will need to make a clear case that their self-driving technology is safe enough to operate alongside cars with humans at the wheel.
If passed, the bill would permit autonomous cars to drive on U.S. roads before we have established benchmarks for what it means for self-driving technology to be designed safely. Instead of, say, regulators creating some baseline rules for safe self-driving cars, this legislation proposes that automakers self-certify that their autonomous cars are OK to drive on public roads.
The legislation would also bar states from passing rules to regulate self-driving cars, ostensibly to prevent a patchwork of legislation across the country. States can continue to make licensing, registration, and maintenance requirements for self-driving cars, though, which leaves them some room to control how the technology is deployed within their borders.
Ryan Calo, a law professor who specializes in technology policy at the University of Washington, is concerned about how this legislation could play out. He thinks that regulatory agencies don’t necessarily have the expertise in robotics and artificial intelligence to judge whether an automaker’s exempted self-driving car will actually be safe once the rubber hits the road. “This is an area where it’s especially important to make sure the technology is safe before it gets deployed,” says Calo.
If the bill passes, an automaker could ask, for example, that its self-driving car be exempted from including a brake pedal, because the vehicle would brake via software or a button rather than the pedal current standards require.
According to a statement from Rep. Debbie Dingell, a Democrat from Michigan who voted to pass the bill, she was motivated to back the proposal because human drivers kill a lot of people. More than 35,000 people died on American roadways in 2015, up nearly 8 percent from 2014, according to federal data. In 2016, traffic deaths rose another 6 percent. In fact, the past two years represent the steepest increase in automobile-related deaths in more than half a century.
Automakers gunning to bring new self-driving tech to market regularly contend that if the human element is taken out of the equation, thousands of lives could be saved. After all, robots can’t drive drunk, text while driving, or do any of the other idiotic things that humans get up to while behind the wheel. Some 90 percent of vehicle crashes can be traced to human error, says Walker Smith.
But there’s actually no data yet to back the claim that self-driving cars will lead to fewer vehicle-related deaths; autonomous cars have yet to be deployed at any meaningful scale. And when the high-tech cars do hit the streets, things can go awry.
Take what happened in San Francisco earlier this year, when one of Uber’s self-driving cars ran a red light on its very first day on the road. Then there was the 2016 incident in Florida, when a driver using his Tesla’s Autopilot mode died after the car crashed into a tractor-trailer, having ignored the vehicle’s multiple warnings to retake the wheel.
If this particular legislation doesn’t pass, some other proposal to open roadways to more self-driving cars probably will soon. Even if the technology ultimately proves safer than human-driven cars, the transition could get messy.