Pull Over, Robot! Self-Driving Cars Should Be Off Roads

Self-driving cars — two-ton robots moving at high speed — are not ready for the road, and won't be for many years. Technology companies such as Google and, especially, Tesla are moving far too fast toward granting robot cars total autonomy, because, unlike traditional automakers, they aren't accustomed to software and sensor failures having fatal consequences.

Credit: Mopic/Shutterstock

The U.S. government yesterday (Sept. 19) took a half-step toward regulating self-driving cars. Most media coverage spun that as an endorsement of the technology, but there's an alternative view: The National Highway Traffic Safety Administration (NHTSA) has accepted autonomous vehicles as inevitable, and is stepping in before more people get killed.

Legacy automakers may not be as quick as Tesla to issue security patches for their computerized cars, but at least they deeply understand the risks of taking control away from human drivers, and aren't being so arrogant as to beta-test autonomous vehicles on public roadways. We should look to Detroit and Washington for leadership in this field, not Silicon Valley.

"Advanced automated vehicle safety technologies, including fully self-driving cars, may prove to be the greatest personal transportation revolution since the popularization of the personal automobile nearly a century ago," the NHTSA said in the introduction to its Federal Automated Vehicles Policy paper yesterday. "Automated driving innovations could dramatically decrease the number of crashes tied to human choices and behavior."

The policy paper lays out voluntary guidelines, not mandatory regulations, and does sound more like an endorsement than a warning. But in the fine print, the message is clear that regulations will come — and what those future regulations will look like depends on how well the makers of self-driving cars follow today's guidelines.

The paper asks that all companies involved in self-driving cars, from software coders to car builders to sensor makers to taxi-fleet operators, submit safety assessments covering 15 different criteria to the NHTSA four months before road testing begins. Those criteria range from vehicle cybersecurity to post-crash behavior to fallback mechanisms for when self-driving systems fail.

Compliance with these guidelines will be easier for Tesla, Google and General Motors than it will be for solo tinkerers like George Hotz, the famous iPhone and PlayStation hacker who is building a self-driving car in his garage. But it will force even the big companies to slow down their aggressive testing of robot cars on public roadways.

Far from safe

That's a good thing, because right now that on-street beta testing is leading to accidents. The only confirmed fatal accident involved Joshua Brown, an Ohio man killed in May when his Tesla Model S, speeding on Autopilot, plowed into the side of a tractor-trailer.

Tesla has publicly stated that Autopilot isn't meant to be a truly autonomous system that would permit the driver to completely relinquish control. Yet the system's name implies exactly that. Brown, who was reportedly watching a movie when his car hit the tractor-trailer, may have taken it a bit too literally.

"By marketing their feature as 'Autopilot,' Tesla gives consumers a false sense of security," Consumer Reports executive Laura MacCleery said following Brown's death. "We're deeply concerned that consumers are being sold a pile of promises about unproven technology."

That technology can be fooled, as Brown's own death proved. His car's cameras apparently didn't "see" the white trailer in front of the car as an obstacle, possibly because the trailer was hard to distinguish against a bright sky.

His car's radar would have picked it up, but the latest version of Autopilot at the time was configured to disregard obstacles detected by radar unless they could be confirmed by cameras. (The next version of Autopilot will give radar equal authority.)

It's not just Tesla's sensors that are fallible. At the DEF CON 24 hacking conference in Las Vegas in August, three Chinese researchers showed how easy it was to make fake obstacles appear, and real ones disappear, from the navigation systems of Tesla, Audi, Volkswagen and Ford vehicles. Most of these scenarios involve assisted rather than autonomous driving, and the humans behind the wheel would often be able to stop in time. Autonomous vehicles might not have that fallback option.

One of the biggest makers of vehicle camera systems is an Israeli firm called Mobileye, which supplies numerous car makers. But following Brown's May crash, Mobileye and Tesla had a falling out. This month, Mobileye's chairman and chief technology officer told Reuters that Tesla was "pushing the envelope" in terms of vehicle safety.

The human advantage

Let's look at Tesla's and Google's claims that autonomous vehicles are safer than regular cars because they eliminate human error. That may be true on an empty road with stationary obstacles. But self-driving cars have to share the road with human drivers, and human drivers seem to hit self-driving cars twice as often as they hit regular vehicles, according to a University of Michigan study from October 2015.

That may be because self-driving cars are too cautious, too observant of the law, and too slow to adapt to rapidly changing circumstances. A Google autonomous vehicle was famously rear-ended by a human driver in Mountain View, California — because the Google car braked too quickly at a stop sign. (The crash was at a whopping 4 mph.)

You might imagine that caution, lawfulness and moderate speed are good things. On the road, they're often not. No one drives 55 on the freeway, and those who do had better be in the slow lane.

An autonomous vehicle in heavy, but steady, freeway traffic would be a cybernetic grandma, stuck in the fast lane doing the speed limit and too scared to change lanes even as angry drivers behind it pressed close to its bumper. Updated software could mitigate that behavior, but you'd have to program the robot car to routinely break the law — and that's something no corporation wants to be caught doing.

And that rosy scenario is in freeway traffic during clear daylight, possibly the most predictable form of traffic there is. Regular driving involves reacting instantly to darkness, heavy rain, snow, kids chasing balls into the street, cyclists and cargo suddenly falling off trucks.

"There's nothing that's even remotely approaching the ability to do that," Steve Shladover, director of the University of California's Partners for Advanced Transportation Technology (PATH) program, told the CBC in a May 2015 article. "Even the most sophisticated of those test vehicles is far inferior to a novice driver."

I just stepped outside to get lunch in midtown Manhattan and watched taxis abruptly change lanes, bicycle deliverymen weave in and out of traffic, and pedestrians stand on the street (not the sidewalk) at corners — and then race across the street before the next car comes.

New York City drivers know how to drive in such chaotic situations. It'll be a long time before a robot car programmed in suburban California can do that.

Paul Wagenseil

Paul Wagenseil is a senior editor at Tom's Guide focused on security and privacy. He has also been a dishwasher, fry cook, long-haul driver, code monkey and video editor. He's been rooting around in the information-security space for more than 15 years at FoxNews.com, SecurityNewsDaily, TechNewsDaily and Tom's Guide, has presented talks at the ShmooCon, DerbyCon and BSides Las Vegas hacker conferences, shown up in random TV news spots and even moderated a panel discussion at the CEDIA home-technology conference. You can follow his rants on Twitter at @snd_wagenseil.