Self-Driving Cars Could 'Create Hell on Earth'

LAS VEGAS — Autonomous driving systems such as Tesla's Autopilot or Google's self-driving cars are far from being safe, security researcher Davi Ottenheimer said last week at the BSides Las Vegas hacker conference here, adding that letting such machines make life-or-death decisions might "create hell on earth."

Hands on the wheel! Credit: Vladyslav Storozhylov/Shutterstock

The problem, Ottenheimer said, is that we put too much faith in the infallibility of computers and algorithms, especially when "machine learning" and massive amounts of data, just waiting to be processed, seem to promise a boundless future of rapid innovation. Instead, he said, it's time to slow down, admit that machines will take a long time to catch up, and let humans take the wheel.

"We believe that machines will be better than us," Ottenheimer said. "But they repeat the same mistakes we do, only faster."

MORE: Tesla Self-Driving Death Needs to Spur Changes

Ottenheimer said Tesla bears some responsibility for the death of Joshua Brown, the Ohio man who was killed May 7 when his Tesla Model S's auto-driving system apparently failed to see a tractor-trailer crossing the Florida highway on which Brown was traveling. But Ottenheimer added that Brown was just as culpable for letting the car take over under the assumption that it would be a better driver than he was.

"Computers are increasingly bearing the burden of making our decisions," Ottenheimer said. "But Brown's death was a tragedy that didn't have to happen."

In the six months before he died, Brown had posted nearly two dozen video clips to YouTube showing the performance of Autopilot in various conditions, and he seemed to understand Autopilot and its limitations well.

Yet Ottenheimer played Brown's last clip, posted April 5, in which Brown's Tesla is nearly forced off a highway near Cleveland by a work truck drifting into his lane. It looks as if Brown wasn't being as attentive as he should have been, which Brown himself acknowledged in the text accompanying the clip.

Tesla Autopilot Near Miss

"I actually wasn't watching that direction and Tessy (the name of my car) was on duty with autopilot engaged," Brown wrote. "I became aware of the danger when Tessy alerted me with the 'immediately take over' warning chime and the car swerving to the right to avoid the side collision."

The big white work truck was forward and slightly to the left of Brown's Tesla, at roughly 11 o'clock, in military parlance. An attentive driver would have noticed the truck merging into an adjoining lane and anticipated that he or she would soon be in the truck's blind spot. Instead, Ottenheimer contends, Brown's reliance on Autopilot created a double blind spot, in which neither Brown nor the work truck's driver was aware of the other.

"Brown shifted the burden of safety to the car," Ottenheimer said. "He should have reacted much earlier, but he waited 20 seconds for the car to take over."

Tesla's Autopilot saved Brown then, but it failed a month later, when Brown had his fatal encounter with the tractor-trailer in Florida. Tesla argued in the wake of Brown's death that perhaps the car's camera didn't see the white trailer against a bright sky.

Ottenheimer had a slightly different theory, based on the truck driver's recollection that Brown's car changed lanes at the last moment to aim right for the center of the trailer. (Had the Tesla swerved left, it would have gone into a left-hand turn lane and perhaps been safer.)

"The Tesla may have thought there was separation between the front and rear wheels of the trailer," Ottenheimer said. "It may have thought there was an open lane."
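Ottenheimer's theory can be illustrated with a deliberately naive toy check. This is not Tesla's actual perception code; the function, threshold and height values below are invented for illustration. The idea is that a forward-obstacle rule that only flags objects low to the road will see the raised midsection of a trailer, which has open air beneath it, as a clear lane:

```python
# Toy sketch (hypothetical, not Tesla's algorithm): a naive obstacle check
# that treats the path as blocked only if something sits below a fixed
# vertical ceiling. A trailer bed suspended between its front and rear
# wheels leaves that low zone empty, so the naive rule reports "open lane"
# -- the failure mode Ottenheimer hypothesizes.

def lane_looks_open(obstacle_heights_m, sensor_ceiling_m=1.0):
    """Return True if every detected obstacle is above the ceiling,
    i.e. nothing blocks the low zone the naive rule cares about."""
    return all(h > sensor_ceiling_m for h in obstacle_heights_m)

# A passenger car ahead: bodywork starts near road level -> blocked.
car_ahead = [0.3, 0.8, 1.4]

# A trailer crossing the lane: only its raised bed (illustrative ~1.2 m
# ground clearance) spans the gap between the wheel sets.
trailer_midsection = [1.2, 1.5]

print(lane_looks_open(car_ahead))          # False: low bodywork detected
print(lane_looks_open(trailer_midsection)) # True: naive rule sees a gap
```

The point of the sketch is only that a rule which is reasonable for most traffic (cars present obstacles near road level) can fail catastrophically on an object whose solid mass sits above the zone being checked.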

Such mistakes are to be expected from machine algorithms, Ottenheimer said. They're simply not as smart as people.

As examples, he cited Google searches for "professional hair" that returned images of mostly white people, while "unprofessional hair" returned images of mostly black people; facial-recognition programs that judged pro wrestlers who were losing their matches to be happy; and a robot security guard that ran over a toddler at a shopping mall in the heart of Silicon Valley in June because it couldn't predict the whereabouts of small children. (The toddler hurt his foot but was otherwise OK.)

"Was Joshua Brown a victim of innovation?" Ottenheimer asked. "Yes, and so was that toddler."

Ottenheimer said self-driving cars can barely handle freeway driving, the simplest kind there is. They won't be able to deal with city traffic, he said, or to navigate a crowded parking lot with vehicles and pedestrians coming from all directions.

To make self-driving cars safer, he suggested that Silicon Valley innovators step back, realize that they're practicing what Ottenheimer called "authoritomation" — the process of transferring authority to automation — and consider what that means.

"We're getting out of engineering, and moving into philosophical issues of responsibility and authority," Ottenheimer said. "Don't expect an easy help button."
