The Case for Making Self-Driving Cars Think Like Humans
Car crashes kill more than a million people each year, and roughly 90 percent of them are the result of human error. That’s the strongest argument for developing self-driving cars: Humans are lousy drivers. Coolly logical robots, the thinking goes, will far exceed us.
The irony is that the best way to program those robots to drive may be to program them to drive more like humans. The UK is studying this very idea as part of “Move-UK,” an $8 million, three-year project intended to bring self-driving cars to market, on the double.
The researchers see three potential advantages to making robo-cars ape human habits. A robot that behaves more like a human driver could make the technology feel familiar and reassure passengers who might be reluctant to surrender control. It could blend into and interact with traffic more naturally. And it might be better suited to handling situations where human intuition outperforms fixed algorithms. “There are other ingredients than pure technology,” says Dr. Wolfgang Epple, head of R&D at Jaguar Land Rover.
Granted, the non-human qualities of autonomous cars are what will make them so useful. They can’t get distracted, or drunk, or fall asleep at the wheel. They can make decisions in emergency situations in near real-time, without being clouded by emotion or confusion. But if they’re going to share the road with human drivers—and they will—they would do well to imitate some human habits.
That’s partly to help convince humans to give up the wheel. Epple points out how aggravating it can be to sit in a car with someone who drives too slowly, or too aggressively, for your taste. If your car’s autonomous mode makes it behave like a hyper-cautious teenager during a road test, you’ll probably never hand over control. And that means missing a big opportunity to save lives.
Acting like a human should also help autonomous cars blend in. “You want autonomous cars to not behave robotically, in a mechanical way that is different from the way that other human drivers would react,” says John Dolan, who studies autonomous technology at Carnegie Mellon’s Robotics Institute. Google has seen the truth in this, programming its car to edge forward at four-way stops as a human would, to signal that it wants to proceed. Google’s cars also go with the flow on the highway—even if that means speeding—because its engineers believe matching the speed of traffic trumps following the letter of the law. In other words, to work within a world of human drivers, self-driving cars have to go just a bit native.
The last benefit of this research is the hardest to pin down. Jaguar Land Rover says data from the study “will reveal the natural driving behaviors and decision-making” of humans in situations like roundabouts, crossing intersections, and dealing with an emergency vehicle approaching from behind. Dolan’s not so sure about that last example—the car should automatically get out of the way, as long as it’s safe—but says there is real value in studying human habits. When you merge onto a busy highway, for example, you silently telegraph your intentions to other drivers by accelerating or slowing down. “There’s the question of how do you develop that algorithm,” Dolan says. “You can learn things from observing what people do.”
It’s likely that someday, the era of human driving will end, and our descendants will chuckle at the idea of us actually using a steering wheel and pedals. But that’s a long way off, and before the coming robo-cars fully take over, they’ll have to learn at least some of our ways—if only for our sake.