If you follow the ongoing creation of self-driving cars, then you probably know about the classic thought experiment called the Trolley Problem. A trolley is barreling toward five people tied to the tracks ahead. You can switch the trolley to another track—where only one person is tied down. What do you do? Or, more to the point, what does a self-driving car do?

Even the people building the cars aren’t sure. In fact, this conundrum is far more complex than even the pundits realize.

Now, more than ever, machines can learn on their own. They’ve learned to recognize faces in photos and the words people speak. They’ve learned to choose links for Google’s search engine. They’ve learned to play games that even artificial intelligence researchers thought they couldn’t crack. In some cases, as these machines learn, they’re exceeding the talents of humans. And now, they’re learning to drive.

Many companies and researchers are moving toward autonomous vehicles that will make decisions using deep neural networks and other forms of machine learning. These cars will learn to identify objects, recognize situations, and respond by analyzing vast amounts of data, including what other cars have experienced in the past.

So the question is, who solves the Trolley Problem? If engineers set the rules, they’re making ethical decisions for drivers. But if a car learns on its own, it becomes its own ethical agent. It decides who to kill.

“I believe that the trajectory that we’re on is for the technology to implicitly make the decisions. And I’m not sure that’s the best thing,” says Oren Etzioni, a computer scientist at the University of Washington and the CEO of the Allen Institute for Artificial Intelligence. “We don’t want technology to play God.” But nobody wants engineers to play God, either.

If Machines Decide

A self-learning system is quite different from a programmed system. AlphaGo, the Google AI that beat a grandmaster at Go, one of the most complex games ever created by humans, learned to play the game largely on its own, after analyzing tens of millions of moves from human players and playing countless games against itself.

In fact, AlphaGo learned so well that the researchers who built it—many of them accomplished Go players—couldn’t always follow the logic of its play. In many ways, this is an exhilarating phenomenon. In exceeding human talent, AlphaGo also had a way of pushing human talent to new heights. But when you bring a system like AlphaGo outside the confines of a game and put it into the real world—say, inside a car—this also means it’s ethically separated from humans. Even the most advanced AI doesn’t come equipped with a conscience. Self-learning cars won’t see the moral dimension of these dilemmas. They’ll just see a need to act. “We need to figure out a way to solve that,” Etzioni says. “We haven’t yet.”

Yes, the people who design these vehicles could coax them to respond in certain ways by controlling the data they learn from. But pushing an ethical sensibility into a self-driving car’s AI is a tricky thing. Nobody completely understands how neural networks work, which means people can’t always push them in a precise direction. But perhaps more importantly, even if people could push them toward a conscience, what conscience would those programmers choose?

“With Go or chess or Space Invaders, the goal is to win, and we know what winning looks like,” says Patrick Lin, a philosopher at Cal Poly San Luis Obispo and a legal scholar at Stanford University. “But in ethical decision-making, there is no clear goal. That’s the whole trick. Is the goal to save as many lives as possible? Is the goal to not have the responsibility for killing? There is a conflict in the first principles.”

If Engineers Decide

To get around the fraught ambiguity of machines making ethical decisions, engineers could certainly hard-code the rules. When big moral dilemmas come up—or even small ones—the self-driving car would just shift to doing exactly what the software says. But then the ethics would lie in the hands of the engineers who wrote the software.
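To make the idea of hard-coded ethics concrete, here is a deliberately crude, hypothetical sketch. It assumes engineers chose a purely utilitarian rule—minimize predicted casualties—which is an assumption for illustration only; no automaker or research group is known to have published such logic, and every name below is invented.

```python
# Hypothetical illustration only: what a hard-coded utilitarian rule could
# look like if engineers wrote the ethics directly into software. All names
# and numbers are invented; no real autonomous-vehicle system is known to
# work this way.

def choose_maneuver(options):
    """Pick the maneuver whose predicted outcome harms the fewest people.

    Each option is a (maneuver_name, predicted_casualties) pair.
    """
    # A purely utilitarian policy: minimize the casualty count,
    # with no weight given to who bears responsibility for the outcome.
    return min(options, key=lambda option: option[1])[0]

# A trolley-shaped choice: stay on course (five at risk) or swerve (one at risk).
print(choose_maneuver([("stay_course", 5), ("swerve", 1)]))  # -> swerve
```

The sketch also shows why the choice is fraught: a one-line change to the ranking function—say, penalizing maneuvers where the car actively redirects harm—encodes an entirely different moral philosophy, and someone has to pick one.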

It might seem like that’d be the same thing as when a human driver makes a decision on the road. But it isn’t. Human drivers operate on instinct. They’re not making calculated moral decisions. They respond as best they can. And society has pretty much accepted that (manslaughter charges for car crashes notwithstanding).

But if the moral philosophies are pre-programmed by people at Google, that’s another matter. The programmers would have to think about the ethics ahead of time. “One has forethought—and is a deliberate decision. The other is not,” says Lin. “Even if a machine makes the exact same decision as a human being, I think we’ll see a legal challenge.”

Plus, the whole point of the Trolley Problem is that it’s really, really hard to answer. If you’re a utilitarian, you save the five people at the expense of the one. But as the boy who has just been run over by the train explains in Tom Stoppard’s Darkside—a radio play that explores the Trolley Problem, moral philosophy, and the music of Pink Floyd—the answer isn’t so obvious. “Being a person is respect,” the boy says, pointing out that the philosopher Immanuel Kant wouldn’t have switched the train to the second track. “Humanness is not like something there can be different amounts of. It’s maxed out from the start. Total respect. Every time.” Five lives don’t outweigh one.

On Track to an Answer?

Self-driving cars will make the roads safer. They will make fewer errors than humans. That might present a way forward—if people see that the cars drive better than humans do, maybe they’ll start to trust the cars’ ethics. “If the machine is better than humans at avoiding bad things, they will accept it,” says Yann LeCun, head of AI research at Facebook, “regardless of whether there are special corner cases.” A “corner case” would be an outlier problem—like the one with the trolley.

But drivers probably aren’t going to buy a car that will sacrifice the driver in the name of public safety. “No one wants a car that looks after the greater good,” Lin says. “They want a car that looks after them.”

The only certainty, says Lin, is that the companies making these machines are taking a huge risk. “They’re replacing the human and all the human mistakes a human driver can make, and they’re absorbing this huge range of responsibility.”

What does Google, the company that built the Go-playing AI and is farthest along with self-driving cars, think of all this? Company representatives declined to say. In fact, such companies fear they may run into trouble if the world realizes they’re even considering these big moral issues. And if they aren’t considering the problems, those problems will be even tougher to solve.

Self-Driving Cars Will Teach Themselves to Save Lives—But Also Take Them