4 ways humans will live and collaborate with robots
Humans and robots have a precarious yet unavoidable relationship. This was one idea underpinning Minds and Machines, the opening topic of the Next: Economy event on Thursday in San Francisco. The set of talks covered automation and how it will or won’t affect various aspects of our lives.
Here are four takes on the relationship between humans and robots.
The first talk covered autonomous cars. Steven Levy from Backchannel talked with Sebastian Thrun, CEO and co-founder of Udacity. Thrun had come from Stanford University and was recruited to Google by Larry Page to work on self-driving cars. He said that when Page first asked him about the idea of autonomous cars, he told the Google cofounder it was impossible. But when Page asked why, and Thrun considered it further, he realized that just because something hadn't been done before didn't mean it couldn't be done.
Levy asked him about the potential economic consequences of these cars, like putting taxi drivers out of work. Thrun countered with the benefits: fewer traffic-related deaths and injuries, reduced gas usage, and the ability for people to live farther out.
On the subject of fatalities, Thrun turned to machine learning. When a human driver makes a mistake and learns from it, he or she is the only one who benefits from that learning.
“In robot land, once a car makes a mistake, no other car will make that mistake,” he said. And because machines will be able to learn faster than humans, that machine learning will extend into other fields.
Levy brought up the trust issue some have with machines. Thrun said it's hard for people to say whether they trust machines, because trust isn't quite binary. People already rely heavily on machines, whether it's their phones, their banks, or airplanes.
For the next session, Levy talked with Adam Cheyer from Viv (the company founded by the team that built Siri), and Alexandre Lebrun, who works on M, the personal assistant project Facebook launched after acquiring Lebrun's startup Wit.ai.
Cheyer talked about the idea of a post-mobile world, where personal assistants are cloud-based and humans can make rich queries, like saying "On the way to my brother's house, I need to pick up a good wine that goes with lasagna." Viv would query websites and services for everything from the best way to get to the destination, to wine stores along the way, to lasagna recipes (to determine wine pairings), and more.
Viv is “another way to interact with the websites and web services of the world,” Cheyer said.
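As a rough illustration of what breaking such a rich query into service-level requests could look like (this is a hypothetical sketch, not Viv's actual architecture or API; the service names and the hand-written plan are assumptions):

```python
# Hypothetical sketch: decomposing a compound assistant query into
# sub-queries against separate web services. A real system would parse
# intent automatically; here the plan is hand-written for illustration.

def decompose(query: str) -> list[dict]:
    """Break a compound request into service-level sub-queries."""
    return [
        {"service": "maps",    "ask": "route to brother's house"},
        {"service": "recipes", "ask": "typical lasagna ingredients"},
        {"service": "wine",    "ask": "pairing for lasagna"},
        {"service": "stores",  "ask": "wine shops along the route"},
    ]

plan = decompose("On the way to my brother's house, I need to pick up "
                 "a good wine that goes with lasagna.")
for step in plan:
    print(step["service"], "->", step["ask"])
```

The point of the sketch is the fan-out: one natural-language request becomes several coordinated queries whose answers are stitched back together.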
M will be a personal assistant built into Facebook Messenger that can handle things like ordering flowers, making reservations, even dealing with the cable company.
Levy asked whether this would put people out of jobs. Lebrun said M will need trainers so the program can learn more and more domains, so it could create jobs in that sense.
Levy then brought out John Markoff from the New York Times, and Jerry Kaplan of Stanford, author of Humans Need Not Apply.
Markoff recalled the early days at Stanford, where he saw two camps in artificial intelligence: those working on fully autonomous systems, and those using AI as a way to augment humans. The two camps largely don't talk to each other. He said he looks to the designers on both sides.
“They have the choice to basically design people into the future or not,” he said.
In discussing items like robot bosses and those relationships, Levy also brought up the 2013 movie Her, in which a man falls in love with an operating system.
Cheyer said that his impulse while watching was to try to reverse-engineer everything the OS would say, but there was a moment when what "she" said could only have come from pure emotion. He gave up.
“A lot of the emotional components are still a long way out,” Cheyer said.
The panel talked about the human tendency to project emotion onto things and humanize them. Markoff said the Turing Test is really a test of human gullibility.
"The bar is incredibly low. We'll anthropomorphize anything," he said.
From there, the panel talked about the now-infamous Oxford study that said 47% of US jobs are vulnerable to automation.
Kaplan and Markoff said there’s nuance from human interaction involved in many jobs.
"No one wants to go to an undertaker who goes 'I am so sorry for your loss,'" Kaplan said, imitating a robot voice. And what's more, the issue is not automating jobs to replace people, but automating tasks that humans don't really need to be doing.
The tension between humans and robots used to be simple to understand, said Narrative Science’s Kristian Hammond—the robots were going to kill us. Now it’s worse—they’re going to take our jobs.
Hammond talked about Narrative Science and the hubbub caused when its tool, Quill, was first able to take data, like a quarterly earnings report, and generate an article for a publication like Forbes. The fear was that it would destroy journalism.
“We can take what they do in terms of looking at the data, figuring out what to say, and turning it into language,” he said.
Instead, what evolved wasn't a legion of robot writers, but analysts. Hammond's point was that there are not enough data scientists to analyze the amount of data being generated.
Plus, in an age of personalization, it could make more sense for something like Quill to look at the credit card statements for a small business, and make suggestions on spending habits and the like.
Other uses could be financial portfolio commentary, or sensor-driven reporting for IoT.
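The pipeline Hammond describes, looking at the data, figuring out what to say, and turning it into language, can be sketched with a toy template-based generator (an illustration of the general technique, not Narrative Science's actual implementation; the field names and figures are made up):

```python
# Toy data-to-text generator illustrating the pipeline: analyze the
# data, select what is newsworthy, then render it as a sentence.
# Not Quill's code; the report structure here is an assumption.

def analyze(report: dict) -> dict:
    """Decide what's worth saying about a quarterly earnings report."""
    change = report["revenue"] - report["prior_revenue"]
    pct = 100 * change / report["prior_revenue"]
    return {"company": report["company"], "pct": pct,
            "direction": "rose" if change >= 0 else "fell"}

def render(facts: dict) -> str:
    """Turn the selected facts into natural language."""
    return (f"{facts['company']} revenue {facts['direction']} "
            f"{abs(facts['pct']):.1f}% versus the prior quarter.")

# Hypothetical input data.
report = {"company": "Acme Corp", "revenue": 120.0, "prior_revenue": 100.0}
print(render(analyze(report)))
```

Swapping in different templates and analysis rules per domain is what would let the same skeleton cover earnings reports, portfolio commentary, or sensor-driven summaries.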
“Data science was the sexiest job of the 21st century, now it’s the next job we’re going to automate,” he said.