AI Helps Humans Best When Humans Help the AI
I signed up for Clara last week.
Clara’s creators bill it as “the first intelligent, natural language interface that feels human—a virtual employee you can depend on.” Starting today, after months of private testing, it’s available to the world at large. I first noticed it about a year ago, when TechCrunch, the popular tech news site, reported that the company behind the tool, Clara Labs, had landed funding from big-name venture capital firm Sequoia Capital.
I emailed the young Silicon Valley startup, but founder Maran Nelson told me the company wasn’t ready for press. She got back in touch last week, inviting me to sign up. I did, and it’s an intriguing thing—in part because it demonstrates the growing pains of modern AI. You see those same pains with, say, Apple’s Siri or the digital assistant Facebook is testing with a small number of people in the San Francisco Bay Area. AI has come a long way, but it still needs human help.
As it stands today, Clara helps coordinate meetings—via email—and generally manages your online calendar. When you’re trying to set up a phone meeting with someone, you cc: Clara, and the tool arranges a time that works for everyone and mails calendar invites. You also can ask it to add a meeting to your calendar, something I did just minutes before writing this sentence. Diede van Lamoen, who juggles myriad phone meetings each week, chatting with people across the globe, has used the tool for a year, and he says it saves him enormous amounts of time. “It’s been a godsend,” he says. “I can outsource all the scheduling.”
For me, it worked well enough, though I’m not sure how much time it saves. And it occasionally screws up. The same can be said for any digital assistant, and, well, humans screw up too. For a while there, Clara wasn’t getting my time zone right, and apparently, that was my fault. I’d signed up with the wrong time zone. With Clara, the larger point is that this digital assistant doesn’t just serve humans. It’s driven by humans, at least in part—humans who take the “virtual” out of “virtual assistant.”
‘Why Did This Person…?’
The company website says Clara is “powered by machine intelligence and trained by executive assistants.” Nelson says humans (independent contractors working in locations around the world) don’t just train the system but, in certain situations, take control of it. When you send Clara an email, Nelson says, the system analyzes it in an effort to generate a response, and it then sends this analysis to a human for approval. And if a task falls outside of what the machine can do, the human will take over. A contractor will, say, respond to you with a long involved email.
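The workflow Nelson describes can be sketched as a simple human-in-the-loop pipeline. The function names and the "schedule" heuristic below are hypothetical stand-ins, not Clara's actual system: the machine drafts a reply when it can, and a human contractor either approves the draft or takes over entirely.

```python
# A minimal sketch (all names hypothetical) of the human-in-the-loop flow:
# the machine attempts a response, then a human reviews it before sending.

def machine_draft(email: str):
    # Stand-in for the machine's analysis: it only handles the simple
    # scheduling requests it recognizes, and punts on everything else.
    if "schedule" in email.lower():
        return "How about Tuesday at 2pm?"
    return None  # task falls outside what the machine can do

def human_review(email: str, draft):
    # The contractor approves the machine's draft, or, when the machine
    # produced nothing usable, writes a reply from scratch.
    if draft is not None:
        return draft
    return "A human-written reply to: " + email

def respond(email: str) -> str:
    # Every outgoing message passes through a human before it is sent.
    return human_review(email, machine_draft(email))

print(respond("Can you schedule a call?"))
print(respond("Please summarize this contract."))
```

The key design point is that the human sits between the machine and the user on every message, which is why failures look like human failures rather than machine ones.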
At one point, when Nelson and I attempted to arrange a meeting, Clara offered a time that didn’t make complete sense, and later, Nelson inadvertently sent me an internal email thread in which she had worked to solve the problem—a thread that laid bare the human involvement.
“Why did this person offer times that are very obviously not tomorrow?” she had written to a support technician. “That is not cool.”
“This was cra-60,” the technician wrote, giving the contractor’s online handle before identifying him by name and providing the name of his “coach.” “Clara error—oversight of customer instruction. Reported in mistakes widget now.”
All of which is fine. But it shows the difficulty involved in building a digital assistant—and the human element often required. Siri isn’t driven by humans. But it screws up constantly in ways a human never would. Clara is driven by humans, and it screws up in ways that humans screw up. Neither humans nor AI is perfect. At least not yet. “AI is much more incremental than what you see in the science fiction of the past,” Nelson says.
The question is how to get to that nirvana where a machine can do everything and do it well. Philosophies differ. X.ai, a startup based in New York, offers a digital assistant a lot like Clara. It schedules meetings. The difference, according to CEO Dennis Mortensen, is that it now operates solely on artificial intelligence. Mortensen believes going whole hog is the best option. If you lean on humans, the AI is slower to reach its potential.
“This is similar to the approach with self-driving cars,” he says. “In our system, humans don’t need to participate at all. This increases the quality of the data you gather, the data you need to train the system.” In other words, the system can’t really learn what it needs to do unless it tries on its own. If you don’t give it a crutch, you can better determine how it needs to improve.
The Human Requirement
That said, in building any artificial intelligence, humans are required on some level (for now). X.ai started with some human training. Google, Facebook, and Microsoft use “deep learning” neural networks—networks of machines that mimic the human brain—to identify photos, recognize speech, and translate from one language to another, and they work remarkably well. But in the beginning, they too require at least a modicum of human training. If you want a machine to teach itself to recognize cat photos, humans must first show it what cat photos look like—that is, tag a bunch of photos and feed them into the neural network. With the new M digital assistant, Facebook is trying to take this process to an entirely new level.
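The "tag a bunch of photos and feed them into the network" step is ordinary supervised learning. Here is a deliberately tiny sketch of the idea using synthetic data and a single-layer logistic model in place of a deep network; the two "features" and the labels are invented for illustration, and real systems learn from raw pixels with far larger networks.

```python
# Toy illustration of supervised training: humans supply labels
# (1 = cat, 0 = not cat) for feature vectors, and the model learns
# a decision boundary from those labeled examples. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Human-labeled training set: two made-up features per "photo"
# (say, ear pointiness and whisker density).
X = np.vstack([rng.normal(2.0, 0.5, (50, 2)),    # cat examples
               rng.normal(-2.0, 0.5, (50, 2))])  # non-cat examples
y = np.array([1] * 50 + [0] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                       # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid predictions
    grad = p - y                           # cross-entropy gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def is_cat(photo: np.ndarray) -> bool:
    """Classify a new, unlabeled feature vector."""
    return bool(1 / (1 + np.exp(-(photo @ w + b))) > 0.5)

print(is_cat(np.array([2.1, 1.8])))    # cat-like features
print(is_cat(np.array([-2.2, -1.9])))  # non-cat features
```

The point of the sketch is the division of labor: humans do the labeling once, up front, and the system generalizes to examples it has never seen.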
When you message a request to M—asking it to do anything from telling a joke to planning a vacation—an AI system formulates a response. But then, much like with Clara, it sends this response to a human, who will approve the response, modify it, or augment it by completing a task a machine couldn’t possibly complete. The plan, however, is not just to better train the current system, which relies on relatively simple AI techniques, but to meticulously gather data on everything the humans do: what websites they visit, what calls they make, what they say on those calls. In the years to come, the company can feed this data into neural networks, so machines can teach themselves to perform these same tasks.
This is a long way from reality. But there are signs it can work. Google AI engineers recently built a chatbot that analyzed a bunch of old movie dialogue and taught itself to carry on a pretty decent conversation about life, the universe, and other stuff.
Where does Clara fit into all this? From the outside looking in, it appears to lean heavily on humans. But Nelson says this will change as time goes on. “One of the strengths of Clara is a very strong feedback loop between the system and the humans,” she says.
That’s one way to solve the problem. Several other services, such as Magic and Operator, are built solely around humans—no AI—and as a result, they can tackle a broader range of tasks. In some ways, humans work better than AI.
In other ways, they don’t. Driving a large online service with human operators is a logistically difficult task that may get more difficult as the service expands. Eventually, we want to reach a point where the machine assistants work so well, humans aren’t needed. But that’s a long way away.