Facebook Open Sources Its AI Hardware as It Races Google
In Silicon Valley, the new currency is artificial intelligence.
Over the last few years, a technology called deep learning has proven so adept at identifying images, recognizing spoken words, and translating from one language to another that the titans of Silicon Valley are eager to push the state of the art even further—and push it quickly. The two biggest players are, yes, Google and Facebook.
At Google, this tech not only helps the company recognize the commands you bark into your Android phone and instantly translate foreign street signs when you turn your phone their way; it also helps drive the Google search engine, the centerpiece of the company’s online empire. At Facebook, it helps identify faces in photos, choose content for your News Feed, and even deliver flowers ordered through M, the company’s experimental personal assistant. All the while, these two titans hope to refine deep learning so that it can carry on real conversations—and perhaps even exhibit something close to common sense.
Of course, in order to reach such lofty goals, these companies need some serious engineering talent. And the community of researchers who excel at deep learning is relatively small. As a result, Google and Facebook are part of an industry-wide battle for top engineers.
The irony is that, in an effort to win this battle, the two companies are giving away their secrets. Yes, giving them away. Last month, Google open sourced the software engine that drives its deep learning services, freely sharing it with the world at large. And this morning, Facebook announced that it will open source the designs for the computer server it built to run the latest in AI algorithms. Code-named Big Sur, this is a machine packed with an enormous number of graphics processing units, or GPUs—chips particularly well suited to deep learning.
It may seem odd that these companies are giving away their technology. But they believe this will accelerate their work and foster new breakthroughs. If they open source their hardware and software tools, a larger community of companies and researchers can help improve them. “There is a network effect. The platform becomes better as more people use it,” says Yann LeCun, a founding father of deep learning, who now oversees AI work at Facebook. “The more people that rally to a particular platform or standard, the better it becomes—the more people contribute.”
Plus, Facebook can curry favor across the community, providing added leverage in recruiting and retaining talent. “Our commitment to open source is something that individuals who work here are passionate about,” says Serkan Piantino, an engineering director in Facebook’s AI group. “Having that be a part of our culture is a benefit when it comes to hiring.”
An Open Source World
This is how the modern tech world works. The Internet’s largest services typically run on open source software. “Open source is the currency of developers now,” says Sean Stephens, the CEO of a software company called Perfect. “It’s how they share their thoughts and ideas. In the closed source world, developers don’t have a lot of room to move.” And as these services shift to a new breed of streamlined hardware better suited to running enormous operations, many companies are sharing their hardware designs as well.
Facebook is the poster child for this movement. In 2011, after years of sharing important software, the company started sharing hardware designs, seeding what it calls the Open Compute Project—a way for any company to share and collaborate on hardware.
As it grew into the Internet’s most dominant force, Google typically treated its most important software and hardware designs as competitive advantages to be kept to itself. But it, too, has opened up in recent years, and releasing its TensorFlow deep learning engine took that openness to a new level. Now, just weeks later, Facebook has open sourced its AI hardware.
Rise of the GPU
Big Sur includes eight GPU boards, each loaded with dozens of chips while consuming only about 300 watts of power. Although GPUs were originally designed to render images for computer games and other highly graphical applications, they’ve proven remarkably adept at deep learning. Deep learning relies on neural networks, vast webs of simple, interconnected processing units that loosely approximate the neurons in the human brain. Traditional processors can drive these networks, but companies like Facebook, Google, and Baidu have found that training is far more efficient when much of the computation is shifted onto GPUs.
Neural nets thrive on data. Feed them enough photos of your mother, and they can learn to recognize her face. Give them enough spoken words, and they can learn to recognize what you say. With GPUs, these neural nets analyze more data, more quickly. The general principle, says Baidu researcher Bryan Catanzaro, is that GPUs give more computational throughput per dollar than traditional CPUs.
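To make the idea concrete, here is a toy sketch of the learning principle described above: a single artificial neuron (a perceptron) that learns to separate two classes of points purely from labeled examples. This is an illustrative assumption of how "learning from data" works at the smallest scale, not Facebook's or Google's actual code; real deep nets stack millions of such units across many layers, which is exactly why GPU throughput matters.

```python
# Toy perceptron: one artificial neuron learning from examples.
# Illustrative sketch only -- not production deep learning code.
import random

random.seed(0)

# Toy dataset with a clear margin between classes:
# class 1 when x + y > 1.2, class 0 when x + y < 0.8;
# ambiguous points in between are discarded.
points, labels = [], []
while len(points) < 150:
    x, y = random.random(), random.random()
    if x + y > 1.2:
        points.append((x, y))
        labels.append(1)
    elif x + y < 0.8:
        points.append((x, y))
        labels.append(0)

# Start with zero weights; nudge them on every mistake.
w0 = w1 = b = 0.0
lr = 0.1
for epoch in range(500):
    errors = 0
    for (x, y), target in zip(points, labels):
        pred = 1 if w0 * x + w1 * y + b > 0 else 0
        if pred != target:
            w0 += lr * (target - pred) * x
            w1 += lr * (target - pred) * y
            b += lr * (target - pred)
            errors += 1
    if errors == 0:  # converged: every example classified correctly
        break

accuracy = sum(
    (1 if w0 * x + w1 * y + b > 0 else 0) == t
    for (x, y), t in zip(points, labels)
) / len(points)
print(f"training accuracy: {accuracy:.2f}")
```

Feed this neuron more examples and its decision boundary gets sharper, which is the same principle, scaled down by many orders of magnitude, behind "the more data you get them, the better they will work."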
After 18 months of development, Big Sur is twice as fast as the previous system Facebook used to train its neural networks. That means it can train twice as many neural networks in the same amount of time—or train networks that are twice as large. In short, Facebook can achieve a greater level of AI at a quicker pace. “The bigger you make the neural nets, the better they will work,” LeCun says. “The more data you get them, the better they will work.” And since deep neural nets serve such a wide variety of applications—from face recognition to natural language understanding—this single system design can significantly advance the progress of Facebook as a whole.
Facebook designed the machine in tandem with Quanta, a Taiwanese manufacturer, and Nvidia, a chipmaker specializing in GPUs. Traditionally, businesses went straight to the likes of Dell, HP, and IBM for the servers that drove their online services. But Facebook—like Google, Amazon, and others—has found that it can save enormous amounts of money by designing systems in tandem with Asian manufacturers such as Quanta.
Facebook says it’s now working with Quanta to open source the design and share it through the Open Compute Project. You can bet this is a response to Google open sourcing TensorFlow, which won some big headlines. To keep attracting top talent, Facebook must keep pace in the perception game.
But according to LeCun, there are bigger reasons for open sourcing Big Sur and other hardware designs. For one thing, this can help reduce the cost of the machines. If more companies start using the designs, manufacturers can build the machines at a lower cost. And in a larger sense, if more companies use the designs to do more AI work, it helps accelerate the evolution of deep learning as a whole—including software as well as hardware. So, yes, Facebook is giving away its secrets so that it can better compete with Google—and everyone else.