Machine learning algorithms recognize patterns in data. One algorithm, the deep neural network, has been particularly successful: it identifies faces, drives cars, and even beats human players at Go. This leads us to a question: can deep neural networks learn as well as humans?

Yann LeCun, NYU professor, Director of AI Research at Facebook, and a top researcher of deep neural networks, doesn’t think so. He argues that deep neural networks are unable to find patterns that they’re not explicitly told to look for. To draw our own conclusions about deep neural networks’ learning capability, we’ll seek to answer three main questions:

  • How do deep neural networks work?
  • How have deep neural networks been applied?
  • How can deep neural networks be improved?

How Deep Neural Networks Work:

Every machine learning algorithm recognizes its own pattern type. As we’ll see, deep neural networks learn a particularly flexible pattern type. But first, let’s go over the three steps of teaching an algorithm to recognize patterns: 1) label the data, 2) decide on the pattern type, and 3) train the algorithm to optimize the parameters of its pattern. We’ll use the perceptron, the basic building block of a neural network, as our example.

Say we want to teach our perceptron to recognize faces. Specifically, we are asking it to label a picture as “face” or “not face”, given “features” from the picture, such as the number of eyes, ears, or colors. To learn, the algorithm needs “training” examples of pictures that have already been labeled “face” or “not face.”

Next, we choose the pattern type. This is the function that the algorithm will calculate to decide if the picture is a face. The perceptron’s pattern type is a weighted sum of its inputs, like “2*# eyes + # nose.” If the sum exceeds a threshold, it “fires”: it outputs a 1, saying “yes, this is a face”. Otherwise, it outputs a 0.
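The weighted-sum-and-threshold rule above can be sketched in a few lines of Python. The feature names, weights, and threshold here are made-up values for illustration, not a trained model:

```python
# A minimal sketch of the perceptron's pattern type: a weighted sum
# of input features compared against a threshold. Weights and the
# threshold are illustrative, made-up values.

def perceptron(features, weights, threshold):
    """Return 1 ("face") if the weighted sum exceeds the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, features))
    return 1 if weighted_sum > threshold else 0

# features: [# eyes, # noses]
print(perceptron([2, 1], [2.0, 1.0], 4.0))  # 2*2 + 1*1 = 5 > 4, so it fires: 1
print(perceptron([0, 1], [2.0, 1.0], 4.0))  # 2*0 + 1*1 = 1 <= 4, so it stays silent: 0
```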

Finally, we train the perceptron. The algorithm looks at each picture’s features, predicts whether it’s a face, and checks whether its prediction is wrong. Then, it uses a method called “backpropagation” to update its weights to minimize its error.
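This training loop can be sketched for a single perceptron. For one unit, the gradient update amounts to the classic perceptron learning rule: nudge each weight in the direction that shrinks the error. The tiny labeled dataset below is invented for illustration:

```python
# A sketch of training a single perceptron on a made-up dataset.
# For one unit, the weight-update step reduces to the classic
# perceptron learning rule.

def predict(weights, bias, features):
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s > 0 else 0

def train(examples, n_epochs=20, lr=0.1):
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(n_epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)  # -1, 0, or 1
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# features: [# eyes, # noses]; label 1 = "face", 0 = "not face"
data = [([2, 1], 1), ([0, 0], 0), ([2, 0], 1), ([0, 1], 0)]
w, b = train(data)
print([predict(w, b, f) for f, _ in data])  # matches the labels: [1, 0, 1, 0]
```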

Deep neural networks consist of layers of perceptrons, each layer taking inputs from the previous layer. By using backpropagation, every single layer of perceptrons can be trained. This gives the network a powerful pattern type. The layers break up the complexity of the task: one learns to recognize lines, while another uses the first to recognize depth. Note that the network does this without being told what lines and depth are! By training it on millions of examples, and iteratively minimizing the error, the individual layers “learn” these “concepts.”
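To see why layers matter, here is a minimal sketch: XOR (output 1 exactly when the two inputs differ) is a pattern a single perceptron cannot represent, but a small two-layer network learns it via backpropagation. The layer size, learning rate, and iteration count are arbitrary choices for illustration:

```python
# A minimal two-layer network learning XOR by backpropagation.
# Real networks use smooth activations like the sigmoid below so
# the error can be propagated backwards through every layer.
# Hyperparameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: the label is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
lr = 1.0

for _ in range(10000):
    # Forward pass: each layer takes inputs from the previous layer.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(out).flatten())  # predictions for the four XOR inputs
```

The hidden layer learns its own intermediate features of the inputs, without being told what they are, much as the article describes layers discovering “lines” and “depth.”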

How Deep Neural Networks Are Applied:

Deep neural networks have been used to recognize highly complex patterns. Google DeepMind’s AlphaGo beat a master at Go by learning to recognize good board configurations, similar to the way that humans intuit good “positions” in chess. Note the network’s learning power is crucial, since there are more possible Go configurations than atoms in the universe! Neural networks have also been used to predict breast cancer survival rates based on structural features in tumor cells. A robot developed in a robotics lab at UC Berkeley learned to hang clothes on a rack using a deep neural network to find actions that led to the goal. And modern voice-recognition software, like Siri, uses deep neural networks to identify speech patterns.

Neural networks’ structure evokes that of the brain, leading many to think that they’ll bring the prophesied “singularity”, in which artificial intelligence self-improves to become smarter than humans themselves. Many leading AI researchers are less concerned about this due to two main barriers: to self-improve, a neural network would need to 1) acquire data in a foreign context and 2) learn from it. We don’t have 1) yet, since humans curate the type of data the network receives. For example, although AlphaGo became a Go master by generating its own data and playing against itself, it was still limited to one type of data: Go games. We don’t have 2) either, since deep neural networks can’t yet learn from unfamiliar contexts. For example, the Berkeley robot stopped hanging clothes as effectively when the researchers changed the background color of its line of sight.

How Deep Neural Networks Can Be Improved:

According to leading deep learning researcher Yoshua Bengio, one major research area for improving neural networks is unsupervised learning: teaching a computer to find patterns without being told what to look for. This is a key step toward having a computer learn patterns in unfamiliar contexts, such as finding anomalies in credit card transactions.
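As a toy illustration of unsupervised pattern-finding, the following flags transactions that deviate sharply from the rest, with no labels telling the algorithm what fraud looks like. This is a simple statistical stand-in, not the neural methods the research targets, and the amounts are made up:

```python
# A toy sketch of unsupervised anomaly detection: flag transaction
# amounts far from the mean, measured in standard deviations.
# A statistical stand-in for neural approaches; data is made up.
import statistics

def find_anomalies(amounts, z_cutoff=2.0):
    """Return amounts more than z_cutoff standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > z_cutoff]

transactions = [12.5, 9.99, 15.0, 11.2, 8.75, 13.4, 950.0, 10.1]
print(find_anomalies(transactions))  # [950.0]
```

No one told the function which transaction was fraudulent; the outlier emerges from the structure of the data itself, which is the spirit of unsupervised learning.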

Another obstacle is learning speed. Deep neural networks train on millions of examples and can take weeks or months to train, which makes it hard to test and improve them quickly.

Finally, deep learning is being extended to increasingly complex tasks, such as reinforcement learning (deciding how to take actions) and symbolic manipulation (reasoning according to rules).

Conclusion:

Let’s come back to our initial question: can deep neural networks learn as well as humans? We’ve found that deep neural networks recognize complex patterns by layering perceptrons. Yet they’re also quite slow, requiring millions of training examples in order to learn the right answer. Above all, they’re (currently) unable to do the unsupervised learning necessary to learn from data in an unknown context. Thus, deep neural networks are no more than a pair of eyes: sharp eyes, but eyes that need to be told what to look for.
