In the future imagined by science fiction, artificial intelligence will reign supreme and take over pretty much everything humans can do. Frankly, this sci-fi vision isn't helpful when it comes to applying technology, because it distracts us from thinking about what people do well and what advanced techniques such as deep learning do well.
In the world of data science, great strides are being made in the area of deep learning. We've made so much progress that it is easy to think that instead of having to embrace data science as a discipline, we can somehow wait a little bit longer and have a Watson-like box perform all of these tasks for us. If you think this way, you are going to miss the boat. Here are three reasons why.
1. We Dole Out the Work

Deep learning, and much of data science, is limited to a narrow set of tasks. Deep learning is the term that has come to describe the most advanced forms of machine learning: systems that attempt to perceive complex patterns in fairly raw data using a variety of statistical and algorithmic techniques. Deep learning has proven effective at perceptual tasks and at recognizing certain attributes, often called features, of images or other data sets. Much of the time you can boil down a huge amount of processing to a much simpler model that then allows you to predict something or find a signal that was hidden. Deep learning victories include language translation (such as Google Translate and Baidu Translate) and speech recognition (which powers Apple Siri and Google Now), as well as image recognition and even playing video games and flying model helicopters.
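The "boil raw data down to a simpler predictive model" idea above can be sketched in a few lines. This is a minimal illustration, not deep learning itself: it assumes scikit-learn and its small bundled digits data set (neither is named in the original), and uses a plain logistic regression fit directly on raw pixel values to recover a hidden signal, the digit's identity.

```python
# Sketch: fit a simple predictive model directly on fairly raw data.
# Assumes scikit-learn and its bundled 8x8 handwritten-digits data set.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # each row is 64 raw pixel intensities

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A much simpler model than a deep network, trained on raw pixels,
# is enough to predict the digit with high accuracy on held-out data.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the pipeline (raw data in, compact predictive model out), which is the same shape deep learning systems follow at a much larger scale.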
The victories of deep learning are worth celebrating. But so far, deep learning systems do some specialized tasks well, and only under certain circumstances. In a fascinating blog post, Zachary Chase Lipton surveyed a variety of papers that pointed out the flaws in deep learning systems. It turns out that they are often brittle and easily fooled. The key lies in understanding when a given technique works and when it doesn't.
Driverless cars don't know where to go or why. Humans are needed to provide context, to frame the problem, to generate the hypothesis, and to decide which deep learning or data science techniques to apply.