What deep learning really means

Perhaps the most positive technical theme of 2016 was the long-delayed triumph of artificial intelligence, machine learning, and in particular deep learning. In this article we'll discuss what that means and how you might make use of deep learning yourself.

Perhaps you noticed in the fall of 2016 that Google Translate suddenly went from producing, on average, word salad with a vague connection to the original language to emitting polished, coherent sentences more often than not -- at least for supported language pairs, such as English-French, English-Chinese, and English-Japanese. That dramatic improvement was the result of a nine-month concerted effort by the Google Brain and Google Translate teams to revamp Translate from its old phrase-based statistical machine translation algorithms to a neural network trained with deep learning and word embeddings, built with Google's TensorFlow framework.

Was that magic? No, not at all: It wasn't even easy. The researchers working on the conversion had access to a huge corpus of translations from which to train their networks, but they soon discovered that they needed thousands of GPUs for training and would have to create a new kind of chip, a Tensor Processing Unit (TPU), to run Translate on their trained neural networks at scale. They also had to refine their networks hundreds of times as they tried to train a system that would be nearly as good as human translators.


Do you need to be Google scale to take advantage of deep learning? Thanks to cloud offerings, the answer is an emphatic no. Not only can you run cloud VM and container instances with many CPU cores and large amounts of RAM, but you can also get access to GPUs, as well as prebuilt images that include deep learning software.

To grasp how deep learning works, you’ll need to understand a bit about machine learning and neural networks, which in effect are themselves defined by how they differ from conventional programming.

Conventional programming involves writing specific instructions for the computer to execute. For example, take the classic "Hello, World" program in the C programming language:
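#include <stdio.h>

int main(void)
{
    printf("Hello, World\n");
    return 0;
}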

This program, when compiled and linked, does one thing: It prints the string "Hello, World" on the standard output stream. It does only what the programmer told it to do, and it does the same thing every time it runs.

You may wonder how game programs sometimes give different outputs from the same inputs, such as swinging your character's ax at a dragon. That requires the use of a random number generator and a program that performs different actions based on the number returned by the generator. A bare-bones version of that logic might look something like this:
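#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned) time(NULL));   /* seed the random number generator */
    int roll = rand() % 20 + 1;     /* simulate rolling a 20-sided die */

    if (roll >= 17)
        printf("Critical hit! Your ax bites deep into the dragon.\n");
    else if (roll >= 8)
        printf("Your ax strikes the dragon.\n");
    else
        printf("Your swing misses.\n");
    return 0;
}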


In other words, if we want a conventional program to vary statistically instead of behaving consistently, we have to program the variation. Machine learning turns that idea on its head.

In machine learning (ML), the essential task is to create a predictor of future outputs from some set of inputs. This is accomplished by training the predictor statistically from historical data.

If the value predicted is a real number, then you are solving a regression problem, such as "What will the price of MSFT stock be on Tuesday at noon?" The complete history of MSFT stock transactions is available for training, as are all the related stocks, news, and economic data that might correlate with the stock price.
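To make the idea of training a predictor from historical data concrete, here's a toy regression in C: an ordinary least-squares line fit over a short series of made-up closing prices (not real MSFT data). A real stock predictor would draw on far more data and a far more sophisticated model.

#include <stdio.h>

int main(void)
{
    /* Hypothetical closing prices -- illustrative only, not real MSFT data */
    double price[] = { 62.1, 62.6, 62.3, 63.0, 63.5, 63.2, 64.0 };
    int n = sizeof price / sizeof price[0];

    /* Ordinary least-squares fit of price = a + b * day */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += i;
        sy  += price[i];
        sxx += (double) i * i;
        sxy += i * price[i];
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* slope */
    double a = (sy - b * sx) / n;                          /* intercept */

    printf("Predicted price for day %d: %.2f\n", n, a + b * n);
    return 0;
}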

If you are predicting a yes or no response, then you are solving a binary or two-class classification problem, such as "Will the price of MSFT stock go up between now and Tuesday at noon?" The corpus of data is the same as for the regression problem, but the algorithms for optimizing the predictor will be different.


If you are predicting more than two classes, then you are solving a multiclass classification problem, such as "What's the best action for MSFT stock? Buy, sell, or hold?" Again, the corpus of data is the same, but the algorithms might be different.
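One crude way to see how these three problem types relate: take a predicted price change (say, from the regression above) and map it onto classes. The thresholds below are arbitrary assumptions, and a real classifier would be trained to make these decisions directly rather than thresholding a regression output, but the sketch shows the shape of a binary answer versus a multiclass one.

#include <stdio.h>

/* Two-class answer: will the price go up or not? */
static const char *up_or_not(double change)
{
    return change > 0 ? "up" : "not up";
}

/* Multiclass answer: buy, sell, or hold? Thresholds are arbitrary. */
static const char *action(double change)
{
    if (change > 0.5)  return "buy";
    if (change < -0.5) return "sell";
    return "hold";
}

int main(void)
{
    double predicted_change = 0.8;   /* e.g., output of a trained regressor */
    printf("Two-class answer: %s\n", up_or_not(predicted_change));
    printf("Multiclass action: %s\n", action(predicted_change));
    return 0;
}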


