What is the Difference Between Deep Learning and “Regular” Machine Learning?


Another concise explanation of a machine learning concept by Sebastian Raschka. This time, Sebastian explains the difference between deep learning and "regular" machine learning.

That's an interesting question, and I'll try to answer it in a very general way. The tl;dr version is: Deep learning is essentially a set of techniques that help us parameterize deep neural network structures, i.e., neural networks with many, many layers and parameters.

And if you're interested, here's a more concrete example: Let's start with multi-layer perceptrons (MLPs)...

On a tangent: The term "perceptron" in MLPs may be a bit confusing since we don't really want only linear neurons in our network. Using MLPs, we want to learn complex functions to solve non-linear problems. Thus, our network is conventionally composed of one or multiple "hidden" layers that connect the input and output layer. Those hidden layers normally have some sort of sigmoid activation function (log-sigmoid or the hyperbolic tangent, etc.). For example, think of a log-sigmoid unit in our network as a logistic regression unit that returns continuous output values in the range 0-1. A simple MLP could look like this:
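For illustration, here is a minimal NumPy sketch of such a one-hidden-layer MLP forward pass (a stand-in with toy values and layer sizes of my own choosing; the names y_hat, the "a"s, and the "w"s mirror the notation used below):

import numpy as np

def sigmoid(z):
    # log-sigmoid activation: squashes values into the range 0-1
    return 1.0 / (1.0 + np.exp(-z))

# toy input: one sample with 3 features
x = np.array([0.5, -1.2, 3.0])

rng = np.random.default_rng(seed=0)
w_hidden = rng.normal(size=(3, 4))  # weights "w": input layer -> hidden layer (4 hidden units)
w_out = rng.normal(size=(4, 1))     # weights "w": hidden layer -> output unit

a_hidden = sigmoid(x @ w_hidden)    # activated hidden neurons (the "a"s)
a_out = sigmoid(a_hidden @ w_out)   # activated output neuron, a value between 0 and 1

y_hat = int(a_out.item() >= 0.5)    # threshold to obtain the predicted class label
print(a_out, y_hat)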


where y_hat is the final class label that we return as the prediction based on the inputs (x) in a classification task. The "a"s are our activated neurons and the "w"s are the weight coefficients.

Now, if we add multiple hidden layers to this MLP, we'd also call the network "deep." The problem with such "deep" networks is that it becomes tougher and tougher to learn "good" weights for them. When we start training our network, we typically assign random values as initial weights, which can be terribly off from the "optimal" solution we want to find. During training, we then use the popular backpropagation algorithm (think of it as reverse-mode auto-differentiation) to propagate the "errors" from right to left and calculate the partial derivatives of the cost with respect to each weight, so that we can take a step in the opposite direction of the cost (or "error") gradient.

Now, the problem with deep neural networks is the so-called "vanishing gradient": the more layers we add, the harder it becomes to "update" our weights because the error signal gets weaker and weaker. Since our network's weights can be terribly off in the beginning (random initialization), it can become almost impossible to parameterize a "deep" neural network with backpropagation.
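To make the vanishing-gradient intuition a bit more concrete, here is a small NumPy sketch (toy numbers chosen purely for illustration, not part of the original explanation) that multiplies the per-layer sigmoid derivatives along one path through the network; since each factor is at most 0.25, the product shrinks roughly geometrically with depth:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # never larger than 0.25 (the maximum, at z = 0)

rng = np.random.default_rng(seed=0)

for n_layers in (2, 5, 10, 20):
    # pretend pre-activations and weights along one path through the network
    zs = rng.normal(size=n_layers)
    ws = rng.normal(size=n_layers)
    # chain rule: the gradient reaching an early weight is a product over all later layers
    grad = np.prod(sigmoid_deriv(zs) * ws)
    print(f"{n_layers:2d} layers -> gradient magnitude ~ {abs(grad):.2e}")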


Now, this is where "deep learning" comes into play. Roughly speaking, we can think of deep learning as "clever" tricks or algorithms that can help us with the training of such "deep" neural network structures.
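One example of what such a trick can look like (this particular sketch is mine and isn't spelled out in the excerpt above): replacing saturating sigmoid activations with ReLU units, whose derivative is exactly 1 for active units, so the backpropagated signal isn't scaled down at every layer:

import numpy as np

def sigmoid_deriv(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)  # capped at 0.25, and close to 0 once the unit saturates

def relu_deriv(z):
    return (z > 0).astype(float)  # exactly 1 for active units, so the signal isn't shrunk

z = np.linspace(-3.0, 3.0, 7)
print("sigmoid'(z):", np.round(sigmoid_deriv(z), 3))
print("relu'(z):   ", relu_deriv(z))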

 


