Why Does Deep and Cheap Learning Work So Well?

The paper at hand approaches the question of why deep learning works from a different perspective, that of physics, and discusses the role of “cheap learning” (parameter reduction) and how it ties into that perspective.

Why does deep learning work so well? And… cheap learning?

A recent paper by Henry W. Lin (Harvard) and Max Tegmark (MIT), titled “Why does deep and cheap learning work so well?”, examines from a physics perspective what it is about deep learning that makes it work so well. It also introduces (at least, to me) the term “cheap learning.”

First off, to be clear, “cheap learning” does not refer to using a low-end GPU; it refers to parameter reduction: the observation that the functions we actually need to approximate in practice can be captured with far fewer parameters than generic functions would require.

The central idea of the paper is that neural network success owes as much to physics as it does to mathematics (perhaps more): the data distributions we care about in practice are exceptionally simple, thanks to properties such as symmetry, locality, compositionality, and polynomial log-probability that trace back to the laws of physics, and it is this simplicity that deep learning exploits in the reality it seeks to model. You may have heard something about this in September; this is the paper on which that news was based.
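
To make the “cheap” part concrete: one construction described in the paper shows that multiplication, the building block of the low-order polynomials physics keeps handing us, can be approximated by a network with just four hidden neurons and a smooth nonlinearity. The snippet below is my own minimal NumPy sketch of that idea, not the authors’ code; the softplus nonlinearity and the scale factor lam are my choices, with lam kept small so that a Taylor expansion around zero applies.

    import numpy as np

    def sigma(u):
        # Softplus: a smooth nonlinearity with nonzero second derivative at 0.
        return np.log1p(np.exp(u))

    SIGMA_PP0 = 0.25  # sigma''(0) for softplus equals logistic'(0) = 0.25

    def approx_multiply(x, y, lam=0.01):
        # Four-neuron "multiplication gate". Taylor expansion gives
        # sigma(u) + sigma(-u) = 2*sigma(0) + sigma''(0)*u**2 + O(u**4),
        # so the combination below approaches x*y as lam shrinks.
        s = (sigma(lam * (x + y)) + sigma(-lam * (x + y))
             - sigma(lam * (x - y)) - sigma(-lam * (x - y)))
        return s / (4 * lam**2 * SIGMA_PP0)

    print(approx_multiply(1.3, -0.7))  # about -0.91, i.e. close to 1.3 * -0.7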

From the abstract:

We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one.
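
One way to build intuition for that claim about depth: computing the product of n numbers is a tiny hierarchical problem. A deep network can stack the four-neuron gate from the sketch above in a binary tree, so the neuron count grows only linearly with n, whereas the paper argues that flattening such compositional computations into a single hidden layer can require exponentially many neurons. The sketch below, again my own illustration reusing the approx_multiply function defined above, mimics the deep, tree-structured version.

    def approx_product(xs, lam=0.01):
        # Compose the multiplication gate pairwise, like a log-depth network:
        # each level halves the number of values, so roughly 4 * (len(xs) - 1)
        # hidden neurons cover the whole product.
        vals = list(xs)
        while len(vals) > 1:
            nxt = [approx_multiply(vals[i], vals[i + 1], lam)
                   for i in range(0, len(vals) - 1, 2)]
            if len(vals) % 2:  # an odd leftover passes through to the next level
                nxt.append(vals[-1])
            vals = nxt
        return vals[0]

    print(approx_product([1.1, 0.9, 1.2, 0.8]))  # about 0.9504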

Read Full Story…

 
