Making data science accessible – Neural Networks

By Dan Kellett, Director of Data Science, Capital One UK

Neural Networks are a family of Machine Learning techniques modelled on the human brain. Being able to extract hidden patterns within data is a key ability for any Data Scientist, and Neural Network approaches may be especially useful for extracting patterns from images, video or speech. The following blog aims to explain at a high level how these methods work and the key things to bear in mind.

A neural network consists of several components:

- Input layer: this reflects the potential descriptive factors that may help in prediction.

- Hidden layer: a user-defined number of layers, each with a specified number of neurons.

- Output layer: this reflects the thing you are trying to predict. For example, this could be the labelling of an image or a more traditional 0/1 outcome.

- Weights: each neuron in a given layer is potentially connected to every neuron in adjacent layers, and the weight sets the importance of this link. At first these weights should be randomized, as in the sketch below.
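
As a rough illustration, a network with these components might be set up as follows. This is a minimal sketch in Python using NumPy; the layer sizes and variable names are assumptions for the example, not details from the original post.

```python
import numpy as np

# Illustrative sizes: 4 input factors, one hidden layer of 5 neurons, 1 output.
n_input, n_hidden, n_output = 4, 5, 1

rng = np.random.default_rng(seed=42)

# Each weight links a neuron in one layer to a neuron in the next layer;
# to start with, the weights are randomized.
W1 = rng.normal(scale=0.1, size=(n_input, n_hidden))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(n_hidden, n_output))  # hidden -> output
```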


In a basic neural network, you train the system by running individual cases through one at a time and updating the weights based on the error. The aim is that, over time, the network becomes attuned to your data, minimizing the error. This updating of the weights in a basic neural network is the result of a two-way process using feed-forward and back-propagation techniques:

Feed-forward involves processing observations one at a time through the network. Given the weights in place, the model produces a prediction, and from this prediction and the actual outcome you can calculate the error in your model for that one observation.

Back-propagation involves taking that error back through the network to adjust the individual weights to better reflect the actual outcome. These new weights are then used for the next observation.
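
To make the two steps concrete, the sketch below runs one observation through a one-hidden-layer network and then adjusts the weights from the resulting error. This is an illustrative Python/NumPy example only; the sigmoid activation, squared-error loss, learning rate and all variable names are assumptions, not details from the original post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(seed=0)
W1 = rng.normal(scale=0.1, size=(4, 5))   # input -> hidden weights, randomized
W2 = rng.normal(scale=0.1, size=(5, 1))   # hidden -> output weights, randomized
learning_rate = 0.1

def train_one(x, y, W1, W2):
    # Feed-forward: push one observation through the network to get a prediction.
    hidden = sigmoid(x @ W1)
    pred = sigmoid(hidden @ W2)
    error = pred - y                       # error for this one observation

    # Back-propagation: pass the error back through the network and nudge the
    # weights so the prediction better reflects the actual outcome.
    delta_out = error * pred * (1 - pred)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * np.outer(hidden, delta_out)
    W1 -= learning_rate * np.outer(x, delta_hidden)
    return W1, W2

# Train on a toy dataset one observation at a time; the updated weights from
# each observation are used for the next one.
X = rng.random((100, 4))
y = (X.sum(axis=1) > 2).astype(float).reshape(-1, 1)
for xi, yi in zip(X, y):
    W1, W2 = train_one(xi, yi, W1, W2)
```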

 


