With plenty of machine learning tools currently available, why would you ever choose an artificial neural network over all the rest? This clip and the next could open your eyes to their awesome capabilities! You’ll get a closer look at neural nets without any of the math or code – just what they are and how they work. Soon you’ll understand why they are such a powerful tool!
Deep Learning is primarily about neural networks, where a network is an interconnected web of nodes and edges. Neural nets were designed to perform complex tasks, such as the task of placing objects into categories based on a few attributes. This process, known as classification, is the focus of our series.
Classification involves taking a set of objects, along with some data features that describe them, and placing them into categories. This is done by a classifier, which takes the data features as input and assigns each object a value, typically between 0 and 1; producing this output is called firing or activation. A high score indicates one class, and a low score indicates the other. There are many different types of classifiers, such as Logistic Regression, Support Vector Machines (SVMs), and Naïve Bayes. If you have used any of these tools before, which one is your favorite? Please comment.
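Even though this series skips the math, a tiny sketch may make "firing" concrete. Below is a minimal logistic-regression-style classifier; the feature values, weights, and bias are made up purely for illustration.

```python
import math

def logistic_classifier(features, weights, bias):
    """Score an object: weighted sum of its features, squashed into (0, 1)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: high score -> one class, low -> the other

# two made-up features for one object, with made-up weights and bias
score = logistic_classifier([2.0, -1.0], [0.8, 0.4], 0.1)
label = 1 if score >= 0.5 else 0  # threshold the activation to pick a class
```

Here the score lands around 0.79, so the object is placed in class 1.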
Neural nets are highly structured networks with three kinds of layers: an input layer, an output layer, and so-called hidden layers, which are any layers between the input and the output. Each node (also called a neuron) in the hidden and output layers has a classifier. The input neurons first receive the data features of the object. After processing the data, they send their output to the first hidden layer. The hidden layer processes this output and sends the results to the next hidden layer. This continues until the data reaches the final output layer, where the output value determines the object's classification. This entire process is known as Forward Propagation, or forward prop. The scores at the output layer determine which class a set of inputs belongs to.
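The layer-by-layer flow described above can be sketched in a few lines. This is a toy forward prop with one hidden layer and one output neuron; every weight and bias below is invented for illustration, not a trained network.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron weighs all inputs, adds its bias, and fires."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward_prop(features, layers):
    """Pass the data features through each layer in turn."""
    activations = features
    for weights, biases in layers:
        activations = layer(activations, weights, biases)
    return activations  # the output layer's scores

# toy net: 2 input features -> 2 hidden neurons -> 1 output neuron
net = [
    ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]),  # hidden layer (made-up weights)
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
scores = forward_prop([1.0, 0.5], net)
```

The single output score sits between 0 and 1, and thresholding it (say, at 0.5) gives the classification.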
Michael Nielsen’s book – http://neuralnetworksanddeeplearning.com/
Andrew Ng Machine Learning – https://www.coursera.org/learn/machine-learning
Andrew Ng Deep Learning – https://www.coursera.org/specializations/deep-learning
Have you worked with neural nets before? If not, is this clear so far? Please comment.
A neural net is sometimes called a Multilayer Perceptron, or MLP. This is a little confusing, since the perceptron refers to one of the original neural networks, which had limited activation capabilities. However, the term has stuck: your typical vanilla neural net is referred to as an MLP.
Before a neuron fires its output to the next neuron in the network, it must first process the input. To do so, it performs a basic calculation with the input and two other numbers, referred to as the weight and the bias. These two numbers are changed as the neural network is trained on a set of training samples. If the accuracy is low, the weight and bias are tweaked slightly, over and over, until the accuracy slowly improves. Once the neural network is properly trained, its accuracy can be as high as 95%.
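To make the tweaking idea concrete, here is a toy example of training a single neuron: its weight and bias start at zero and are nudged slightly on every pass over the samples until the neuron separates the two classes. The data set and learning rate are made up for illustration.

```python
import math

def neuron(x, w, b):
    """One neuron: weigh the input, add the bias, fire through a sigmoid."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# toy samples: negative inputs belong to class 0, positive inputs to class 1
samples = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):                  # many passes of small adjustments
    for x, target in samples:
        out = neuron(x, w, b)
        error = out - target          # how far off the neuron is
        grad = error * out * (1 - out)
        w -= lr * grad * x            # tweak the weight slightly
        b -= lr * grad                # tweak the bias slightly

accuracy = sum((neuron(x, w, b) >= 0.5) == bool(t)
               for x, t in samples) / len(samples)
```

After training, the weight has grown clearly positive and the neuron classifies all four toy samples correctly.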
Nickey Pickorita (YouTube art) –
Isabel Descutner (Voice) –
Dan Partynski (Copy Editing) –
Jagannath Rajagopal (Creator, Producer and Director) –