Intuiting Neural Networks
This post is part of the series Intuiting Predictive Algorithms.
Neural Networks (NNs) consist of layers of “neurons,” where each neuron has an “activity” that is computed as a function of the weighted sum of the activities of the neurons in the previous layer. The first layer of neurons is activated directly by the data, and the activation of a particular neuron in the last layer represents the likelihood of the data belonging to a particular class. NNs are similar to SVMs in that they project the data to a higher-dimensional space and fit a hyperplane to the data in the projected space. However, whereas SVMs use a predetermined kernel to project the data, NNs automatically construct their own projection by iteratively adjusting (“training”) the weights in the intermediate (“hidden”) layers to minimize a loss function. Unlike with the kernel trick in SVMs, training additional layers in an NN incurs significant additional computational cost, and much work has been devoted to optimizing algorithms and hardware usage to speed up the training of Deep Neural Networks (DNNs) consisting of many hidden layers.
Each layer $\ell$ of an NN consists of a parameter matrix $W_\ell,$ where the $i$th row vector contains the weights received by the $i$th neuron in the next layer. If we define $f_\ell$ as the activation function that is applied component-wise (i.e. neuron-wise) at the $\ell$th layer, and include a bias term as a neuron in each layer whose activity is always 1, then the output activities of the network $\ell$ layers after the first layer are given by

$$o_\ell(x) = f_\ell\!\left( W_\ell\, f_{\ell-1}\!\left( W_{\ell-1} \cdots f_1\!\left( W_1\, x \right) \right) \right).$$
We can write this recursively as

$$o_\ell(x) = f_\ell\!\left( W_\ell\, o_{\ell-1}(x) \right), \qquad o_0(x) = x.$$
When counting layers, we do not count the first layer, because it only reads in the data and is therefore not associated with trainable weight parameters. This way, each layer is associated with a weight matrix.
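To make the forward pass concrete, here is a minimal NumPy sketch (the function and variable names are mine, not from any particular library). It assumes each weight matrix includes a final column for the bias neuron whose activity is always 1, and that a single activation function `f` is applied at every layer.

```python
import numpy as np

def forward(x, weights, f):
    """Compute o_n(x) by applying o_l = f(W_l o_{l-1}) layer by layer, with o_0 = x."""
    o = x
    for W in weights:
        o = np.append(o, 1.0)   # bias neuron whose activity is always 1
        o = f(W @ o)            # weighted sum, then component-wise activation
    return o

# Example: 3 inputs -> 4 hidden neurons -> 2 outputs, with a ReLU activation.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3 + 1)), rng.normal(size=(2, 4 + 1))]
print(forward(np.array([0.5, -1.0, 2.0]), weights, lambda z: np.maximum(z, 0.0)))
```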
For a regression network with $n$ layers, a loss function $L(o_n)$ is chosen to compare the output $o_n(x_i)$ to the desired target $y_i$ from the data. Common choices for this loss function include the L1 and L2 errors. For a classification network with $n$ layers, we normalize the output to $\widehat{o_n}(x_i) = o_n(x_i)/\vert\vert o_n(x_i) \vert\vert_1$ (dividing by the sum of its components) so that the components sum to one and can be interpreted as class probabilities. Then, a loss function $L(\widehat{o_n})$ is chosen to measure the discrepancy between the output $\widehat{o_n}(x_i)$ and the desired target (“ground truth”) $y_i$ from the data. For classification, the loss function is usually chosen to be the cross-entropy. For each data point, the cross-entropy is given by

$$L\big(\widehat{o_n}(x_i)\big) = -\sum_j y_{ij}\, \ln \widehat{o_n}(x_i)_j.$$
To make sense of the cross-entropy, notice that it can be simplified to

$$L\big(\widehat{o_n}(x_i)\big) = -\ln \prod_j \widehat{o_n}(x_i)_j^{\,y_{ij}},$$
and if the ground truth is a single class $c,$ i.e. $y_{ic}=1$ and $y_{ij \neq c}=0,$ then it becomes

$$L\big(\widehat{o_n}(x_i)\big) = -\ln \widehat{o_n}(x_i)_c,$$

the negative log of the probability that the network assigns to the correct class.
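As a quick numerical check of this simplification, here is a small sketch (names are illustrative) that normalizes a raw, non-negative output vector by the sum of its components and evaluates the cross-entropy for a one-hot target, confirming that it equals $-\ln$ of the probability assigned to the correct class.

```python
import numpy as np

def cross_entropy(o_n, y):
    """Cross-entropy between a normalized output vector and a target distribution y."""
    p = o_n / o_n.sum()           # normalize so the components sum to 1
    return -np.sum(y * np.log(p))

o_n = np.array([2.0, 1.0, 7.0])   # raw non-negative outputs of the last layer
y = np.array([0.0, 0.0, 1.0])     # ground truth: class c = 2
print(cross_entropy(o_n, y))          # 0.3567...
print(-np.log(o_n[2] / o_n.sum()))    # same value: -ln of the probability of class c
```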
The training algorithm, called gradient descent, consists of iteratively updating the weights at each layer according to

$$W_\ell \leftarrow W_\ell - \alpha\, \frac{\partial L}{\partial W_\ell},$$
where $\alpha$ is the learning rate that governs how quickly the weights change. There are variations of gradient descent, such as stochastic gradient descent (SGD), which estimates the gradient on a randomly chosen subset (“mini-batch”) of the data at each step; the resulting noise helps the weights break out of shallow local minima while still allowing them to settle into deeper ones. Another variant, SGD with momentum, is meant to mimic the trajectory of a ball rolling down a bumpy hill into a valley. The main problem in all of these methods, though, is computing the derivative (“gradient”) of the loss function with respect to the weights. Luckily, there is a pattern to it, which we will see after computing the gradient for $\ell = n, n-1, n-2.$
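The update rules themselves are only a few lines; the sketch below (with names of my choosing) assumes some routine has already computed $\partial L/\partial W_\ell$ for every layer, e.g. by the backpropagation procedure derived next.

```python
def gd_step(weights, grads, alpha):
    """Plain gradient descent: move each weight matrix against its gradient."""
    return [W - alpha * g for W, g in zip(weights, grads)]

def momentum_step(weights, grads, velocity, alpha, beta=0.9):
    """SGD with momentum: the velocity accumulates past gradients like a ball
    rolling downhill, and the weights move along the accumulated velocity."""
    velocity = [beta * v - alpha * g for v, g in zip(velocity, grads)]
    weights = [W + v for W, v in zip(weights, velocity)]
    return weights, velocity
```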
Computing for $\ell = n,$ we have

$$\frac{\partial L}{\partial W_n} = \left[ \frac{\partial L}{\partial o_n} \odot f_n'(W_n o_{n-1}) \right] o_{n-1}^{\,T},$$

where the $\odot$ operation represents the Hadamard (component-wise) product. We define

$$\delta_n = \frac{\partial L}{\partial o_n} \odot f_n'(W_n o_{n-1})$$

so that

$$\frac{\partial L}{\partial W_n} = \delta_n\, o_{n-1}^{\,T}.$$
Computing for $\ell = n-1,$ we have

$$\frac{\partial L}{\partial W_{n-1}} = \left[ \left( W_n^{\,T} \delta_n \right) \odot f_{n-1}'(W_{n-1} o_{n-2}) \right] o_{n-2}^{\,T}.$$

We define

$$\delta_{n-1} = \left( W_n^{\,T} \delta_n \right) \odot f_{n-1}'(W_{n-1} o_{n-2})$$

so that

$$\frac{\partial L}{\partial W_{n-1}} = \delta_{n-1}\, o_{n-2}^{\,T}.$$
Computing for $\ell = n-2,$ we have

$$\frac{\partial L}{\partial W_{n-2}} = \left[ \left( W_{n-1}^{\,T} \delta_{n-1} \right) \odot f_{n-2}'(W_{n-2} o_{n-3}) \right] o_{n-3}^{\,T}.$$

We define

$$\delta_{n-2} = \left( W_{n-1}^{\,T} \delta_{n-1} \right) \odot f_{n-2}'(W_{n-2} o_{n-3})$$

so that

$$\frac{\partial L}{\partial W_{n-2}} = \delta_{n-2}\, o_{n-3}^{\,T}.$$
Putting it all together, we have that

$$\frac{\partial L}{\partial W_\ell} = \delta_\ell\, o_{\ell-1}^{\,T},$$

where

$$\delta_n = \frac{\partial L}{\partial o_n} \odot f_n'(W_n o_{n-1}), \qquad \delta_\ell = \left( W_{\ell+1}^{\,T}\, \delta_{\ell+1} \right) \odot f_\ell'(W_\ell o_{\ell-1}) \quad \text{for } \ell < n.$$
This method for computing the gradients is called “backpropagation,” because we propagate the $\delta_\ell$ terms backwards through the layers, from the last layer to the first layer.
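The whole pattern condenses into one backward loop. Below is a minimal NumPy sketch under the same assumptions as the forward-pass sketch above (bias folded into each weight matrix, one activation `f` with derivative `df`, and `dL_do` returning $\partial L/\partial o_n$); it illustrates the recursion rather than serving as production code.

```python
import numpy as np

def backprop(x, weights, f, df, dL_do):
    """Return dL/dW_l for every layer by propagating delta_l backwards."""
    # Forward pass: remember each layer's (bias-augmented) input and weighted sum.
    inputs, zs, o = [], [], x
    for W in weights:
        o_b = np.append(o, 1.0)      # bias neuron with activity 1
        z = W @ o_b
        inputs.append(o_b)
        zs.append(z)
        o = f(z)

    # Backward pass: delta_n = dL/do_n (Hadamard) f'(z_n), then push it back.
    grads = [None] * len(weights)
    delta = dL_do(o) * df(zs[-1])
    for l in range(len(weights) - 1, -1, -1):
        grads[l] = np.outer(delta, inputs[l])     # dL/dW_l = delta_l o_{l-1}^T
        if l > 0:
            back = weights[l].T @ delta           # W_l^T delta_l
            delta = back[:-1] * df(zs[l - 1])     # drop the bias row, apply f'
    return grads
```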
The equations for backpropagation also give us insight into our choice of activation function. If we choose a sigmoidal activation function that levels off, then the gradient will vanish for neurons whose weighted-sum inputs are too large in magnitude. But if we choose a linear activation function that maintains a slope of 1 everywhere, then we have nothing more than a linear model, and the network is unable to project the data into a higher-dimensional space before fitting the hyperplane. The solution is to use a “rectified” linear unit (ReLU), which is linear for positive inputs and zero for negative inputs:

$$f(z) = \max(0, z) = \begin{cases} z & z > 0 \\ 0 & z \leq 0. \end{cases}$$
Ideally, we’d use a “softplus” function, which unlike the ReLU is differentiable at zero, but ReLUs are so much faster to compute that we use them anyway. We can usually get away with the slope being zero for negative inputs because the weighted sums in the network tend to be positive for at least some of the training inputs. However, if we set the learning rate too high, we can sometimes end up with neurons whose weighted sums are always negative, and consequently whose gradients and activities are always zero. To overcome this problem of “dead” neurons, one can use leaky ReLUs, which have a small gradient and activity even for negative inputs:

$$f(z) = \begin{cases} z & z > 0 \\ 0.01\, z & z \leq 0. \end{cases}$$
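Both units, along with the derivatives needed for the $f'$ terms in backpropagation, are one-liners; a sketch (the 0.01 slope is a common but arbitrary choice):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_grad(z):
    return (z > 0).astype(float)        # slope 1 for positive inputs, 0 otherwise

def leaky_relu(z, eps=0.01):
    return np.where(z > 0, z, eps * z)  # small slope keeps "dead" neurons trainable

def leaky_relu_grad(z, eps=0.01):
    return np.where(z > 0, 1.0, eps)
```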
That being said, ReLUs may not be the best choice for the output layer of the network, which is supposed to represent a regression or classification prediction. For regressions, linear activation functions are a better choice in the final layer, and for classifications, softmax units are a better choice in the final layer.
Due to the large number of parameters in NNs, they are prone to overfitting. However, the risk of overfitting can be reduced by “dropout,” a method that avoids training all of the weights on all of the training data. Dropout involves randomly turning off or “dropping out” neurons from the network during each training iteration, and keeping the weights of those neurons unchanged during the weight update. Dropout also increases training speed, since dropping out half the neurons in a network roughly cuts the number of computations in half.
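A sketch of dropout applied to one layer’s activities during a training iteration (the 1/keep_prob rescaling, known as “inverted dropout,” is my choice of convention so that no rescaling is needed at test time):

```python
import numpy as np

def dropout(activities, keep_prob=0.5, rng=np.random.default_rng()):
    """Randomly zero out ("drop") neurons for this training iteration."""
    mask = rng.random(activities.shape) < keep_prob   # keep each neuron with prob keep_prob
    return activities * mask / keep_prob              # rescale so expected activity is unchanged

o_hidden = np.array([0.7, 1.2, 0.4, 3.1])
print(dropout(o_hidden))   # roughly half of the activities are zeroed on each call
```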
One type of neural network that has seen widespread success in the realm of image processing is the convolutional neural network (CNN), which reduces the number of parameters (thus enabling deep networks of many layers) by taking advantage of spatially local input patterns. In CNNs, each layer of neurons is really a stack of sub-layers (“feature maps”), and each neuron in a sub-layer is connected to only a small region (“receptive field”) of a single sub-layer in the preceding layer. Receptive field weights are shared across neurons within a sub-layer, thus forming a template (“convolution”) that can be interpreted as the pattern of activation that the sub-layer is trained to detect within the sub-layer in the preceding layer. A sub-layer’s convolution can be expressed as a weighted sum of different offsets of the convolution in the sub-layer in the preceding layer, and by carrying the weighted sum through all the layers down to the input layer, one can see the visual feature that the sub-layer is trained to detect within the image. Visual features of sub-layers within lower layers are usually simple, like lines and edges, whereas visual features of sub-layers within higher layers can be complex, like faces or cars.
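To make the weight sharing concrete, here is a sketch of a single convolutional sub-layer (feature map) in NumPy: every output neuron applies the same small template to its own receptive field of the input. The 3x3 vertical-edge template, valid padding, and stride of 1 are illustrative choices.

```python
import numpy as np

def conv2d(image, template):
    """Slide a shared weight template over the image: each output neuron is the
    weighted sum of the receptive field beneath it (valid padding, stride 1)."""
    H, W = image.shape
    h, w = template.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * template)
    return out

# A vertical-edge template: responds where the image changes from left to right.
edge = np.array([[1.0, 0.0, -1.0]] * 3)
image = np.zeros((6, 6))
image[:, 3:] = 1.0                 # dark left half, bright right half
print(conv2d(image, edge))         # a band of nonzero responses along the edge
```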
This post is part of the series Intuiting Predictive Algorithms.