Artificial Neural Network Example

This illustrates how a neural network with two input neurons, two hidden neurons, and two output neurons implements the "forward pass" (a prediction) and the "backward pass" (so-called backpropagation). [br][br]The input neurons (training data), [math]i_1[/math] and [math]i_2[/math], are on the left, and the output neurons (labels), [math]o_1[/math] and [math]o_2[/math], are on the right. The biases, [math]b_1[/math] and [math]b_2[/math], are attached to the hidden and output layers. The weights, [math]w_i[/math], are attached to segments as appropriate, and you can adjust them with the sliders. [br][br]The forward pass (and its associated error) is calculated dynamically from the inputs, weights, biases, and logistic activations. Roughly speaking, information from the left moves "forward" to the right. When information flows through the weights, [math]w_i[/math], the function applied is affine, [math]w_ix+w_ky+b_i[/math]. Every other left-to-right step is either the logistic activation [math]\frac{1}{1+e^{-x}}[/math] or an error calculation between the calculated output, [math]out_{oi}[/math], and the target, [math]o_i[/math], as noted by the labels in the network. (A short code sketch of this forward pass is included below the description.) [br][br]The "backward pass" requires action from you: simply press "Backpropagate". When you do so, the partial derivatives of [math]E_{total}[/math] with respect to the weights, [math]w_i[/math], are calculated in the background. These partial derivatives are then used to "navigate" to a slightly "better" selection of weights (in the sense that they reduce the error) by way of so-called gradient descent. Although the details are a little tricky (a link to a full explanation is below, and a second code sketch of one backpropagation step also follows the description), the effect is to improve the calculated outputs and thereby reduce [math]E_{total}[/math], though probably only by a little bit.[br][br]Pay very close attention to this little bit of error reduction by way of backpropagation. Reduction in [math]E_{total}[/math] in response to backpropagation is the [b]whole point of an artificial neural network[/b] structure. This reduction in the error is the network "learning" to select better weights to predict the outputs from the given inputs. At first the results aren't that impressive, but with repeated backpropagation the network can learn a fairly good set of weights. [br][br]In case it helps you think this through, the weight segments are color-coded and thickened based on their values. Green indicates a positive weight; red indicates a negative one. Thicker segments have larger absolute value. [br][br]All the gory details of the arithmetic behind the forward pass and the backward pass (and the partial derivatives!) can be found in this truly wonderful article: [url=https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/]https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/[/url] [br][br]If you download the source file and dig into it, all partial derivatives implementing backpropagation are named with a subscript convention that matches the notation of the article above.[br][br]Enjoy! If you want to talk more, feel free to email me at greg.petrics [at] northernvermont.edu
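
For readers who prefer code to prose, here is a minimal Python sketch of the forward pass described above. It assumes the weights are wired as in the linked article (w1 through w8, with b1 shared by the hidden layer and b2 by the output layer); the applet's internal naming and wiring may differ slightly.

import math

def sigmoid(x):
    # logistic activation used throughout the network: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def forward_pass(i1, i2, w, b1, b2):
    # hidden layer: affine combination of the inputs, then the logistic
    net_h1 = w["w1"] * i1 + w["w2"] * i2 + b1
    net_h2 = w["w3"] * i1 + w["w4"] * i2 + b1
    out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)
    # output layer: affine combination of the hidden outputs, then the logistic
    net_o1 = w["w5"] * out_h1 + w["w6"] * out_h2 + b2
    net_o2 = w["w7"] * out_h1 + w["w8"] * out_h2 + b2
    out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)
    return out_h1, out_h2, out_o1, out_o2

def total_error(out_o1, out_o2, o1, o2):
    # E_total: sum of the squared-error terms for the two output neurons
    return 0.5 * (o1 - out_o1) ** 2 + 0.5 * (o2 - out_o2) ** 2

With the worked numbers from the article (inputs 0.05 and 0.10, targets 0.01 and 0.99, weights 0.15 through 0.55, biases 0.35 and 0.60), this gives roughly out_o1 ≈ 0.751, out_o2 ≈ 0.773 and E_total ≈ 0.298 before any backpropagation.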
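
And here is a minimal sketch, under the same assumptions, of what one press of "Backpropagate" does: compute the partial derivatives of E_total with respect to each weight via the chain rule, then take one gradient-descent step. The learning rate eta = 0.5 matches the article; the value the applet actually uses is an assumption on my part.

def backpropagate(i1, i2, o1, o2, w, b1, b2, eta=0.5):
    out_h1, out_h2, out_o1, out_o2 = forward_pass(i1, i2, w, b1, b2)

    # output-neuron error signals: dE_total/d(net_o) = (out - target) * out * (1 - out),
    # using the identity sigmoid'(net) = out * (1 - out)
    d_o1 = (out_o1 - o1) * out_o1 * (1 - out_o1)
    d_o2 = (out_o2 - o2) * out_o2 * (1 - out_o2)

    # output-layer weights: dE_total/dw = error signal * hidden output feeding that weight
    grads = {"w5": d_o1 * out_h1, "w6": d_o1 * out_h2,
             "w7": d_o2 * out_h1, "w8": d_o2 * out_h2}

    # hidden-neuron error signals: each hidden neuron feeds both outputs,
    # so its signal sums the contributions passed back through w5..w8
    d_h1 = (d_o1 * w["w5"] + d_o2 * w["w7"]) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w["w6"] + d_o2 * w["w8"]) * out_h2 * (1 - out_h2)

    # hidden-layer weights: dE_total/dw = error signal * input feeding that weight
    grads.update({"w1": d_h1 * i1, "w2": d_h1 * i2,
                  "w3": d_h2 * i1, "w4": d_h2 * i2})

    # gradient descent: nudge every weight a little "downhill" in E_total
    return {k: w[k] - eta * grads[k] for k in w}

Pressing "Backpropagate" over and over corresponds to looping this update, for example:

w = {"w1": 0.15, "w2": 0.20, "w3": 0.25, "w4": 0.30,
     "w5": 0.40, "w6": 0.45, "w7": 0.50, "w8": 0.55}
for _ in range(10000):
    w = backpropagate(0.05, 0.10, 0.01, 0.99, w, 0.35, 0.60)

After many such steps E_total falls from roughly 0.298 to nearly zero, which is exactly the "learning" described above. Like the description (and the article's main derivation), this sketch adjusts only the weights and leaves the biases fixed.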
