Backpropagation algorithm in a neural network: XOR example

For better understanding, the backpropagation learning algorithm can be divided into two phases: propagation and weight update.
Phase 1: Propagation
Each propagation involves the following steps:
Forward propagation of a training pattern's input through the neural network in order to generate the output activations.
Backward propagation of those output activations through the network, using the training pattern's target, in order to generate the deltas of all output and hidden neurons.


Phase 2: Weight update
For each weight (synapse), follow these steps:
Multiply its output delta and input activation to get the gradient of the weight.
Move the weight in the opposite direction of the gradient by subtracting a ratio of the gradient from the weight.
This ratio influences the speed and quality of learning; it is called the learning rate. The sign of a weight's gradient indicates where the error is increasing; this is why the weight must be updated in the opposite direction, as sketched below.
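
A minimal sketch of this update rule for a single weight, assuming the delta and input activation are already known (the function and parameter names are illustrative, not from the original text):

```python
def update_weight(weight, output_delta, input_activation, learning_rate=0.25):
    # Phase 2: the gradient of the error w.r.t. this weight is delta * activation
    gradient = output_delta * input_activation
    # Step in the opposite direction of the gradient, scaled by the learning rate
    return weight - learning_rate * gradient
```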

Repeat phases 1 and 2 until the performance of the network is satisfactory.
There are two modes of learning to choose from: on-line (incremental) learning and batch learning. In on-line learning, each propagation is followed immediately by a weight update. In batch learning, many propagations occur before the weights are updated. Batch learning requires more memory, but on-line learning requires more weight updates.
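
A rough structural sketch of the two modes, assuming a hypothetical compute_gradients helper that runs phases 1 and 2 and returns one gradient per weight (the patterns and starting weights mirror the XOR example below):

```python
# Illustrative stand-ins; a real implementation would compute these with backpropagation.
training_patterns = [((0, 1), 0), ((1, 1), 1)]
weights = [0.62, 0.42, 0.55, -0.17, 0.35, 0.81]
learning_rate = 0.25

def compute_gradients(weights, pattern):
    """Hypothetical helper: run phases 1 and 2, return one gradient per weight."""
    return [0.0] * len(weights)   # placeholder values, shown for structure only

# On-line (incremental) learning: update the weights immediately after every pattern.
for pattern in training_patterns:
    gradients = compute_gradients(weights, pattern)
    weights = [w - learning_rate * g for w, g in zip(weights, gradients)]

# Batch learning: accumulate the gradients over all patterns, then update once.
totals = [0.0] * len(weights)
for pattern in training_patterns:
    gradients = compute_gradients(weights, pattern)
    totals = [t + g for t, g in zip(totals, gradients)]
weights = [w - learning_rate * t for w, t in zip(weights, totals)]
```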

Example - XOR

The sigmoid activation function is: f(x) = 1 / (1 + e^(-x))
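
In Python, this activation function could be written as:

```python
import math

def sigmoid(x):
    """Sigmoid activation: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# For example: sigmoid(0.55) ≈ 0.634135591, sigmoid(-0.17) ≈ 0.457602059
```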

Patterns to be learned:
input    target
0 1        0
1 1        1
First, the weight values are set to random values: 0.62, 0.42, 0.55, -0.17 for weight matrix 1 (input to hidden layer) and 0.35, 0.81 for weight matrix 2 (hidden to output layer).
The learning rate of the net is set to 0.25.
Next, the values of the first input pattern (0 1) are applied to the neurons of the input layer (the output of the input layer is the same as its input).
The neurons in the hidden layer are activated:
Input of hidden neuron 1: 0 * 0.62 + 1 * 0.55 = 0.55
Input of hidden neuron 2: 0 * 0.42 + 1 * (-0.17) = -0.17
Output of hidden neuron 1: 1 / ( 1 + exp(-0.55) ) = 0.634135591
Output of hidden neuron 2: 1 / ( 1 + exp(+0.17) ) = 0.457602059
The neurons in the output layer are activated:
Input of output neuron: 0.634135591 * 0.35 + 0.457602059 * 0.81 = 0.592605124
Output of output neuron: 1 / ( 1 + exp(-0.592605124) ) = 0.643962658
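
This forward pass can be reproduced with a short Python sketch (the names w1..w6, hidden1_out, and so on are just illustrative labels for the values listed above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# First input pattern (0 1) and the initial weights from the text
x1, x2 = 0.0, 1.0
w1, w2, w3, w4 = 0.62, 0.42, 0.55, -0.17   # weight matrix 1 (input -> hidden)
w5, w6 = 0.35, 0.81                        # weight matrix 2 (hidden -> output)

hidden1_in = x1 * w1 + x2 * w3             # 0.55
hidden2_in = x1 * w2 + x2 * w4             # -0.17
hidden1_out = sigmoid(hidden1_in)          # ≈ 0.634135591
hidden2_out = sigmoid(hidden2_in)          # ≈ 0.457602059

output_in = hidden1_out * w5 + hidden2_out * w6   # ≈ 0.592605124
output_out = sigmoid(output_in)                   # ≈ 0.643962658
```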
Compute an error value by subtracting the output from the target: 0 - 0.643962658 = -0.643962658
Now that we have the output error, let's do the backpropagation.
We start with changing the weights in weight matrix 2:
Value for changing weight 1: 0.25 * (-0.643962658) * 0.634135591 * 0.643962658 * (1-0.643962658) = -0.023406638
Value for changing weight 2: 0.25 * (-0.643962658) * 0.457602059 * 0.643962658 * (1-0.643962658) = -0.016890593
Change weight 1: 0.35 + (-0.023406638) = 0.326593362
Change weight 2: 0.81 + (-0.016890593) = 0.793109407
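
Continuing the same sketch, the weight matrix 2 update comes out to the same numbers:

```python
target = 0.0
learning_rate = 0.25
error = target - output_out                        # ≈ -0.643962658

# Output neuron: the error is scaled by the sigmoid derivative out * (1 - out)
output_grad = output_out * (1.0 - output_out)

dw5 = learning_rate * error * hidden1_out * output_grad   # ≈ -0.023406638
dw6 = learning_rate * error * hidden2_out * output_grad   # ≈ -0.016890593

w5 = w5 + dw5   # 0.35 -> ≈ 0.326593362
w6 = w6 + dw6   # 0.81 -> ≈ 0.793109407
```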
Now we will change the weights in weight matrix 1:
Value for changing weight 1: 0.25 * (-0.643962658) * 0 * 0.634135591 * (1-0.634135591) = 0
Value for changing weight 2: 0.25 * (-0.643962658) * 0 * 0.457602059 * (1-0.457602059) = 0
Value for changing weight 3: 0.25 * (-0.643962658) * 1 * 0.634135591 * (1-0.634135591) = -0.037351064
Value for changing weight 4: 0.25 * (-0.643962658) * 1 * 0.457602059 * (1-0.457602059) = -0.039958271
Change weight 1: 0.62 + 0 = 0.62 (not changed)
Change weight 2: 0.42 + 0 = 0.42 (not changed)
Change weight 3: 0.55 + (-0.037351064) = 0.512648936
Change weight 4: -0.17 + (-0.039958271) = -0.209958271
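
And the weight matrix 1 update, transcribed from the example above (note that this example scales the raw output error by each hidden neuron's own sigmoid derivative and its input; a full textbook derivation would also propagate the output delta back through weight matrix 2, so treat this as a transcription of the example's rule rather than the general formula):

```python
dw1 = learning_rate * error * x1 * hidden1_out * (1.0 - hidden1_out)   # 0 (input is 0)
dw2 = learning_rate * error * x1 * hidden2_out * (1.0 - hidden2_out)   # 0 (input is 0)
dw3 = learning_rate * error * x2 * hidden1_out * (1.0 - hidden1_out)   # ≈ -0.037351064
dw4 = learning_rate * error * x2 * hidden2_out * (1.0 - hidden2_out)   # ≈ -0.039958271

w1 = w1 + dw1   # 0.62  (unchanged)
w2 = w2 + dw2   # 0.42  (unchanged)
w3 = w3 + dw3   # 0.55  -> ≈ 0.512648936
w4 = w4 + dw4   # -0.17 -> ≈ -0.209958271
```

From here, the process repeats with the next training pattern (1 1), and then again over many passes, until the network's error is acceptably small.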
