This is a single-layer neural network with three input neurons and the sign
activation function. The set of inputs consists of a bias term, which always fires one, and two input variables. Learning starts from random weights on the inputs. A row (pattern) of the truth table is then chosen at random. If the output of the network is correct, the weights are not adjusted.
In case the output does not match the true value, the weights are corrected according to the following rule:

w_i ← w_i + η(y − ŷ)x_i

where η is the learning rate, x_i is the input, w_i is the input's weight, ŷ is the output given by the network, and y is the true output of the logical operator given the supplied input set.
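As a worked instance of the update rule, the following sketch applies one correction step; the numeric values of eta, x, w, and y are hypothetical, chosen only for illustration.

```python
# One application of the perceptron update rule (all numbers illustrative).
eta = 0.1                    # learning rate
x = [1, 1, 0]                # bias input (always 1) plus the two variables
w = [0.2, -0.5, 0.3]         # current weights
y = 1                        # true output of the logical operator

s = sum(wi * xi for wi, xi in zip(w, x))   # weighted sum: -0.3
y_hat = 1 if s >= 0 else -1                # sign activation: network outputs -1
if y_hat != y:                             # wrong answer, so adjust the weights
    w = [wi + eta * (y - y_hat) * xi for wi, xi in zip(w, x)]
# Only the weights of active inputs move; each shifts by eta * (y - y_hat).
print(w)
```

Note that the weight on the third input does not change, because that input fired zero in this pattern.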
The cycle then starts again and another pattern is chosen. There is no convergence criterion; instead, a stopping rule of two hundred iterations is used.
The process of convergence of the weights on the inputs is shown at the top. The bottom-right graph illustrates a hyperplane where red and black dots mark false and true statements, respectively. The line separates the red dots from the black ones, showing that the weights are estimated correctly (except in the case of XOR, which is not linearly separable).
The truth table is also shown, with a first column added for the bias term; the last column is the output of the logical operator applied to the two inputs.
The bottom-left graph shows the structure of the network together with the estimated weights.