Illustration of the architecture of the backpropagation neural network (BPNN). The BPNN iteratively adjusts the network weights, w, and the threshold of each neuron (grey circles). The learning process comprises two steps: forward propagation, in which the predicted output, O, is compared with the actual output, y, and backward propagation, in which the resulting error, E, is propagated back through the network by updating the weights using a scaled conjugate gradient algorithm. The constant η (0 < η < 1) controls the convergence rate of the algorithm. All simulations used a six-layer network trained in sequential mode with η = 0.4.
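The two-step cycle described in the caption can be sketched as follows. This is a minimal toy illustration, not the paper's model: it assumes a single hidden layer and plain gradient descent in place of the six-layer architecture and the scaled conjugate gradient algorithm, and the data and network sizes are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: learn a logical AND mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

eta = 0.4                       # learning rate, as in the caption
W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights, w
b1 = np.zeros((1, 4))           # hidden thresholds (biases)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))           # output threshold

def train_step():
    global W1, b1, W2, b2
    # Forward propagation: compute the predicted output, O.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    # Error, E, from comparing O with the actual output, y.
    E = 0.5 * np.sum((O - y) ** 2)

    # Backward propagation: push E back through the layers.
    dO = (O - y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)

    # Update weights and thresholds, scaled by eta.
    W2 -= eta * H.T @ dO
    b2 -= eta * dO.sum(axis=0, keepdims=True)
    W1 -= eta * X.T @ dH
    b1 -= eta * dH.sum(axis=0, keepdims=True)
    return E

losses = [train_step() for _ in range(5000)]
```

Repeating `train_step` drives the error downward, mirroring the repeated weight adjustment the caption describes; the actual study replaces the naive update above with a scaled conjugate gradient step.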