Neural Networks


CoreyWhite

A friend of mine was just over at my house explaining neural networks,
and I understood them as well as I could. Here is my own explanation.

A neural network first has to run in a loop thousands of times over
its example inputs and outputs. Over those passes it gradually learns
a simple algorithm that maps the inputs to the outputs, using a
networked matrix of numbers (the weights) that get multiplied with the
input, run through a filter, and compared against the desired output.

The algorithm to train a neural network looks like this:

1. Start off with a random set of weights.

2. Provide the input (input layer).

3. Multiply the inputs by the random weights, plus a bias weight that
always receives a constant input of 1 (hidden layer).

4. Run each weighted sum through a sigmoid function as a filter
(there's a worked example after this list).

5. Take the filtered hidden-layer outputs and weight them one more
time to produce the final value (output layer).

6. Now we check how much the network's output disagrees with the
provided target output, then loop back through the weights, nudging
each one toward a better fit.
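The sigmoid in step 4 is sigmoid(x) = 1 / (1 + e^(-x)). It squashes
any number into the range between 0 and 1: sigmoid(0) = 0.5,
sigmoid(4) ≈ 0.982, and sigmoid(-4) ≈ 0.018. Big positive sums get
pushed toward 1, big negative sums toward 0, and the curve is smooth,
which matters later when we need its slope to compute the corrections.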

So each time we run this function, the network gets a little bit
better. For a problem like this one, each output ultimately drifts
either closer to zero or closer to one, and at the final output layer
the network settles on the closest it can get to 0 or 1.

IT'S REALLY COOL! The program below makes its own binary XOR
function.


#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <iostream>

using namespace std;

#define BPM_ITER 2000
#define BP_LEARNING (float)(0.5) // The learning coefficient.

class CBPNet {
public:
    CBPNet();
    ~CBPNet() {}

    float Train(float, float, float);
    float Run(float, float);

private:
    float m_fWeights[3][3]; // Weights for the 3 neurons.

    float Sigmoid(float); // The sigmoid function.
};

CBPNet::CBPNet() {
    srand((unsigned)(time(NULL)));

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            // rand() returns an integer between 0 and RAND_MAX.
            // Dividing by RAND_MAX/2 gives a number between 0 and 2;
            // subtracting one gives a number between -1 and 1.
            m_fWeights[i][j] = (float)(rand()) / (RAND_MAX / 2) - 1;
        }
    }
}

float CBPNet::Train(float i1, float i2, float d) {
    // These are all the main variables used in the
    // routine. Seems easier to group them all here.
    float net1, net2, i3, i4, out;

    // Calculate the net values for the hidden layer neurons.
    net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] +
           i2 * m_fWeights[2][0];
    net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] +
           i2 * m_fWeights[2][1];

    // Use the hard-limiter function - the Sigmoid.
    i3 = Sigmoid(net1);
    i4 = Sigmoid(net2);

    // Now, calculate the net for the final output layer.
    net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] +
           i4 * m_fWeights[2][2];
    out = Sigmoid(net1);

    // We have to calculate the deltas for the two layers.
    // Remember, we have to calculate the errors backwards
    // from the output layer to the hidden layer (thus the
    // name 'BACK-propagation').
    float deltas[3];

    deltas[2] = out * (1 - out) * (d - out);
    deltas[1] = i4 * (1 - i4) * (m_fWeights[2][2]) * (deltas[2]);
    deltas[0] = i3 * (1 - i3) * (m_fWeights[1][2]) * (deltas[2]);

    // Now, alter the weights accordingly.
    float v1 = i1, v2 = i2;
    for (int i = 0; i < 3; i++) {
        // Change the values for the output layer, if necessary.
        if (i == 2) {
            v1 = i3;
            v2 = i4;
        }

        m_fWeights[0][i] += BP_LEARNING * 1 * deltas[i];
        m_fWeights[1][i] += BP_LEARNING * v1 * deltas[i];
        m_fWeights[2][i] += BP_LEARNING * v2 * deltas[i];
    }

    return out;
}

float CBPNet::Sigmoid(float num) {
    return (float)(1 / (1 + exp(-num)));
}

float CBPNet::Run(float i1, float i2) {
    // The same forward pass as in Train(), without the weight
    // update; see there for the necessary documentation.
    float net1, net2, i3, i4;

    net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] +
           i2 * m_fWeights[2][0];
    net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] +
           i2 * m_fWeights[2][1];

    i3 = Sigmoid(net1);
    i4 = Sigmoid(net2);

    net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] +
           i4 * m_fWeights[2][2];
    return Sigmoid(net1);
}

int main() {
    CBPNet bp;

    for (int i = 0; i < BPM_ITER; i++) {
        bp.Train(0, 0, 0);
        bp.Train(0, 1, 1);
        bp.Train(1, 0, 1);
        bp.Train(1, 1, 0);
    }

    cout << "0,0 = " << bp.Run(0, 0) << endl;
    cout << "0,1 = " << bp.Run(0, 1) << endl;
    cout << "1,0 = " << bp.Run(1, 0) << endl;
    cout << "1,1 = " << bp.Run(1, 1) << endl;

    return 0;
}
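If you want to try it yourself: it's a single source file, so
something like g++ xor.cpp -o xor should build it (the file name is
just a guess; use whatever you saved it as). After the 2,000 training
passes, the four printed values won't be exactly 0 and 1, but they
should usually land close, roughly under 0.1 for the 0 cases and over
0.9 for the 1 cases. The exact numbers change from run to run because
the starting weights are random, and once in a while backprop on XOR
gets stuck, in which case just run it again.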
 

CoreyWhite

It looks neurological ... hey, look at this free book: http://www.relisoft.com/book/index.htm

The book looks fine, but I am on an iMac. I even have my iMac
keyboard plugged in today! I like it better than my IBM keyboard
now. You can't type as quickly on an iMac keyboard, but it is easier
to avoid typos.

I'm not sure if you understand this neural network yet. It begins
with a random matrix of numbers between -1 and 1. These are the
weights that form the basis of the algorithm. The basic algorithm
just multiplies the matrix of weights with our input and then runs
the result through a filter. The sigmoid filter can take even random
information and squash it into the range between 0 and 1, so
everything gets closer to our end goal. In the end, the values are
just adjusted closer to what the output should be by a neat little
delta function, which is driven by the difference between the output
the network produced and the output we know we want.
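To make the delta concrete with numbers: for the output neuron the
code computes delta = out * (1 - out) * (d - out), and each incoming
weight moves by BP_LEARNING * input * delta. Say the target d is 1
and the network currently outputs out = 0.6; then
delta = 0.6 * 0.4 * 0.4 = 0.096, and a weight whose input was 1 gets
bumped up by 0.5 * 0.096 = 0.048. Run that thousands of times and the
output creeps toward 1.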

The final result is a fuzzy algorithm that can take a guess at the
right answer, even when it is given input it hasn't seen before.
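For example, you could add a line like this at the end of main()
above. The network was never trained on in-between inputs, but since
0.9,0.1 sits close to the 1,0 case, I'd expect the output to come out
near 1 (that's my expectation from how the sigmoid behaves, not a
logged run):

cout << "0.9,0.1 = " << bp.Run(0.9f, 0.1f) << endl;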
 
