Neural Networks

Discussion in 'C++' started by CoreyWhite, Apr 7, 2007.

  1. CoreyWhite

    CoreyWhite Guest

    A friend of mine was just over at my house explaining neural networks,
    and I understood them as well as I could. Here is my own explanation.

    A neural network first has to run in a loop thousands of times over
    its example inputs and desired outputs. It gradually learns the
    simplest rule that maps those inputs to those outputs, using a
    networked matrix of weights: the weighted sums are run through a
    filter, and the result is compared against the desired output.

    The algorithm to train a neural network looks like this:

    Start off with random weights.

    Provide the inputs (input layer).

    Multiply the inputs by the random weights, plus a constant bias
    weight. (hidden layer)

    Run each weighted sum through a sigmoid function as a filter.

    Take the filtered hidden-layer outputs and weight them once more.
    (Output layer).

    Now calculate how much the network's output disagrees with the
    provided output.
    Then loop through the weights, nudging each one toward a better fit.

    So each time we run this function the network gets a little bit
    better. Ultimately the output is pushed either closer to zero or
    closer to one, and the final output layer settles on the closest
    values to 0 and 1 it can reach.
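
    In symbols (these match the code below), with learning coefficient
    \eta, desired output d, actual output o, hidden activation h, and the
    input x feeding a weight, the filter, the error deltas, and the
    weight update are:

    \sigma(z) = \frac{1}{1 + e^{-z}}
    \delta_{\mathrm{out}} = o(1 - o)(d - o)
    \delta_{\mathrm{hid}} = h(1 - h)\, w_{\mathrm{hid}\to\mathrm{out}}\, \delta_{\mathrm{out}}
    \Delta w = \eta\, x\, \delta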

    IT'S REALLY COOL! The program below learns its own binary XOR
    function.


    #include <cmath>
    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    using namespace std;

    #define BPM_ITER 2000
    #define BP_LEARNING (float)(0.5) // The learning coefficient.

    class CBPNet {
    public:
        CBPNet();
        ~CBPNet() {}

        float Train(float, float, float);
        float Run(float, float);

    private:
        float m_fWeights[3][3]; // Weights for the 3 neurons.

        float Sigmoid(float); // The sigmoid function.
    };

    CBPNet::CBPNet() {
        srand((unsigned)(time(NULL)));

        for (int i=0;i<3;i++) {
            for (int j=0;j<3;j++) {
                // rand() returns an integer in [0, RAND_MAX]. Dividing
                // by RAND_MAX/2 gives a number between 0 and 2; then
                // subtracting one gives a number between -1 and 1.
                m_fWeights[i][j] = (float)(rand())/(RAND_MAX/2) - 1;
            }
        }
    }

    float CBPNet::Train(float i1, float i2, float d) {
        // These are all the main variables used in the
        // routine. Seems easier to group them all here.
        float net1, net2, i3, i4, out;

        // Calculate the net values for the hidden layer neurons.
        // The constant 1 is the bias input.
        net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] +
               i2 * m_fWeights[2][0];
        net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] +
               i2 * m_fWeights[2][1];

        // Use the hardlimiter function - the Sigmoid.
        i3 = Sigmoid(net1);
        i4 = Sigmoid(net2);

        // Now, calculate the net for the final output layer.
        net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] +
               i4 * m_fWeights[2][2];
        out = Sigmoid(net1);

        // We have to calculate the deltas for the two layers.
        // Remember, we have to calculate the errors backwards
        // from the output layer to the hidden layer (thus the
        // name 'BACK-propagation').
        float deltas[3];

        deltas[2] = out*(1-out)*(d-out);
        deltas[1] = i4*(1-i4)*(m_fWeights[2][2])*(deltas[2]);
        deltas[0] = i3*(1-i3)*(m_fWeights[1][2])*(deltas[2]);

        // Now, alter the weights accordingly.
        float v1 = i1, v2 = i2;
        for (int i=0;i<3;i++) {
            // For the output layer (i == 2), the inputs are the
            // hidden-layer activations rather than the raw inputs.
            if (i == 2) {
                v1 = i3;
                v2 = i4;
            }

            m_fWeights[0][i] += BP_LEARNING*1*deltas[i];
            m_fWeights[1][i] += BP_LEARNING*v1*deltas[i];
            m_fWeights[2][i] += BP_LEARNING*v2*deltas[i];
        }

        return out;
    }

    float CBPNet::Sigmoid(float num) {
        return (float)(1/(1+exp(-num)));
    }

    float CBPNet::Run(float i1, float i2) {
        // This is the same forward pass as in Train(), without
        // the weight updates; see there for documentation.
        float net1, net2, i3, i4;

        net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] +
               i2 * m_fWeights[2][0];
        net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] +
               i2 * m_fWeights[2][1];

        i3 = Sigmoid(net1);
        i4 = Sigmoid(net2);

        net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] +
               i4 * m_fWeights[2][2];
        return Sigmoid(net1);
    }

    int main() {
        CBPNet bp;

        for (int i=0;i<BPM_ITER;i++) {
            // One pass over the full XOR truth table per iteration.
            bp.Train(0,0,0);
            bp.Train(0,1,1);
            bp.Train(1,0,1);
            bp.Train(1,1,0);
        }

        cout << "0,0 = " << bp.Run(0,0) << endl;
        cout << "0,1 = " << bp.Run(0,1) << endl;
        cout << "1,0 = " << bp.Run(1,0) << endl;
        cout << "1,1 = " << bp.Run(1,1) << endl;

        return 0;
    }
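
    If all goes well, after the 2,000 training passes the four outputs
    land near 0, 1, 1, 0. A net this small can occasionally get stuck
    with an unlucky random start; if the outputs look wrong, just run it
    again.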
     
    CoreyWhite, Apr 7, 2007
    #1

  2. David Harmon

    David Harmon Guest

    On 6 Apr 2007 22:07:13 -0700 in comp.lang.c++, "CoreyWhite"
    <> wrote,
    >Newsgroups: alt.magick,alt.native,comp.lang.c++,alt.2600
     
    David Harmon, Apr 7, 2007
    #2

  3. boson boss

    boson boss Guest

    boson boss, Apr 7, 2007
    #3
  4. CoreyWhite

    CoreyWhite Guest

    On Apr 7, 5:55 am, "boson boss" <> wrote:
    > It looks neurological ... hey look at this free book: http://www.relisoft.com/book/index.htm


    The book looks fine, but I am on an iMac. I even have my iMac
    keyboard plugged in today! I like it better than my IBM keyboard
    now. You can't type as quickly on an iMac keyboard, but it is easier
    to avoid typos.

    I'm not sure if you understand this neural network yet. It begins
    with a random matrix of numbers between -1 and 1. These are the
    weights that form the basis of the algorithm. The basic algorithm
    just multiplies the matrix of weights with our input and then runs
    the result through a filter. The sigmoid filter can take even random
    information and squash it into the range from 0 to 1, so everything
    gets closer to our end goal. In the end the weights are adjusted
    toward the output we want by a little delta rule, which nudges each
    weight in proportion to the difference between the actual output and
    the output we know we want.
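
    Here's a tiny standalone sketch of just that sigmoid squashing (the
    same function the class above uses, nothing else assumed):

    #include <cmath>
    #include <iostream>
    using namespace std;

    // The sigmoid maps any real number into the open interval (0, 1).
    float sigmoid(float x) { return 1.0f / (1.0f + exp(-x)); }

    int main() {
        cout << sigmoid(-5.0f) << endl; // ~0.0067, pushed toward 0
        cout << sigmoid(0.0f) << endl;  // exactly 0.5, the midpoint
        cout << sigmoid(5.0f) << endl;  // ~0.9933, pushed toward 1
        return 0;
    }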

    The final result is a fuzzy algorithm that can take a guess at the
    right answer, even when it is given inputs it has never seen before.
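
    To see that, you could reuse the CBPNet class from the first post and
    feed the trained net noisy inputs that were never in the training set
    (a hypothetical couple of lines appended after the training loop in
    main()):

    // Inputs near (0,1) should still map toward 1, and inputs
    // near (1,1) toward 0, even though the net never saw them.
    cout << "0.1,0.9 = " << bp.Run(0.1f,0.9f) << endl;
    cout << "0.9,0.9 = " << bp.Run(0.9f,0.9f) << endl;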
     
    CoreyWhite, Apr 7, 2007
    #4
