Two implementations of simple math equation yield different results

Discussion in 'C++' started by Avi, May 13, 2008.

  1. Avi

    Avi Guest

    I need to implement the following calculation:
    f = (a*b) + (c*d)
    where a,b,c,d are given double values and f is a double variable for
    the result

    I found out that using two different implementations gives two
    different results:
    1) In the first implementation I compute the entire equation in memory
    2) In the second implementation I store the intermediate results in
    variables, and then load them to compute the final result.
    The results differ by a tiny amount (1E-15). I suspect that the reason
    for the difference is round-off error introduced by the compiler.
    Nonetheless, I need to know the reason for the difference as I need to
    have an absolute reference for optimization work.

    The two ways are as follows:

    1)
    Calculating two double values and then summing them to a single double
    value
    double f1 = (a*b) + (c*d);

    2)
    Calculating two double values and storing them to two double variables
    Summing the two double variables to a single double value
    double e1 = (a*b);
    double e2 = (c*d);
    double f2 = e1 + e2;

    The difference is calculated as follows:
    double diff = f2-f1;

    I expect the difference to be exactly 0, but instead I get a tiny
    value.
    Could someone explain the reason for differences in the result?

    I’m compiling with gcc
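
    Here is a minimal complete program showing what I mean. The values
    for a, b, c, d below are just ones I picked for illustration;
    whether diff comes out nonzero depends on the platform and the
    compiler flags:

    #include <cstdio>

    int main()
    {
        // Example inputs; any values whose products need rounding will do
        double a = 0.1, b = 0.3, c = 0.7, d = 0.9;

        // 1) Entire expression at once: the compiler may keep the
        //    intermediate products in extended-precision registers
        double f1 = (a * b) + (c * d);

        // 2) Intermediates stored to double variables first, which
        //    may round them to 64-bit precision before the addition
        double e1 = a * b;
        double e2 = c * d;
        double f2 = e1 + e2;

        double diff = f2 - f1;
        std::printf("diff = %g\n", diff); // may be nonzero on x87 builds
        return 0;
    }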

    Thanks
    Avi
     
    Avi, May 13, 2008
    #1

  2. Avi

    James Kanze Guest

    Re: Two implementations of simple math equation yield different results

    On May 13, 7:16 am, Avi <> wrote:
    > I need to implement the following calculation:
    > f = (a*b) + (c*d)
    > where a,b,c,d are given double values and f is a double
    > variable for the result


    > I found out that using two different implementations gives two
    > different results:
    > 1) In the first implementation I compute the entire equation in memory


    You mean in CPU registers. Variables are in memory.

    > 2) In the second implementation I store the intermediate
    > results in variables, and then load them to compute the final result.


    > The results differ by a tiny amount (1E-15). I suspect that
    > the reason for the difference is round-off error introduced
    > by the compiler.


    The C++ standard allows a compiler to use extended precision for
    intermediate results, only "forcing" the result to the final
    precision when it is assigned to a variable.
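
    (As a sketch of one way to force that rounding explicitly: making
    the intermediates volatile forces them through memory, discarding
    any extra x87 precision, at some cost in speed. With g++ there is
    also the -ffloat-store option, which has a similar effect for
    ordinary assignments. The function name below is just illustrative.)

    // Sketch: volatile intermediates are stored to memory and
    // re-read, so each product is rounded to 64-bit double.
    double sumOfProducts(double a, double b, double c, double d)
    {
        volatile double e1 = a * b;
        volatile double e2 = c * d;
        return e1 + e2;
    }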

    > Nonetheless, I need to know the reason for the difference as I
    > need to have an absolute reference for optimization work.


    > The two ways are as follows:


    > 1)
    > Calculating two double values and then summing them to a
    > single double value
    > double f1 = (a*b) + (c*d);


    All calculations may be done in extended precision.

    > 2)
    > Calculating two double values and storing them to two double variables
    > Summing the two double variables to a single double value
    > double e1 = (a*b);


    This forces the result to double.

    > double e2 = (c*d);


    As does this.

    > double f2 = e1 + e2;


    > The difference is calculated as follows:
    > double diff = f2-f1;


    > I expect the difference to be exactly 0 but instead, I get a
    > tiny value. Could someone explain the reason for differences
    > in the result?


    It's very implementation dependent.

    > I'm compiling with gcc


    More to the point, you're compiling on a machine which uses
    extended precision in its floating point unit, and its floating
    point registers. (An Intel, perhaps.)

    The reason for this freedom is speed. Forcing the results of
    each operation to be rounded to double can be done, but would
    slow things down considerably (at least on Intel). And for the
    most part, increased precision is not considered a defect.
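
    (If you want to see what your compiler is doing, the C99 macro
    FLT_EVAL_METHOD reports how floating point expressions are
    evaluated; a sketch, assuming your <cfloat> exposes it:)

    #include <cfloat>
    #include <cstdio>

    int main()
    {
        // 0: expressions evaluated in their own type (double stays double)
        // 2: expressions evaluated in long double (typical of x87 code)
        // -1: indeterminate
    #ifdef FLT_EVAL_METHOD
        std::printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    #else
        std::printf("FLT_EVAL_METHOD is not defined in this mode\n");
    #endif
        return 0;
    }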

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
     
    James Kanze, May 13, 2008
    #2

  3. Avi

    Greg Herlihy Guest

    Re: Two implementations of simple math equation yield different results

    On May 13, 2:42 am, James Kanze <> wrote:
    > On May 13, 7:16 am, Avi <> wrote:
    > > double f2 = e1 + e2;
    > > The difference is calculated as follows:
    > > double diff = f2-f1;
    > > I expect the difference to be exactly 0 but instead, I get a
    > > tiny value.  Could someone explain the reason for differences
    > > in the result?

    >
    > It's very implementation dependent.
    >
    > > I'm compiling with gcc

    >
    > More to the point, you're compiling on a machine which uses
    > extended precision in its floating point unit, and its floating
    > point registers.  (An Intel, perhaps.)
    >
    > The reason for this freedom is speed.  Forcing the results of
    > each operation to be rounded to double can be done, but would
    > slow things down considerably (at least on Intel).  And for the
    > most part, increased precision is not considered a defect.


    But the extended floating point precision is a defect if - as in this
    case - consistent numeric results are required. Fortunately, all Intel
    x86 chips since the Pentium 4 support double precision (64-bit)
    floating point in their SSE2 units. So, the solution would be to have
    the compiler generate SSE floating point instructions ("-msse2
    -mfpmath=sse") instead of 387 instructions. According to the gcc
    documentation, this strategy has other benefits as well:

    "The resulting code should be considerably faster in the majority of
    cases and avoid the numerical instability problems of 387 code, but
    may break some existing code that expects temporaries to be 80-bit."
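
    For example (just a sketch of the build line - the file name and
    -O2 are illustrative), recompiling the original test with

    g++ -O2 -msse2 -mfpmath=sse test.cpp

    should make every operation round to 64-bit double, so f1 and f2
    come out identical.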

    Greg
     
    Greg Herlihy, May 13, 2008
    #3
  4. Avi

    Avi Guest

    Re: Two implementations of simple math equation yield different results

    One way to identify and ignore small differences is to set a small
    threshold, say:
    double epsilon = 1E-6;
    and compare the difference against the threshold as follows (using
    std::fabs from <cmath>):

    if (std::fabs(diff) < epsilon)
    {
        // difference is considered noise
    }
    else
    {
        // difference is significant
    }

    I’m looking for a predefined threshold that the differences can be
    measured against.
    For example, the difference between 1 and the smallest value greater
    than 1 that is representable for the double data type is 2.22045e-16.
    I get it using: numeric_limits<double>::epsilon()

    The differences that I’m seeing between the two implementations (on
    the order of 1E-15) are bigger than the
    numeric_limits<double>::epsilon() value (2.22045e-16).

    I understand that the accumulated round off errors can be bigger than
    the numeric_limits<double>::epsilon() value.

    Is there a systematic threshold that can be used to define when the
    differences are noise and when they are significant?
    Or should I just pick a number which seems small enough to me (say
    epsilon = 1E-6)?
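
    For instance, a relative check that scales the tolerance by the
    magnitude of the operands is what I have in mind; nearly_equal
    below is a hypothetical helper, just a sketch:

    #include <algorithm>
    #include <cmath>
    #include <limits>

    // Hypothetical helper: treats the difference as noise when it is
    // small relative to the magnitudes of the values being compared.
    bool nearly_equal(double x, double y, double rel_eps)
    {
        double scale = std::max(std::fabs(x), std::fabs(y));
        return std::fabs(x - y) <= rel_eps * scale;
    }

    // e.g. nearly_equal(f1, f2, 4 * std::numeric_limits<double>::epsilon())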

    Thanks,
    Avi
     
    Avi, May 13, 2008
    #4
  5. Avi

    Avi Guest

    Re: Two implementations of simple math equation yield different results

    Thanks for your answers. They clarified the reasons for the differences
    and helped me set up a mechanism to distinguish between round-off noise
    and significant differences.

    Avi
     
    Avi, May 13, 2008
    #5
  6. Avi

    James Kanze Guest

    Re: Two implementations of simple math equation yield different results

    On May 13, 12:15 pm, Greg Herlihy <> wrote:
    > On May 13, 2:42 am, James Kanze <> wrote:


    > > On May 13, 7:16 am, Avi <> wrote:
    > > > double f2 = e1 + e2;
    > > > The difference is calculated as follows:
    > > > double diff = f2-f1;
    > > > I expect the difference to be exactly 0 but instead, I get a
    > > > tiny value. Could someone explain the reason for differences
    > > > in the result?


    > > It's very implementation dependent.


    > > > I'm compiling with gcc


    > > More to the point, you're compiling on a machine which uses
    > > extended precision in its floating point unit, and its floating
    > > point registers. (An Intel, perhaps.)


    > > The reason for this freedom is speed. Forcing the results of
    > > each operation to be rounded to double can be done, but would
    > > slow things down considerably (at least on Intel). And for the
    > > most part, increased precision is not considered a defect.


    > But the extended floating point precision is a defect if - as
    > in this case - consistent numeric results are required.


    By defect, I presume you mean with regard to what is needed,
    not according to the standard. I'm not good enough in numerics
    to argue one way or the other. Intuitively, I would agree with
    you, but I know that opinions about this differ amongst the
    specialists.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
     
    James Kanze, May 14, 2008
    #6
