C99 float variants of math.h functions

  • Thread starter Gustavo Rondina

Gustavo Rondina

Hello CLC.

The C99 standard requires several of the usual math functions to have
both float and long double variants, such as sqrtf, cosf, sinf and so on.
What is the rationale behind the existence of those float variants? For
years programmers have used the usual C89 functions (i.e., double only)
even when working with floats, so why has C99 introduced those variants?
Are there any advantages?

I would guess that using a float-specific function would allow the
implementation to take advantage of particular single-precision hardware
and thus make the function faster, but this is just speculation on my
part.

Regarding correctness, I suspect that the regular double functions and
the float variants should return the same results no matter what, but is
it possible to guarantee that? I mean, since the usual C89 functions
perform the operations in double precision and the result is then cast
back to single precision when one is working with floats, how can one be
sure that the casts to and from double won't mess with the bits of the
input and output values?

In order to try to answer my doubts concerning this topic I've written
two small programs, which follow.

$ cat test1.c
#include <assert.h>
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    const unsigned int n = 10000000;
    unsigned int i;
    float a, b, c;

    a = 3.40282347e+38F; /* max 32 bit float */
    for (i = 0; i < n; ++i) {
        b = sqrt(a);
        c = sqrtf(a);
        assert(b == c);
        a = b;
    }

    return 0;
}

This one just checks if the results of both sqrt and sqrtf are exactly
the same using the == operator. I know comparing floats for equality like
this is dangerous, but in this case it does precisely what I want, which
is checking if the results of both functions are exactly the same down to
the last bit.

Compiling and running the program, everything seems OK:

$ gcc -Wall -pedantic -std=c99 -o test1 test1.c -lm
$ ./test1
$

So I assume both functions will return the same result, but I don't know
whether that is just a coincidence of my implementation or always the case.

The second program is a naive way of checking whether there are any
differences between the execution time of both functions.

$ cat test2.c
#include <assert.h>
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    const unsigned int n = 200000000;
    unsigned int i;
    float a, b;

    assert(argc == 2);

    a = 1.618033987F;
    if (*argv[1] == 'f') {
        printf("using sqrtf\n");
        for (i = 0; i < n; ++i)
            b = sqrtf(a);
    } else {
        printf("using sqrt\n");
        for (i = 0; i < n; ++i)
            b = sqrt(a);
    }

    return 0;
}

Compiling and timing the executions:

$ gcc -Wall -pedantic -std=c99 -o test2 test2.c -lm
$ time ./test2 f
using sqrtf

real 0m9.039s
user 0m8.806s
sys 0m0.050s

$ time ./test2 d
using sqrt

real 0m8.656s
user 0m8.459s
sys 0m0.010s


The double version seems to be a little bit faster, but I don't know
exactly why. Again, this is probably specific to my implementation, but
it shows that sqrt and sqrtf may be implemented in different ways, even
though they yield the same results. Is that correct?

I don't know if I made myself clear, but I hope any of you guys can shed
some light on this.

Cheers,
 

Tim Prince

Gustavo said:
$ gcc -Wall -pedantic -std=c99 -o test2 test2.c -lm
$ time ./test2 f
using sqrtf

real 0m9.039s
user 0m8.806s
sys 0m0.050s

$ time ./test2 d
using sqrt

real 0m8.656s
user 0m8.459s
sys 0m0.010s

It looks like you asked for 387 code (guessing about your gcc target),
in which case the same x87 square-root instruction ends up behind both
sqrt() and sqrtf(). If you intend always to use the 387, as you might
have up to 9 or 10 years ago, you may be right that the distinction is
moot as far as you are concerned. People concerned about performance
would normally choose sufficient optimization to get in-line sqrt
instructions; no doubt my pointing that out makes this even less topical
for c.l.c.
 

Eric Sosman

Gustavo said:
Hello CLC.

The C99 standard requires several of the usual math functions to have
both float and long double variants, such as sqrtf, cosf, sinf and so on.
What is the rationale behind the existence of those float variants? For
years programmers have used the usual C89 functions (i.e., double only)
even when working with floats, so why has C99 introduced those variants?
Are there any advantages?

The authors of "Numerical Recipes in C" write passionately
and bitterly on the topic of C's habit of up-converting everything
to double. They've got some of the details slightly askew and
some of what they say is now out-of-date (and their grasp of C
isn't all that firm), but they're worth a read anyhow. An older
edition of the book is available on-line; you might want to have
a look. Here's one of the spicier bits:

One does not need much experience in scientific computing
to recognize that the implicit conversion rules are, in
fact, sheer madness! In effect, they make it impossible
to write efficient numerical programs. One of the
cultural barriers that separates computer scientists from
"regular" scientists and engineers is a differing point
of view on whether a 30% or 50% loss of speed is worth
worrying about. [...] The practical scientist is trying
to solve tomorrow's problem with yesterday's computer;
the computer scientist, we think, often has it the
other way around.

Gustavo said:
Regarding the correctness, I suspect that both the regular double
functions and the float variants should return the same results no matter
what, but is it possible to guarantee that?

Why would you expect such a thing? If double has more
precision than float, exp(1) will be unequal to expf(1) --
and that's not a bug, but a Good Thing.

As to your speed test (code snipped; see up-thread), it
seems likely that the square root implementations on your system
use a hardware-provided instruction to compute the root, and
thus show little speed difference. Perhaps a more involved
function like arctangent, say, would show up differently. An
implementation for the coarser precision of float might use
fewer terms in a Taylor series, use a coarser convergence
criterion and loop fewer times, and so on. There's obviously
no guarantee that they *will* do so, but the xxxf() functions
certainly open the door to the possibility.
 

Gustavo Rondina

Eric said:
The authors of "Numerical Recipes in C" write passionately
and bitterly on the topic of C's habit of up-converting everything to
double. They've got some of the details slightly askew and some of what
they say is now out-of-date (and their grasp of C isn't all that firm),
but they're worth a read anyhow. An older edition of the book is
available on-line; you might want to have a look. Here's one of the
spicier bits:

[quotation from NR book snipped]

Thanks, I'm checking it right now. The section about accuracy and
stability (1.3 in the 1992 edition) seems relevant to my interests.

Eric said:
Why would you expect such a thing? If double has more
precision than float, exp(1) will be unequal to expf(1) -- and that's
not a bug, but a Good Thing.

I meant when you cast the return value of exp(1) to a float. Should that
be the same as the return value of expf(1), or does it depend on
implementation details? Is the way casts from double to float are
performed described somewhere in the C99 standard?

Maybe I'm giving this issue more importance than I should, but I was
writing a program the other day that is supposed to support both single
and double precision, when I ran into the float variants of the math
functions and started to wonder whether there is any practical advantage
in using them over the C89 ones.

Anyhoo, thanks for the input, both of you (yourself and Tim Prince).


Cheers,
 

Eric Sosman

Gustavo said:
[...]
Why would you expect such a thing? If double has more
precision than float, exp(1) will be unequal to expf(1) -- and that's
not a bug, but a Good Thing.

I meant when you cast the return value of exp(1) to a float. Should that
be the same as the return value of expf(1), or does it depend on
implementation details? Is the way casts from double to float are
performed described somewhere in the C99 standard?

I'd expect (float)xxx(z) to be "close to" xxxf(z), but not
necessarily exactly equal. As I mentioned earlier, the two might
use different implementations -- fewer iterations for xxxf(z),
for example -- leading to discrepancies of a few ULP. Also, the
first form could have two roundings where the second has only one,
which could produce a 1-ULP difference all by itself.
 

Tim Prince

superpollo said:
mmm... i do not think the libraries ever use taylor expansion, but i
may be wrong.

Loosely speaking, any polynomial expansion resembles a Taylor series.
One would expect at least Chebyshev economization, so as to reduce the
number of terms required to minimize error in a specified interval.
 

Eric Sosman

Tim said:
Loosely speaking, any polynomial expansion resembles a Taylor series.
One would expect at least Chebyshev economization, so as to reduce the
number of terms required to minimize error in a specified interval.

Lots and lots of techniques are used for evaluating
mathematical functions. I used "Taylor series" as a loose
and possibly familiar stand-in for the whole bag of methods:
Taylor polynomials, Chebyshev polynomials, continued fraction
expansions, rational function approximations, and so on. I
did not intend to suggest that "Taylor series" was the literal
be-all and end-all of approximation methods, and apologize if
anyone got that impression.

The point is that whatever method is used, the xxxf()
implementation may use a coarser variant than the corresponding
xxx() does, because the xxxf() version does not need (cannot
deliver) as much precision. It's possible, I suppose, that the
two versions could even use entirely different algorithms with
different error characteristics.
 
