Square root of a number.


Richard Heathfield

pete said:
square_root(10) freezes my computer.

Cool. :)

Incidentally, it doesn't freeze mine. It settles on 3.162277660168... in a
mere 57 iterations.

I'm curious - does it really freeze the whole box? I'm quite impressed. I'm
not aware of any undefined behaviour in the code I presented (which I only
wrote to demonstrate that the 1 to N technique could be made to work with 0
too), but it was just a scribble really.
 

pete

Richard said:
pete said:


Cool. :)

Incidentally, it doesn't freeze mine.
It settles on 3.162277660168... in a
mere 57 iterations.

I'm curious - does it really freeze the whole box?

No, but diff never gets any lower than 1.776357e-015
while DBL_EPSILON is 2.220446e-016.
 

pete

Richard said:
pete said:


Prove a point, is all.

square_root doesn't come anywhere near to
finding the square root of any number less than one,
except zero.

square_root is handling zero as a special case of sloppy epsilon.

printf("square_root(0.25) is %f.\n\n",
square_root(0.25));

When square_root tries to find the square root of 0.25,
diff settles in at 0.1875,
while the root variable holds at a value of 0.25.
 

Ben Bacarisse

Knowing that the square root of N is between N and one is useful for
calculating the square root of positive N, but it is not useful for
finding the square root of zero.

The square root of zero can't be found by searching between N and one,
in the same way that all the other roots can.

Surely not:

double sq_rt(double x)
{
    if (x > 0) {
        double hi = 1, lo = 1;
        if (x > 1)
            hi = x;
        else
            lo = x;
        while (hi - lo > FLT_EPSILON) {
            double mid = lo + (hi - lo)/2;
            if (mid * mid > x)
                hi = mid;
            else
                lo = mid;
        }
        return hi;
    }
    return x;
}

What matters is knowing if x < 1 or x > 1. Zero is not a special case.
 

pete

Ben said:
Surely not:

double sq_rt(double x)
{
    if (x > 0) {
        double hi = 1, lo = 1;
        if (x > 1)
            hi = x;
        else
            lo = x;
        while (hi - lo > FLT_EPSILON) {
            double mid = lo + (hi - lo)/2;
            if (mid * mid > x)
                hi = mid;
            else
                lo = mid;
        }
        return hi;
    }
    return x;
}

What matters is knowing if x < 1 or x > 1.
Zero is not a special case.

That works by virtue of sloppy epsilon, which is to say that
it doesn't really work at all for small numbers near zero
like DBL_MIN,
and the fact that it does work at zero is a discontinuity.

/* BEGIN sq_rt.c output */

DBL_MIN is 2.225074e-308.

sq_rt(DBL_MIN) is 1.192093e-007.

good_sqrt(DBL_MIN) is 1.491668e-154.

/* END sq_rt.c output */

I'm unfamiliar with any good square root algorithms
that make use of knowing whether
x is greater than or less than one.

/* BEGIN sq_rt.c */

#include <stdio.h>
#include <float.h>

double sq_rt(double x);
double good_sqrt(double x);

int main(void)
{
    puts("/* BEGIN sq_rt.c output */\n");
    printf("DBL_MIN is %e.\n\n", DBL_MIN);
    printf("sq_rt(DBL_MIN) is %e.\n\n", sq_rt(DBL_MIN));
    printf("good_sqrt(DBL_MIN) is %e.\n\n", good_sqrt(DBL_MIN));
    puts("/* END sq_rt.c output */");
    return 0;
}

/* Ben's bisection: searches between x and 1 until the interval
   is narrower than FLT_EPSILON. */
double sq_rt(double x)
{
    if (x > 0) {
        double hi = 1, lo = 1;
        if (x > 1)
            hi = x;
        else
            lo = x;
        while (hi - lo > FLT_EPSILON) {
            double mid = lo + (hi - lo)/2;
            if (mid * mid > x)
                hi = mid;
            else
                lo = mid;
        }
        return hi;
    }
    return x;
}

/* Newton's method ("divide and average"), iterating until the
   estimate stops decreasing. */
double good_sqrt(double x)
{
    if (x > 0) {
        const double a = x;
        double b = x / 2 + 0.5;

        do {
            x = b;
            b = (a / x + x) / 2;
        } while (x > b);
    }
    return x;
}

/* END sq_rt.c */
 

Richard Heathfield

pete said:
square_root doesn't come anywhere near to
finding the square root of any number less than one,
except zero.

Yes, but it handles 0 just fine. :)

I'll accept that my testing on /this/ problem wasn't up to much - but then I
don't have much spare processing power, because my machine is busy right
now conducting a different test - a program containing an infinite loop.
(It hasn't stopped yet, so I guess it's looking good.)
 

Ben Bacarisse

That works by virtue of sloppy epsilon, which is to say that it doesn't
really work at all for small numbers near zero like DBL_MIN,
and the fact that it does work at zero is a discontinuity.

I should not have posted code since that opened me up to this! Code is
not a good way to make a mathematical point -- I wanted to make the point
that zero is not an exception at all. Of course one can make it an
exception (as you do), but it is not logically an exceptional case for an
algorithm that looks for a square root of N between 1 and N (as your
haiku stated). It is, of course, a boundary case, and many algorithms
behave oddly or badly near it.

I would have thought (though as you can all see I am no numerical analyst)
that an "industrial" sqrt function would want to fiddle with the mantissa
and exponent directly, particularly for small positive numbers, because for
these you either have to be sloppy with epsilon (lovely phrase) as I was or
do a whole lot of floating point ops as you do.
 

Gregory Pietsch

Hi!

Could anyone tell me how to find the square root of a number without
using the sqrt function? I did it by using Newton's Formula. How can it
be done by using the Binomial Theorem/Taylor Series? Is there any other
way of doing it besides using series?

Thank you,

Priyam

Here's one method (a rough sketch in C follows the steps):

1) Classify the number. If it's a NaN, an infinity, zero, or negative,
treat it as a special case.
2) Split the number using frexp() into a fraction in the range [0.5,1.0)
(call it f) and an exponent (call it e).
3) Compute (-0.1984742 * f + 0.8804894) * f + 0.3176687 as a first
approximation to the square root of f.
4) If e is odd and positive, multiply the result of 3) by an
approximation of the square root of 2.
5) If e is odd and negative, divide the result of 3) by an
approximation of the square root of 2.
6) Scale the result of 3) as amended by 4) or 5) by e / 2 using
ldexp().
7) Use Newton's method -- divide and average -- as many times as
necessary to get more precision.
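
Putting those steps together, a rough C sketch might look like the one
below. The function name sqrt_estimate, the fixed four Newton iterations,
and the abridged special-case handling are illustrative choices, not part
of the description above; NaN and infinity are ignored.

#include <stdio.h>
#include <math.h>       /* frexp, ldexp */

/* Illustrative sketch of the steps above; not a production sqrt. */
static double sqrt_estimate(double x)
{
    static const double sqrt2 = 1.41421356237309504880;
    double f, y;
    int e, i;

    /* Step 1 (abridged): zero is returned as-is; NaN, infinity and
       negative inputs are not handled in this sketch. */
    if (x <= 0)
        return x;

    /* Step 2: x == f * 2^e with f in [0.5, 1.0). */
    f = frexp(x, &e);

    /* Step 3: quadratic first guess at the square root of f. */
    y = (-0.1984742 * f + 0.8804894) * f + 0.3176687;

    /* Steps 4 and 5: fold the odd part of the exponent into the guess. */
    if (e % 2 != 0)
        y = (e > 0) ? y * sqrt2 : y / sqrt2;

    /* Step 6: scale by 2^(e/2); C99 and later guarantee that integer
       division truncates toward zero, which matches steps 4 and 5. */
    y = ldexp(y, e / 2);

    /* Step 7: a few Newton ("divide and average") steps for precision. */
    for (i = 0; i < 4; i++)
        y = (y + x / y) / 2;

    return y;
}

int main(void)
{
    printf("sqrt_estimate(10.0) is %.12f.\n", sqrt_estimate(10.0));
    printf("sqrt_estimate(0.25) is %.12f.\n", sqrt_estimate(0.25));
    return 0;
}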

Gregory Pietsch
 

Chris Torek

I would have thought (though as you can all see I am no numerical analyst)
that an "industrial" sqrt function would want to fiddle with the mantissa
and exponent directly, particularly for small positive numbers, because for
these you either have to be sloppy with epsilon (lovely phrase) as I was or
do a whole lot of floating point ops as you do.

Indeed, taking the number apart (into 1.bbb...b for some series of
bits denoted by the "b"s, and then an exponent "e") is quite helpful.
If e is even, we need only calculate sqrt(1.bbb...b) and set a new
exponent e/2. If e is odd, we can either use (e-1)/2 or (e+1)/2
as the new exponent, and either double or halve the 1.bbb...b number
before taking the square root. In the first case, we need only
compute sqrt(x) for 1 <= x < 2, and in the second, we need only
compute sqrt(x) for either 0.5 <= x < 1 (using e+1) or 2 <= x < 4
(using e-1). These ranges for x tend to behave nicely, and even
if we use the lattermost range -- the (e-1)/2 method for odd
exponents -- the output from the sqrt(x) operation is already a
normalized number.

Of course, if we do this with an IEEE-style system, we have to
check for zero, infinity, and NaN first; and then we have to handle
denormals by normalizing them (using a wider-range exponent before
halving as above). (Note that sqrt(denorm) is always a normalized
number.)
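
For illustration only, that range reduction can be sketched with frexp()
and ldexp() rather than by fiddling with the bits directly. The name
range_reduced_sqrt, the crude starting guess, and the Newton loop on the
reduced range are my assumptions, not Chris's code; frexp() already
normalizes denormals, which covers that case.

#include <stdio.h>
#include <float.h>      /* DBL_MIN */
#include <math.h>       /* frexp, ldexp */

/* Sketch: rewrite positive finite x as m * 2^(2k) with m in [1, 4),
   so sqrt(x) = sqrt(m) * 2^k and sqrt(m) lies in [1, 2). */
static double range_reduced_sqrt(double x)
{
    double m, y;
    int e, i;

    if (x <= 0)
        return x;               /* zero; negatives/NaN not handled here */

    m = 2 * frexp(x, &e);       /* x == m * 2^(e-1), m in [1, 2) */
    e -= 1;
    if (e % 2 != 0) {           /* odd exponent: the (e-1)/2 route */
        m *= 2;                 /* m now in [2, 4) */
        e -= 1;                 /* e is now even */
    }

    /* sqrt(m) on [1, 4) by a few Newton steps from a crude guess. */
    y = (m + 1) / 2;
    for (i = 0; i < 6; i++)
        y = (y + m / y) / 2;

    return ldexp(y, e / 2);     /* reattach half of the (even) exponent */
}

int main(void)
{
    printf("range_reduced_sqrt(2.0) is %.12f.\n", range_reduced_sqrt(2.0));
    printf("range_reduced_sqrt(DBL_MIN) is %e.\n", range_reduced_sqrt(DBL_MIN));
    return 0;
}

With DBL_MIN this lands on about 1.491668e-154, the same value that
good_sqrt reported earlier in the thread.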
 

CBFalconer

Thank you!!!

For what, and to whom? Lacking context your message is pointless.
Read the following sig and the referenced URLs before posting
again.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
 
