std::norm defect?

highegg

hello,

the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.
To my disappointment, this is what current GCC does, citing the
standard as justification.
I think that the intent of std::norm was clearly to provide a
convenient way to calculate the squared magnitude efficiently, *without*
the need to calculate a square root (and then square it back). Hence, GCC
has turned a potentially useful function into a useless one.
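
To illustrate, here is a minimal sketch of the two variants in question
(names made up by me; assumes <complex> is included):

template <typename T>
T norm_direct (const std::complex<T>& x)   // what I expected std::norm to be
{
  return x.real () * x.real () + x.imag () * x.imag ();
}

template <typename T>
T norm_via_abs (const std::complex<T>& x)  // what GCC effectively does
{
  T a = std::abs (x);  // computes a square root...
  return a * a;        // ...only to square it again
}
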
What do you think? Did the author misinterpret the standard, or did he
take it too literally?
Is it GCC or the standard that should be fixed? Perhaps there could be
a clarification in the standard?

regards

Jaroslav Hajek
 
DerTopper

hello,

the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.
To my disappointment, this is what current GCC does, citing the
standard as justification.
I think that the intent of std::norm was clearly to provide a
convenient way to calculate the squared magnitude efficiently, *without*
the need to calculate a square root (and then square it back). Hence, GCC
has turned a potentially useful function into a useless one.

Well, any sapient being would probably agree with your statement, except
for the Vogons ;-)
What do you think? Did the author misinterpret the standard, or did he
take it too literally?
Is it GCC or the standard that should be fixed? Perhaps there could be
a clarification in the standard?

I think gcc has got it wrong and should be fixed. The standard merely
specifies what value std::norm should return; it does not say that
std::norm should be implemented by squaring the result of std::abs. I'm
a bit shocked that such a misconception could creep into gcc.

Regards,
Stuart
 
Alf P. Steinbach

* highegg:
hello,

the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.
To my disappointment, this is what current GCC does, citing the
standard as justification.
I think that the intent of std::norm was clearly to provide a
convenient way to calculate the squared magnitude efficiently, *without*
the need to calculate a square root (and then square it back). Hence, GCC
has turned a potentially useful function into a useless one.
What do you think? Did the author misinterpret the standard, or did he
take it too literally?
Is it GCC or the standard that should be fixed? Perhaps there could be
a clarification in the standard?

It seems to be just a Quality Of Implementation issue.


Cheers & hth.,

- Alf
 
Kai-Uwe Bux

highegg said:
hello,

the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.
To my disappointment, this is what current GCC does, citing the
standard as justification.

The actual reasoning (taken from the complex header):

// 26.2.7/5: norm(__z) returns the squared magnitude of __z.
//     As defined, norm() is -not- a norm in the common mathematical
//     sens used in numerics.  The helper class _Norm_helper<> tries to
//     distinguish between builtin floating point and the rest, so as
//     to deliver an answer as close as possible to the real value.

Then it goes on to use template magic and implement norm() as abs()^2 for
builtin types.

It appears that numerical accuracy is the reason behind this design
decision, not the wording in the standard you quoted.

I think that the intent of std::norm was clearly to provide a
convenient way to calculate the squared magnitude efficiently, *without*
the need to calculate a square root (and then square it back). Hence, GCC
has turned a potentially useful function into a useless one.
Huh?


What do you think? Did the author misinterpret the standard, or did he
take it too literally?

Neither. The standard is clearly correct in specifying the semantics of
std::norm(). Any implementation is _free_ to realize that semantics in any
way it deems fit. It just so happens that the library implementors for gcc
thought it would be better to define norm() in terms of abs(). They are
clearly _not_ forced by the standard to do so. I also think that they do
know that.

Is it GCC or the standard that should be fixed?

I wonder whether gcc actually needs fixing in this regard. Note that
std::abs() is _not_ implemented as sqrt( z * conj(z) ). That implementation
runs the risk of avoidable overflows. So, abs() does something smart.
Similarly, it is conceivable that std::norm is implemented as sqr(abs(z))
to avoid overflows that could be triggered by z * conj(z) or to achieve
better accuracy (this way, one could be smart in abs() and norm() would
benefit from that without further ado).
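
For instance, a classic scaling scheme for abs() looks something like
this (a sketch with a made-up name, not the actual library code; assumes
<cmath> and <algorithm>):

template <typename T>
T scaled_abs (T real, T imag)   // |real + i*imag| without intermediate overflow
{
  real = std::fabs (real);
  imag = std::fabs (imag);
  if (real < imag)
    std::swap (real, imag);     // now real >= imag
  if (real == T (0))
    return T (0);               // avoid 0/0 below
  T r = imag / real;            // r <= 1, so r*r cannot overflow
  return real * std::sqrt (T (1) + r * r);
}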

What other algorithm for norm() do you have in mind? Maybe it would be an
interesting exercise to do the numerical analysis for both of them and see
which one is better with respect to various metrics.

Perhaps there could be a clarification in the standard?

The standard is fine as is. This is a quality of implementation issue, which
the standard _should_ not address.


Best

Kai-Uwe Bux
 
SG

hello,

the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.

... and it should be equal to that. But this doesn't imply an
implementation of std::norm that calls std::abs and squares the result.
To my disappointment, this is what current GCC does, citing the
standard as justification.

Wow! -- Unexpected.
I think that the intent of std::norm was clearly to provide a
convenient way to calculate the squared magnitude efficiently, *without*
the need to calculate a square root (and then square it back).

Of course.
Hence, GCC
has turned a potentially useful function into a useless one.

It certainly seems that way.

Also, the name "norm" was a really bad idea because the squared
magnitude doesn't satisfy the triangle inequality. So, "std::norm" is
NOT a norm in the mathematical sense.

Cheers!
SG
 
highegg

The actual reasoning (taken from the complex header):

  // 26.2.7/5: norm(__z) returns the squared magnitude of __z.
  //     As defined, norm() is -not- a norm in the common mathematical
  //     sens used in numerics.  The helper class _Norm_helper<> tries to
  //     distinguish between builtin floating point and the rest, so as
  //     to deliver an answer as close as possible to the real value.

Then it goes on to use template magic and implement norm() as abs()^2 for
builtin types.

It appears that numerical accuracy is the reason behind this design
decision, not the wording in the standard you quoted.


Huh?

OK, let me correct that: "useless from a performance point of view". Can
you think of a real-life application of std::norm where you *need* it to
be exactly the square of std::abs?
Neither. The standard is clearly correct in specifying the semantics of
std::norm(). Any implementation is _free_ to realize that semantics in any
way it deems fit. It just so happens that the library implementors for gcc
thought it would be better to define norm() in terms of abs(). They are
clearly _not_ forced by the standard to do so. I also think that they do
know that.

Why do you think so? Why did they reference the standard at that point,
then?
I wonder whether gcc actually needs fixing in this regard. Note that
std::abs() is _not_ implemented as sqrt( z * conj(z) ). That implementation
runs the risk of avoidable overflows. So, abs() does something smart.
Similarly, it is conceivable that std::norm is implemented as sqr(abs(z))
to avoid overflows that could be triggered by z * conj(z) or to achieve
better accuracy (this way, one could be smart in abs() and norm() would
benefit from that without further ado).

Exactly what overflow issues do you avoid with this implementation of
std::norm? I don't see any.
If x.real () * x.real () + x.imag () * x.imag () overflows, then the
mathematical result just does not fit into the floating point range.
What other algorithm for norm() do you have in mind? Maybe it would be an
interesting exercise to do the numerical analysis for both of them and see
which one is better with respect to various metrics.

The simplest one, see above. I wonder what numerical defects you can
find.
 
highegg

... and it should be equal to that.  But this doesn't imply an
implementation of std::norm that calls std::abs and squares the result.

Actually, it sort of does, if you take it literally. The
straightforward calculation (and probably the one intended by the
standard), x.real () ** 2 + x.imag () ** 2 (apologies for the Fortran
exponentiation), may be a few units off from std::abs(x) ** 2.
 
Zeppe

highegg wrote [02/03/09 07:39]:
hello,

the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.

the result has to.
To my disappointment, this is what current GCC does, citing the
standard as justification.

I'm afraid the implementers of GCC are not *so* dumb. What it says is
// [...] The helper class _Norm_helper<> tries to
// distinguish between builtin floating point and the rest, so as
// to deliver an answer as close as possible to the real value.

the reason why, for built-in types, it is equal to abs(x) squared is
floating-point precision issues. Basically, I'd say that the
implementation with abs introduces a sequence point. This should
guarantee that abs(x)*abs(x) == norm(x) holds.

If you look at the actual implementation, additionally, you'll see that
the implementation that uses abs is used whenever:
__is_floating<_Tp>::__value & !_GLIBCXX_FAST_MATH
which means that you can manually force the optimised version if you
wish (I guess that a significant advantage can be achieved only in very
rare and extreme cases).
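
Schematically, the dispatch is a compile-time switch of roughly this
shape (a simplified sketch with made-up names, not the verbatim
libstdc++ code):

// norm() instantiates the helper on a compile-time flag, roughly
// norm_helper<is_floating && !fast_math>:
template <bool>
struct norm_helper                // generic case: direct sum of squares
{
  template <typename T>
  static T do_it (const std::complex<T>& z)
  {
    return z.real () * z.real () + z.imag () * z.imag ();
  }
};

template <>
struct norm_helper<true>          // builtin floating point, no fast-math:
{                                 // via abs(), then squared
  template <typename T>
  static T do_it (const std::complex<T>& z)
  {
    T a = std::abs (z);
    return a * a;
  }
};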

Best wishes,

Zeppe
 
Kai-Uwe Bux

highegg said:
OK, let me correct that: "useless from a performance point of view". Can
you think of a real-life application of std::norm where you *need* it to
be exactly the square of std::abs?

The point will be revealed as moot below, but conceptually it is up to the
implementation whether it goes for accuracy or performance. Many people
also are willing to sacrifice a few bits of accuracy in sin() for faster
execution. Others would like the implementation to be as precise as
possible. Optimally, the user would get to choose (e.g., by some policy
based design).

The point is moot since the tradeoff doesn't seem to apply in this case.

Why do you think so? Why did they reference the standard at that point,
then?

I gave the full quote above. To me, it does not seem that the author is
under the impression that norm() has to be implemented as the square of
abs(). The main concern seems to be numerical quality (although, see
below).

Besides, quoting the standard is not all that unusual. From the same header:

// 26.2.8/1 cos(__z): Returns the cosine of __z.


Exactly what overflow issues do you avoid with this implementation of
std::norm? I don't see any.
If x.real () * x.real () + x.imag () * x.imag () overflows, then the
mathematical result just does not fit into the floating point range.

Agreed, I wasn't thinking straight.

The simplest one, see above. I wonder what numerical defects you can
find.

I did an experiment, and now I agree that

x.real () * x.real () + x.imag () * x.imag ()

also seems to be numerically best (at least it is the most stable when
passing from double to long double). That leaves me somewhat puzzled as to
what might be the reason behind the line from the rationale:

... so as to deliver an answer as close as possible to the real value.

Because that goal does not seem to be achieved by the current
implementation.

I see you have filed a bug report. I hope the discussion there will
shed some light on the motivation for this choice.



Best

Kai-Uwe Bux
 
highegg

highegg wrote [02/03/09 07:39]:
the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.

the result has to.

Why? Please be specific. The standard does not say it explicitly.
To my disappointment, this is what current GCC does, citing the
standard as justification.

I'm afraid the implementers of GCC are not *so* dumb. What it says is
   //     [...] The helper class _Norm_helper<> tries to
   //     distinguish between builtin floating point and the rest, so as
   //     to deliver an answer as close as possible to the real value.

the reason why, for built-in types, it is equal to abs(x) squared is
floating-point precision issues. Basically, I'd say that the
implementation with abs introduces a sequence point. This should
guarantee that abs(x)*abs(x) == norm(x) holds.

But why is that needed?
If you look at the actual implementation, additionally, you'll see that
the implementation that uses abs is used whenever:
        __is_floating<_Tp>::__value & !_GLIBCXX_FAST_MATH

OK, I overlooked that one; thanks for pointing it out. Still, I think
gcc is pointlessly (is that an English word?) decreasing performance
if ffast-math is not on (it's not on even with -O3).
which means that you can manually force the optimised version if you
wish (I guess that a significant advantage can be achieved only in very
rare and extreme cases).

I'm afraid you're completely wrong with this guess. I brought it up
for a reason: I ran into it when optimizing sumsq in Octave 3.2. Working
at -O2, replacing std::norm with a hand-written implementation like the
one above speeds up the complex sumsq by a factor of 8.3, which is not
what I call negligible.
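
For context, the hot loop is essentially of this shape (a sketch only;
sumsq here is a stand-in for the actual Octave reduction):

template <typename T>
T sumsq (const std::complex<T>* x, std::size_t n)
{
  T acc = T (0);
  for (std::size_t i = 0; i < n; i++)
    // hand-written squared magnitude; with std::norm (x[i]) instead,
    // every iteration pays for a square root
    acc += x[i].real () * x[i].real () + x[i].imag () * x[i].imag ();
  return acc;
}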

thank you for your comments
 
highegg

The point will be revealed as moot below, but conceptually it is up to the
implementation whether it goes for accuracy or performance. Many people
also are willing to sacrifice a few bits of accuracy in sin() for faster
execution. Others would like the implementation to be as precise as
possible. Optimally, the user would get to choose (e.g., by some policy
based design).

The point is moot since the tradeoff doesn't seem to apply in this case.

I agree with this. I assure you I forced myself to think hard about what
numerical advantages it brings, but I just couldn't see any.

I gave the full quote above. To me, it does not seem that the author is
under the impression that norm() has to be implemented as the square of
abs(). The main concern seems to be numerical quality (although, see
below).

Besides, quoting the standard is not all that unusual. From the same header:

  // 26.2.8/1 cos(__z):  Returns the cosine of __z.

OK, quoting the standard may be just a habit. But why on earth did
they implement it that way?

Agreed, I wasn't thinking straight.



I did an experiment, and now I agree that

  x.real () * x.real () + x.imag () * x.imag ()

also seems to be numerically best (at least it is the most stable when
passing from double to long double). That leaves me somewhat puzzled as to
what might be the reason behind the line from the rationale:

   ... so as to deliver an answer as close as possible to the real value.

Because that goal does not seem to be achieved by the current
implementation.

I see you have filed a bug report. I hope the discussion there will
shed some light on the motivation for this choice.

Wow, what a fast discovery :)
It's the second one in a week (and I found a bug in Intel C++
yesterday, too). I see the Octave sources (which is where this comes from)
are becoming a nice testbed for compilers...

anyway, thanks for your comments

highegg
 
Zeppe

highegg wrote [02/03/09 10:48]:
highegg wrote [02/03/09 07:39]:
hello,
the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.
the result has to.

Why? Please be specific. The standard does not say it explicitly.

If, as you quoted, std::norm returns the squared magnitude of x, and
std::abs returns the magnitude of x, the square of the result of
std::abs is by definition the return value of std::norm.
To my disappointment, this is what current GCC does, citing the
standard as justification.
I'm afraid the implementers of GCC are not *so* dumb. What it says is
// [...] The helper class _Norm_helper<> tries to
// distinguish between builtin floating point and the rest, so as
// to deliver an answer as close as possible to the real value.

the reason why, for built-in types, it is equal to abs(x) squared is
floating-point precision issues. Basically, I'd say that the
implementation with abs introduces a sequence point. This should
guarantee that abs(x)*abs(x) == norm(x) holds.

But why is that needed?

it's, at least, desirable to have better consistency in numerical
results, for example to avoid differential errors due to internal
approximations. Some bugs introduced by internal precision
optimisations are difficult to spot. I'm not sure if it's needed or
not; I'm not familiar with the finer details of the IEEE floating-point
standard and of the C/C++ standards on this point. However, the fact
that Matlab does not even offer such an operator makes me wonder.
OK, I overlooked that one; thanks for pointing it out. Still, I think
gcc is pointlessly (is that an English word?) decreasing performance
if ffast-math is not on (it's not on even with -O3).

it's an English word :) As for the choice, it's a design one; there are
advantages and disadvantages linked to it. The ffast-math
documentation states:

"" This option is not turned on by any -O option since it can result in
incorrect output for programs which depend on an exact implementation of
IEEE or ISO rules/specifications for math functions. It may, however,
yield faster code for programs that do not require the guarantees of
these specifications. ""

I guess this is the case.

I'm afraid you're completely wrong with this guess. I brought it up
for a reason: I ran into it when optimizing sumsq in Octave 3.2. Working
at -O2, replacing std::norm with a hand-written implementation like the
one above speeds up the complex sumsq by a factor of 8.3, which is not
what I call negligible.

OK, the advantage in the operation itself is clear, but I was arguing
about how many real programs achieve a substantial boost from it. And in
how many I'd just expect norm(x)-abs(x)*abs(x) to return 0.

Best wishes,

Zeppe
 
SG

Kai-Uwe Bux said:
I did an experiment, and now I agree that

  x.real () * x.real () + x.imag () * x.imag ()

also seems to be numerically best (at least it is the most stable when
passing from double to long double).

... compared to the square of the square root of the same thing? Of
course it is. No need for experiments. When I need 'x' I don't write
sqrt(x)*sqrt(x).

--------8<--------
the reason why, for built-in types, it is equal to abs(x) squared is
floating-point precision issues.

Explain. -- Because it doesn't make any sense to me.
Basically, I'd say that the implementation with abs introduces a
sequence point. This should guarantee that abs(x)*abs(x) == norm(x)
holds.

How's that desirable? Who cares about that equality? The standard says
std::norm returns the squared *magnitude*. It doesn't say std::norm
returns the square of the result of std::abs. Since std::abs itself is
an approximation of the magnitude, why even bother to use its result?

The closest thing to the squared magnitude is the sum of real()
squared and imag() squared. Not only that, but it's also faster than
computing the magnitude first and then squaring it.


Cheers!
SG
 
Kai-Uwe Bux

SG said:
... compared to the square of the square root of the same thing?

No. Here is what I did:

template < typename T >
T norm_a ( T real, T imag ) {
  return ( sqr( abs( real, imag ) ) );
}

template < typename T >
T norm_b ( T real, T imag ) {
  if ( real < 0 ) { real = -real; }
  if ( imag < 0 ) { imag = -imag; }
  if ( imag < real ) {
    return ( real * real * ( 1.0 + sqr( imag/real ) ) );
  } else {
    return ( imag * imag * ( 1.0 + sqr( real/imag ) ) );
  }
}

template < typename T >
T norm_c ( T real, T imag ) {
  return ( sqr( real ) + sqr ( imag ) );
}

void print_diff ( double real, double imag ) {
  long double l_real = real;
  long double l_imag = imag;
  std::cout
    << norm_a( real, imag ) - norm_a( l_real, l_imag ) << '\n'
    << norm_b( real, imag ) - norm_b( l_real, l_imag ) << '\n'
    << norm_c( real, imag ) - norm_c( l_real, l_imag ) << "\n\n";
}

Then I used print_diff to figure out for which algorithm the result
value changes the least when increasing the internal precision of the
computation from double to long double.
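
A driver is then just a handful of print_diff calls over sample values
(this assumes the sqr and two-argument abs helpers from my code base,
plus <iostream>), e.g.:

int main () {
  print_diff ( 0.1 , 0.3 );
  print_diff ( 3.14, 2.72 );
  return ( 0 );
}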


Best

Kai-Uwe Bux
 
highegg

highegg wrote [02/03/09 10:48]:
highegg wrote [02/03/09 07:39]:
hello,
the C++ standard defines std::norm (const std::complex<T>& x) to
return "the squared magnitude of x".
Earlier in the same section, std::abs is defined as returning "the
magnitude of x".
This wording is a little unfortunate, as it may suggest that
std::norm(x) should be equal to std::abs(x) squared.
the result has to.
Why? Please be specific. The standard does not say it explicitly.

If, as you quoted, std::norm returns the squared magnitude of x, and
std::abs returns the magnitude of x, the square of the result of
std::abs is by definition the return value of std::norm.

That is exactly what I meant by taking the standard too literally.
std::abs does not, and cannot, return the magnitude of x, because in
most cases the magnitude of x is not representable (e.g. the magnitude
of 1+1i is sqrt(2), which has no exact floating-point representation).
You should interpret the standard here as "return an approximation
to..." because nothing else makes sense. With that interpretation, of
course, you lose your reasoning.
To my disappointment, this is what current GCC does, citing the
standard as justification.
I'm afraid the implementers of GCC are not *so* dumb. What it says is
   //     [...] The helper class _Norm_helper<> tries to
   //     distinguish between builtin floating point and the rest, so as
   //     to deliver an answer as close as possible to the real value.
the reason why, for built-in types, it is equal to abs(x) squared is
floating-point precision issues. Basically, I'd say that the
implementation with abs introduces a sequence point. This should
guarantee that abs(x)*abs(x) == norm(x) holds.
But why is that needed?

it's, at least, desirable to have better consistency in numerical
results, for example to avoid differential errors due to internal
approximations. Some bugs introduced by internal precision
optimisations are difficult to spot. I'm not sure if it's needed or
not; I'm not familiar with the finer details of the IEEE floating-point
standard and of the C/C++ standards on this point. However, the fact
that Matlab does not even offer such an operator makes me wonder.


OK, I overlooked that one; thanks for pointing it out. Still, I think
gcc is pointlessly (is that an English word?) decreasing performance
if ffast-math is not on (it's not on even with -O3).

it's an English word :) As for the choice, it's a design one; there are
advantages and disadvantages linked to it. The ffast-math
documentation states:

""  This option is not turned on by any -O option since it can result in
incorrect output for programs which depend on an exact implementation of
IEEE or ISO rules/specifications for math functions. It may, however,
yield faster code for programs that do not require the guarantees of
these specifications. ""

I guess this is the case.

I don't think so. The implementation that is turned on by ffast-math
is not violating anything.
But this is what this dispute is all about.
OK, the advantage in the operation itself is clear, but I was arguing
about how many real programs achieve a substantial boost from it. And in
how many

You can find numerous applications for sumsq. Nearest neighbour
identification, for instance. Not commonly done in complex spaces,
admittedly.
It's difficult to show real-life examples on demand, but I think most
Octave users would agree that a factor-of-8.3 speed-up for a core
reduction function is worth the trouble, even if they don't use it
every other day.
I'd just expect norm(x)-abs(x)*abs(x) to return 0.

You can only expect that if it is defined that way.
 
SG

No. Here is what I did:

template < typename T >
T norm_a ( T real, T imag ) {
  return ( sqr( abs( real, imag ) ) );
}

What does "abs" do (taking two parameters)?

Is it something like

template<typename T>
T abs(T real, T imag)
{
  using std::sqrt;
  return sqrt(sqr(real)+sqr(imag));
}

for some function "sqr" you seem to have defined somewhere?
template < typename T >
T norm_b ( T real, T imag ) {
  if ( real < 0 ) { real = -real; }
  if ( imag < 0 ) { imag = -imag; }
  if ( imag < real ) {
    return ( real * real * ( 1.0 + sqr( imag/real ) ) );
  } else {
    return ( imag * imag * ( 1.0 + sqr( real/imag ) ) );
  }
}

Why so complicated? It doesn't solve any overflow/underflow issues
nor can it be numerically more accurate than this one ...
template < typename T >
T norm_c ( T real, T imag ) {
  return ( sqr( real ) + sqr ( imag ) );
}

Finally! :)


Cheers!
SG
 
Kai-Uwe Bux

SG said:
What does "abs" do (taking two parameters)?

Is it something like

template<typename T>
T abs(T real, T imag)
{
  using std::sqrt;
  return sqrt(sqr(real)+sqr(imag));
}

No, it's not like that. But it does compute the abs() of a complex value
passed as two real numbers.
for some function "sqr" you seem to have defined somewhere?
Yes.


Why so complicated?

It is essentially sqr( abs() ) manually inlined.
It doesn't solve any overflow/underflow issues
nor can it be numerically more accurate than this one ...


Finally! :)

If you can somehow perceive the accuracy of algorithms with the naked
eye, more power to you. I need to convince myself one way or the other.
The algorithm norm_b was there _for comparison_. There would be no point
in comparing norm_c against nothing and then declaring it the winner.


Best

Kai-Uwe Bux
 
SG

No, it's not like that. But it does compute the abs() of a complex value
passed as two real numbers.

Ok. Looking at norm_b, which is supposed to be a combined abs+norm,
your abs function tries to prevent underflows/overflows.

Note that if the result of abs is squared, any underflow/overflow
prevention was pretty pointless.
It is essentially sqr( abs() ) manually inlined.


real * real * ( 1.0 + sqr( imag/real ) )

is -- in terms of accuracy -- not much different to

real * real + real * real * sqr( imag/real )

where

real * real * sqr( imag/real )

is a complicated way of writing

imag * imag

but it should be obvious that imag*imag is numerically more accurate.
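
To see the extra roundings side by side, a quick sketch (arbitrary
sample values; the exact outputs are platform-dependent, the point is
only that the two results need not be bit-identical):

#include <cstdio>

int main()
{
  double real = 0.1, imag = 0.3;
  double direct   = imag * imag;               // a single rounding
  double q        = imag / real;               // rounding 1
  double indirect = real * real * ( q * q );   // roundings 2, 3 and 4
  std::printf("%.17g\n%.17g\n", direct, indirect);
  return 0;
}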


Cheers!
SG
 
Kai-Uwe Bux

SG said:
Ok. Looking at norm_b, which is supposed to be a combined abs+norm,
your abs function tries to prevent underflows/overflows.

Note that if the result of abs is squared, any underflow/overflow
prevention was pretty pointless.



real * real * ( 1.0 + sqr( imag/real ) )

is -- in terms of accuracy -- not much different to

real * real + real * real * sqr( imag/real )

where

real * real * sqr( imag/real )

is a complicated way of writing

imag * imag

but it should be obvious that imag*imag is numerically more accurate.

If it's obvious to you that norm_c is numerically better than norm_b, fine.
It wasn't obvious to me. The experiment I described provides evidence for
the hypothesis. As I said, if you can do without the experiment, then
more power to you. But why do you keep arguing that it should have been
obvious to me? It wasn't, and that's it. It happens all the time.
Fortunately, I have learned to live with my shortcomings and to seek some
evidence when things are not clear to me.


Best

Kai-Uwe Bux
 
SG

If it's obvious to you that norm_c is numerically better than norm_b, fine.
It wasn't obvious to me.

That's why I tried to give reasons for that. I'm sorry if it sounded
condescending.
But why do you keep arguing that it should have been
obvious to me? It wasn't, and that's it.

That's not what I tried to communicate. I tried to present an
explanation for it. I guess I failed.

I merely said that the last part of it should be obvious. The last
part of the reasoning compared
real * real * sqr( imag/real )
to
imag * imag


Cheers!
SG
 

Top