Rounding double

R

Richard

James Kuyper said:
Are you asserting that the specification Jacob just described is
inherently impossible to implement? As far as I can see, the reasons
you've already explained do not apply to this specification.

I was under the impression that the only thing you could say against
this specification is that it doesn't match your own unconventionally
strict interpretation of the OP's specification.

"unconventionally strict". Nice. Euphemism prize of the year for
Heathfield's almost surreal hatred of everything Jacob proposes.
 
D

Dik T. Winter

> Then you divide by the power of ten, and round the result to the nearest
> ulp, then you convert the long double result into
> double and round THAT, to the nearest ulp and return that result.
>
> The idea is that all those roundings are done in HIGHER precision than
> what double offers, and will NOT affect the result.
>
> You have any objection against this supposition?

Yes. Because that supposition is wrong. It can be shown that first
rounding from extended precision to double precision and next to
single precision will not always give the same result as when a
single rounding from extended precision to single precision is
performed. I can show you explicit examples (and they are not so
difficult to find).
 
K

Kai-Uwe Bux

jacob said:
True. I lessen my claim saying that it is the best approximation
presented till now

:)

I don't know how to rewrite this so that it becomes C :-(

Anyway:

#include <stdio.h>
#include <float.h>
#include <math.h>

double roundto_navia (double value, unsigned digits)
{
    long double p = powl(10.0L, digits);
    /* Refuse values (or digit counts) beyond double's decimal precision. */
    if (fabsl((long double) value) > powl(10.0L, DBL_DIG) || digits > DBL_DIG)
        return value;
    return roundl(p * value) / p;
}


#include <sstream>
#include <iomanip>
#include <cmath>
#include <iostream>

template < typename RealType >
RealType roundto_kubux ( RealType value, unsigned digits ) {
    if ( value < 0 ) {
        return ( - roundto_kubux( -value, digits ) );
    }
    RealType integral_part;
    RealType fractional_part = std::modf( value, &integral_part );
    RealType p = std::pow( RealType(10), digits );
    RealType a = fractional_part * p;
    RealType b = std::floor( a );
    if ( a - b >= 0.5 ) {
        b = std::ceil( a );
    }
    return ( integral_part + b / p );
}

#include <limits>
#include <iostream>

int main ( void ) {
    double x = 0.1234567890123456789;
    double x_navia = roundto_navia( x, 16 );
    double x_kubux = roundto_kubux( x, 16 );
    std::cout << std::setprecision( std::numeric_limits<double>::digits10 + 3 )
              << x << '\n';
    std::cout << std::setprecision( std::numeric_limits<double>::digits10 + 3 )
              << x_navia << '\n';
    std::cout << std::setprecision( std::numeric_limits<double>::digits10 + 3 )
              << x_kubux << '\n';
}


yields on my machine:

news_group> cc++ -ffloat-store round_to_001.cc
news_group> a.out
0.123456789012345677
0.123456789012345677
0.123456789012345705

The value was chosen just to make it easy to find the n-th digit after the
decimal point. Of course, for other values, your method might be better,
and for most values, both methods probably yield identical answers.


Best

Kai-Uwe Bux
 
R

Richard

Dik T. Winter said:
Do you not understand what I complain about? It is your remark that
it gives the *best* approximation. This is a wrong claim. I think
that if you actually *want* to return the best approximation, it will
be a lot of work, and it is doubtful whether it will be useful at all.
So your *good* (not *best*) approximation may be optimal in the sense
of effort vs. result.

Could you describe which of the *given* solutions is "better" and why?
 
K

Kenny McCormack

Are you asserting that the specification Jacob just described is
inherently impossible to implement? As far as I can see, the reasons
you've already explained do not apply to this specification.

I was under the impression that the only thing you could say against
this specification is that it doesn't match your own unconventionally
strict interpretation of the OP's specification.

"unconventionally strict". Nice. Euphemism prize of the year for
Heathfield's almost surreal hatred of everything Jacob proposes.

Yes, very good! And keep in mind that Heathfield claims never to attack
anyone (and never to be attacked, which this post disproves, twice).
 
R

Richard Heathfield

jacob navia said:
I want to have the distance between point a and point b in meters,
and I have it in millimeters.

But what the OP wants - or rather, what he actually asked for - is to round
the value stored in a double, which is not the same thing at all.
Or
I want to know how many euro cents I have with US$56.87 using
1.4655444 as exchange rate, or WHATEVER problem it is.

You say that the problem is impossible to solve EXACTLY.

It is, and rounding requires an exact solution. 0.33, rounded to one
decimal place, is not 0.30000000000000001 or 0.2999999999999998, but 0.3
exactly. That's what rounding means.
I agree with that. My solution solves it inexactly, i.e. it
gives the (maybe) best approximation to the true value,
as usual with all floating point calculations.

I understand what you're saying. What I'm saying is that your inexact
solution is, by its very nature, inadequate to the task of rounding. That
in itself is not your fault, *because* no exact solution exists. If your
solution is good enough for the OP's requirements, great - but if so, then
the OP's requirements do not match his request.

Who cares about those people?

I do.
There are many freely available C99 compilers.

There are very few conforming C99 compilers. There are several that conform
in many respects to C99, but very few that conform fully.
You for instance, you use
an obsolete version of gcc, and want to stay that way forever, frozen in
some distant past.

I'm quite happy with it. It does all I require. If there comes a point
where GNU offers full C99 conformance, I am very likely to make the
switch. Until then, I don't see much point in switching from a conforming
compiler to a non-conforming compiler.
Your choice. I do not care about those people.

This is obvious. But I care about those people.
I have told you how to fix it

What I have isn't broken. It conforms. I don't plan to swap it for a
compiler that doesn't.
and if you do not want (or you have an employer that forbids you
to upgrade your compiler and persist using the older buggy versions)
that's YOUR problem, not mine. I use standard C.

No, you're using a compiler that implements *some* of the features of C99.
Insofar as your code relies on those features, it is not portable to a
compiler that supports a different subset of C99, in cases where the
features you use are not in that different subset. Furthermore, it may
also render your code non-portable to C++ (this thread is cross-posted to
comp.lang.c++, and we may reasonably presume that the OP has an interest
in C++ or he would not have cross-posted there).
Not some substandard to please Mr Heathfield.

Nor indeed any potential users of your code who can't use it because it
incorporates features that their implementation doesn't support.
Yes. You were lying by omission, and it took some time for everyone to
realize that.

I reject that accusation utterly.
Poor Mr Heathfield! You must have a terrible employer. I am thinking
of starting a petition to him to allow you upgrade your gcc installation
older than 1999

You need to work harder on your sarcasm. "New" doesn't necessarily imply
"good" or "useful" or "more cost-effective than what we already have".
Each site will make these decisions for itself. It is not my job or yours
to tell other people which compiler to use. That's the whole point of
having a standard - so that we can write code that doesn't depend on
features that we can't guarantee to be available on all implementations.
Yeah. You have missed all the bug fixes of gcc for the last 7 years.

I seem to recall that, only a week or two ago, code was posted here that
demonstrated a bug introduced into gcc in a version later than the one I
use here. So not only have I missed the bug fixes, but I've also missed
the bugs. Sounds fair to me.

Yes, stability is worth a lot.
 
R

Richard Heathfield

James Kuyper said:
Are you asserting that the specification Jacob just described is
inherently impossible to implement?

No, I'm only asserting that the original problem, as stated, is impossible
to solve.

<snip>
 
R

Ralf Damaschke

James said:
That should, of course, have been <math.h>. I must not have
been fully awake yet.

Yes, and the atof variable should not have external linkage.
Just above the text you quoted the standard says:
— All identifiers with external linkage in any of the following
subclauses (including the future library directions) are always
reserved for use as identifiers with external linkage.

Ralf
 
R

Ralf Damaschke

[About atof declaration put in math.h in lcc-win]
It depends on the cleverness of such an implementation.
If such an implementation does not also have the magic to forget
the extra declaration of atof when stdlib.h was not included,
it violates 7.1.3.
[...]
Use lcc -ansic.

Thanks, I won't. The point was that there actually is something
"written about not putting it in math.h".

Ralf
 
J

James Kuyper

Richard said:
James Kuyper said:

<snip>
[Reinstating relevant snipped text]
No, I'm only asserting that the original problem, as stated, is impossible
to solve.

The original problem as stated was, IMO, probably not intended to be
read with the unconventionally strict interpretation you're using. I
believe that Jacob is correctly describing the problem as clearly
expressed by the OP.

Your interpretation of that request is quite different from the one that
Jacob was providing, and when he challenged anyone to suggest a better
function, he clearly was asking for a better solution to his specified
problem, not for a better solution to your overly strict interpretation
of the OP's request.

Therefore, your comment that you "don't see any point in trying to solve
a problem that was inherently impossible" was irrelevant in context. You
could have wasted time explaining yet again that you don't consider his
specification to be a correct interpretation of the OP's request, but
that's not what you chose to do.

Someone who was expecting relevance would reasonably but incorrectly
interpret your comment as asserting that the problem as Jacob specified
it is impossible to solve. If you insist on throwing in irrelevant
comments, you should be more careful about choosing your words to
prevent them from being misunderstood as if they were relevant.
 
W

Walter Roberson

The original problem as stated was, IMO, probably not intended to be
read with the unconventionally strict interpretation you're using. I
believe that Jacob is correctly describing the problem as clearly
expressed by the OP.

Hard to say, since the OP has not responded to requests for
clarifications. But for whatever it's worth, in my opinion,
Richard's interpretation is more likely to be the correct one.
I base this partly on the two responses that the OP did make,
which did not acknowledge the impossibility of exact rounding
and instead appeared to repeat the request for exact rounding.
 
W

Walter Roberson

I disagree.
The test fv > powl(10.0L, DBL_DIG) ensures that the absolute value
of "value" is less than 10 ^ 16. If "digits" is 15, the maximum
value of the multiplication can be 10 ^ 15 * 10 ^ 15 == 10 ^ 30,
a LOT less than the maximum value of double precision that is
DBL_MAX 1.7976931348623157e+308. Note that the calculations are done
in long double precision and LDBL_MAX 1.18973149535723176505e+4932L
so the overflow argument is even less valid in long double precision.
Other systems' LDBL_MAX values are even higher, since they use 128 bits
rather than the 80 bits of the 80x86. For systems where long double is
equal to double precision, the value is well within range anyway.

[Crossposting to comp.lang.c++ removed as the below deals with
C standards that might differ in C++]

Appendix F of the C99 standard

(Note that compliance with this appendix is optional; when compliance
is present, then __STDC_IEC_559__ will be defined)

F.2 Types
-- the long double type matches an IEC 60559 extended format,
else a non-IEC 60559 extended format, else the IEC 60559 double
format.

Any non-IEC 60559 extended format used for the long double type
shall have more precision than IEC 60559 double and at least
the range of IEC 60559 double.

Therefore, even with strict Appendix F __STDC_IEC_559__ compliance,
LDBL_MAX need not have a range higher than 1.7976931348623157e+308
as long as it has more precision than DBL_MAX.

This does not change your argument about powl(10.0L, DBL_DIG)
being within DBL_MAX and LDBL_MAX, but it does significantly change
the constant you quoted for LDBL_MAX 1.18973149535723176505e+4932L
 
U

user923005

Hi

Does any body know, how to round a double value with a specific number
of digits after the decimal points?

A function like this:

RoundMyDouble (double &value, short numberOfPrecisions)

It then updates the value with numberOfPrecisions after the decimal
point.

This sort of works most of the time:

#include <math.h>
double fround(const double nn, const unsigned d)
{
    const long double n = nn;
    const long double ld = (long double) d;
    return (double) (floorl(n * powl(10.0L, ld) + 0.5L) / powl(10.0L, ld));
}
#ifdef UNIT_TEST
#include <stdio.h>
#include <stdlib.h>
#include <float.h>
int main(void)
{
    double pi = 3.1415926535897932384626433832795;
    unsigned digits;
    for (digits = 0; digits <= DBL_DIG; digits++) {
        printf("Rounding by printf gives: %20.*f\n",
               digits, pi);
        printf("Rounding approximation by function gives: %20.*f\n",
               DBL_DIG, fround(pi, digits));
    }
    return 0;
}
#endif
/*
C:\tmp>cl /W4 /Ox /DUNIT_TEST round.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762
for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.

round.c
Microsoft (R) Incremental Linker Version 8.00.50727.762
Copyright (C) Microsoft Corporation. All rights reserved.

/out:round.exe
round.obj

C:\tmp>round
Rounding by printf gives: 3
Rounding approximation by function gives: 3.000000000000000
Rounding by printf gives: 3.1
Rounding approximation by function gives: 3.100000000000000
Rounding by printf gives: 3.14
Rounding approximation by function gives: 3.140000000000000
Rounding by printf gives: 3.142
Rounding approximation by function gives: 3.142000000000000
Rounding by printf gives: 3.1416
Rounding approximation by function gives: 3.141600000000000
Rounding by printf gives: 3.14159
Rounding approximation by function gives: 3.141590000000000
Rounding by printf gives: 3.141593
Rounding approximation by function gives: 3.141593000000000
Rounding by printf gives: 3.1415927
Rounding approximation by function gives: 3.141592700000000
Rounding by printf gives: 3.14159265
Rounding approximation by function gives: 3.141592650000000
Rounding by printf gives: 3.141592654
Rounding approximation by function gives: 3.141592654000000
Rounding by printf gives: 3.1415926536
Rounding approximation by function gives: 3.141592653600000
Rounding by printf gives: 3.14159265359
Rounding approximation by function gives: 3.141592653590000
Rounding by printf gives: 3.141592653590
Rounding approximation by function gives: 3.141592653590000
Rounding by printf gives: 3.1415926535898
Rounding approximation by function gives: 3.141592653589800
Rounding by printf gives: 3.14159265358979
Rounding approximation by function gives: 3.141592653589790
Rounding by printf gives: 3.141592653589793
Rounding approximation by function gives: 3.141592653589793
*/
 
K

Kaz Kylheku

It is also in stdlib. It is in BOTH, and nothing
is written about not putting it in math.h

Does this text mean anything to you?

``Each identifier with file scope listed in any of the following
subclauses (including the future library directions) is reserved for
use as macro and as an identifier with file scope in the same name
space if any of its associated headers is included.''

The atof name is not reserved for use as a macro if only <math.h> is
included, because <math.h> is not associated with atof. So this is the
start of a valid translation unit:

#define atof (
#include <math.h>

Oops!
 
K

Kaz Kylheku

So, if I write the following strictly conforming code:

#include <stdlib.h>
int atof = 3;

How is this strictly conforming? Because atof is the name of a
standard library function, it's reserved as an identifier with
external linkage (regardless of what header is included).
 
J

James Kuyper

Kaz said:
How is this strictly conforming? Because atof is the name of a
standard library function, it's reserved as an identifier with
external linkage (regardless of what header is included).

I was apparently less than fully awake when I wrote that. It should have
said:

#include <math.h>
static int atof=3;

Ralf Damaschke pointed out the same problem more than 5 hours ago. His
own response contained neither mistake.

What makes this mistake even more annoying is that I actually thought
about the external linkage issue, and at least at one point my draft
response contained an identifier with internal linkage. However, I
accidentally dropped that feature during a re-write.
 
J

jacob navia

James said:
I was apparently less than fully awake when I wrote that. It should have
said:

#include <math.h>
static int atof=3;

Ralf Damaschke pointed out the same problem more than 5 hours ago. His
own response contained neither mistake.

What makes this mistake even more annoying is that I actually thought
about the external linkage issue, and at least at one point my draft
response contained an identifier with internal linkage. However, I
accidentally dropped that feature during a re-write.

I still do not understand why all this.

I repeated and repeated that if you invoke lcc-win with

lcc -ansic

(fully conforming mode)

atof is NOT included in math.h.

I repeated this several times but nobody takes notice and people
go on.

Then please:

AGAIN

If you invoke lcc-win in fully conforming mode with

lcc -ansic

atof should not be declared in math.h.
 
J

James Kuyper

Harald said:
I agree with what you said, but I have a question about your fragment.
You say it is the start of a valid translation unit. Would it also be a
valid translation unit by itself? In other words, are <math.h>'s contents
required to be implemented (in part) as external-definitions, or might
it, for example, consist entirely of #pragma directives?

Since the consequences of a #pragma directive that doesn't start with
STDC are implementation-defined, just about anything could, in principle,
be achieved by use of one. However, for a conforming implementation of C,
adding

#include <math.h>

at file scope must behave as if it sets up declarations for all of the
math.h functions, definitions for some typedefs, and #defines of the
math.h macros.

A translation unit must contain at least one external declaration; the
declarations which math.h must set up cause this translation unit to
meet that requirement.
 
H

Harald van Dijk

The atof name is not reserved for use as a macro if only <math.h> is
included, because <math.h> is not associated with atof. So this is the
start of a valid translation unit:

#define atof (
#include <math.h>

Oops!

I agree with what you said, but I have a question about your fragment.
You say it is the start of a valid translation unit. Would it also be a
valid translation unit by itself? In other words, are <math.h>'s contents
required to be implemented (in part) as external-definitions, or might
it, for example, consist entirely of #pragma directives?

(Cross-post to comp.lang.c++ dropped; I'm asking specifically about C
here.)
 
J

James Kuyper

jacob said:
....

I still do not understand why all this.

I've merely been explaining my mistake, not re-expressing it.
I repeated and repeated that if you invoke lcc-win with

lcc -ansic

> (fully conforming mode)

atof is NOT included in math.h.


All of our negative comments about your movement of atof() to math.h
apply only to the mode in which your comment indicating that you had so
moved it was true. If this is not true about 'lcc -ansic', then our
comments have not been about that mode. Our comments remain true and
relevant about whichever mode(s) of lcc your comment was true of. In
that mode, lcc is NOT ISO C compatible.

If you didn't intend for that mode to be ISO C compatible, why would it
bother you to have someone point out a new way in which it fails to be
ISO C compatible? If you did intend that mode to be ISO C compatible,
you should fix it. Either way, I don't see any just grounds for
complaint about that comment.

Also, if the mode in which math.h does declare atof() is NOT intended to
be ISO C compatible, what did you mean when you said that "nothing is
written about not putting it in math.h"? What document other than ISO C
were you referring to?
 
