How to use long double floating point?


Bryan Parkoff

The guideline says to use %f in printf() for values of type
float and double.

For example

float a = 1.2345;
double b = 5.166666667;

printf("%.2f\n %f\n", a, b);

Now, I want to use long double rather than double to get more accurate
decimal numbers that are larger than normal numbers, such as numbers in
the billions. I assumed it would be %dd, but that is not correct. Can you
please tell me which conversion specifier supports long double?
Thanks...

Bryan Parkoff
 

Victor Bazarov

Bryan said:
The guideline says to use %f in printf() for values of type
float and double.

Use %f in the printf family of functions to output a value of type 'float'
or 'double'. The reason the same specifier works for both is that any
value of type 'float' is promoted to 'double' anyway.
For example

float a = 1.2345;
double b = 5.166666667;

printf("%.2f\n %f\n", a, b);

Now, I want to use long double rather than double to get more accurate
decimal numbers that are larger than normal numbers, such as numbers in
the billions. I assumed it would be %dd, but that is not correct. Can
you please tell me which conversion specifier supports long double?
Thanks...


The Standard specifies that you should use the 'L' length modifier (as in

%.2Lf

) to output a 'long double' value.

V
 

Bryan Parkoff

Victor Bazarov said:
Use %f in the printf family of functions to output a value of type 'float'
or 'double'. The reason the same specifier works for both is that any
value of type 'float' is promoted to 'double' anyway.



The Standard specifies that you should use the 'L' length modifier (as in

%.2Lf

) to output a 'long double' value.

V
Victor,

Thank you for providing the answer, but this prints only 6 digits
after the decimal point. It needs to be 10 to 20 digits for the best
accuracy. How does %Lf help? %Lf still gives only 6 digits. How do I
overcome the limit?

Bryan Parkoff
 

Mike Wahler

Bryan Parkoff said:
Victor,

Thank you for providing the answer, but this prints only 6 digits
after the decimal point. It needs to be 10 to 20 digits for the best
accuracy. How does %Lf help? %Lf still gives only 6 digits. How do I
overcome the limit?

printf("%.20Lf\n", b);

Where's your textbook?

-Mike
 

Jack Klein

The guideline says to use %f in printf() for values of type
float and double.

For example

float a = 1.2345;
double b = 5.166666667;

printf("%.2f\n %f\n", a, b);

Now, I want to use long double rather than double to get more accurate
decimal numbers that are larger than normal numbers, such as numbers in
the billions. I assumed it would be %dd, but that is not correct. Can you
please tell me which conversion specifier supports long double?
Thanks...

Bryan Parkoff

Victor and Mike have answered the questions you actually asked, but
note that if you are using Microsoft's compiler, long double does not
give you any gain over double.

Microsoft's 16-bit C and C++ compilers used to use the full 80-bit
extended precision format of the Intel FPU for long double, but when
they built their first 32-bit compiler for Windows NT, they decided to
use the 64-bit real format for long double as well as double.

The reason? "Under Windows NT, in order to be compatible with other
non-Intel floating point implementations...". Remember when Microsoft
was going to conquer every 32-bit processor in the world?

They decided to take away the programmer's choice to decide whether it
was more important for the application to have the maximum accuracy
the hardware could provide, or to have compatibility with all the MIPS
and ARM desktop machines running Windows NT.

A triumph of marketing over sound engineering, if ever there was one.

Read their own words on the subject:

http://support.microsoft.com/default.aspx?scid=kb;en-us;129209

By the way, there are other compilers for Windows that do not do this,
including Borland and gcc (but a real version of gcc, not Mingw).
 

Greg

Jack said:
Victor and Mike have answered the questions you actually asked, but
note that if you are using Microsoft's compiler, long double does not
give you any gain over double.

Microsoft's 16-bit C and C++ compilers used to use the full 80-bit
extended precision format of the Intel FPU for long double, but when
they built their first 32-bit compiler for Windows NT, they decided to
use the 64-bit real format for long double as well as double.

The reason? "Under Windows NT, in order to be compatible with other
non-Intel floating point implementations...". Remember when Microsoft
was going to conquer every 32-bit processor in the world?

They decided to take away the programmer's choice to decide whether it
was more important for the application to have the maximum accuracy
the hardware could provide, or to have compatibility with all the MIPS
and ARM desktop machines running Windows NT.

Hardly. Microsoft decided to offer the customer a choice of hardware
from competing vendors in order to run Windows NT. Microsoft
even-handedly made sure that their compiler adhered to industry
standard floating point format (IEEE) rather than show blatant
favoritism toward a dominant supplier by catering to its proprietary
chip.

Microsoft didn't "take away" anything, since the customer is not
obligated to use their compiler, and Intel's floating point chip is
still capable of 80-bit precision. Nor does either the C or C++
standard require any specific level of precision.

Incidentally, Windows NT never ran on an ARM processor. It did run and
continues to run on the PowerPC (to power the XBox 360).
A triumph of marketing over sound engineering, if ever there was one.

Consistent, verifiable floating point results across platforms are
better engineering than apparently more precise results reproducible on
a single platform - especially when the accuracy of those results is
open to doubt (as the Pentium division bug superbly demonstrated).
Read their own words on the subject:

http://support.microsoft.com/default.aspx?scid=kb;en-us;129209

By the way, there are other compilers for Windows that do not do this,
including Borland and gcc (but a real version of gcc, not Mingw).

Of course, hardcore Intel fans can always use Intel's C/C++ compiler -
which apparently costs the company nothing to write. Intel does give it
away for free. But anyone hoping for a free PowerPC compiler from Intel
is probably out of luck.

"Sound engineering" does not necessarily equate to doing right by the
customer. In this case, Microsoft did do the right thing.

Greg
 

Jack Klein

Of course this is off-topic here, but it happens to be a pet peeve of
mine. And since a substantial fraction of comp.lang.c++ uses
Microsoft's Visual C++ compiler, they may be affected by this issue.

Greg said:
Hardly. Microsoft decided to offer the customer a choice of hardware
from competing vendors in order to run Windows NT. Microsoft
even-handedly made sure that their compiler adhered to industry
standard floating point format (IEEE) rather than show blatant
favoritism toward a dominant supplier by catering to its proprietary
chip.

That's quite nice, but the IEEE defines an "IEEE 754 Double-Extended"
format that is just as much part of the standard as the 32-bit
"Single" and 64-bit "Double" floating point formats. Now it just so
happens that the IEEE 754 standard was based largely on Intel's work
in developing the x86 math coprocessor and later FPU, but that's
immaterial. The 80-bit "Double-Extended" format is fully a part of
the IEEE 754 standard, even though its implementation is optional.

As for compatibility, everyone expects the C/C++ float and double
types to use the IEEE 754 formats, and Microsoft had to do nothing at
all to keep their 32-bit compilers compatible with users'
expectations, as their 16-bit compilers had been.

Microsoft didn't "take away" anything, since the customer is not
obligated to use their compiler.

That is not true at all. I had C programs that delivered accurate
results in Microsoft C compilers from MS-DOS version 5.x through
Visual C++ 1.52, with or without floating point hardware. When
recompiled with Visual C++ 4.0, the first 32-bit version I used, they
delivered different and inaccurate results.

From my point of view, they certainly "took away" accuracy and
precision from the long double data type, features that had existed in
their 16-bit predecessors, and that were still right there in the
processor hardware.

...the customer is not obligated to use their compiler, and Intel's
floating point chip is still capable of 80-bit precision.

They have at most times in the past made every effort to ensure that
the maximum possible percentage of x86 Windows programmers used their
compiler. Or so it appears to me; you may disagree. Can you provide
a URL to a page on their web site mentioning other compilers as a
possibility to those who want the 80-bit format back?
Nor does either the C or C++ standard require any specific level of
precision.

I am well aware of the fact that neither language requires long double
to have any more precision and range than double. I am also well
aware of the fact that the IEEE "Double" format exceeds the minimum
requirements of C and C++ by a considerable margin.
Incidentally, Windows NT never ran on an ARM processor. It did run and
continues to run on the PowerPC (to power the XBox 360).

Is there any other non-x86 architecture running a version of Windows
other than CE these days, or anyone other than Microsoft running it on
a PowerPC?
Consistent, verifiable floating point results across platforms is
better engineering than apparently more precise results reproducible on
a single platform - especially when the accuracy of those results is
open to doubt (as the Pentium division bug superbly demonstrated).

This is utter nonsense. They deliberately deprived the programmer of
the ability to choose between the maximum accuracy permitted by the
hardware and maximum compatibility. Those who use the long double
type know exactly why they use it and what they expect from it.
Those who don't know what they are doing generally just use
float or double.
Of course, hardcore Intel fans can always use Intel's C/C++ compiler -
which apparently costs the company nothing to write. Intel does give it
away for free. But anyone hoping for a free PowerPC compiler from Intel
is probably out of luck.

How does this square with Microsoft's cramming support for MMX, SSE,
SSE2, etc., into Visual C++, when these features aren't available on
"all" those other architectures running NT and its descendants these
days?
"Sound engineering" does not necessarily equate to doing right by the
customer. In this case, Microsoft did do the right thing.

"Sound engineering" means leaving the engineering decisions up to the
engineer dealing with the requirements of the particular situation, or
application in this case.

"Sound engineering" does not mean denying the engineer/programmer the
opportunity to make a decision by deciding for him/her that accuracy
and precision were not as important as Microsoft's idea of
portability. The result is that Microsoft Visual C++ is literally
unsuitable for certain types of serious scientific and engineering
applications.

They could quite easily have documented that long double was not
portable across Windows implementations, whereas float and double
were.

There are numerous discussions available online about the harmful
effects of denying the use of the full precision available. For one
such, see http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
 

Walter Bright

Jack Klein said:
There are numerous discussions available online about the harmful
effects of denying the use of the full precision available. For one
such, see http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF

I've never understood why 80 bit long doubles are not universally supported
by compilers. Digital Mars C, C++ and D programming language compilers
certainly support it.

Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 

Jack Klein

I've never understood why 80 bit long doubles are not universally supported
by compilers. Digital Mars C, C++ and D programming language compilers
certainly support it.

I have never understood it, either.

As I said before, the reason for Microsoft's decision is stated on
their web page
http://support.microsoft.com/default.aspx?scid=kb;en-us;129209 where
they disclose a hack using assembly language to read back 80-bit long
double values from binary files written by programs using their
earlier 16-bit compilers.

This also applies to compilers like MinGW, which differs from a true
gcc port in that it provides no C library of its own, but uses the
Microsoft C library installed on the OS. Naturally it must limit its
long doubles to 64 bits as well, or it would break on calls to the
library.

Fortunately most other compilers do not shackle the programmer the
same way, and I commend your decision to do the right thing with the
Digital Mars tools.
 

Greg

Walter said:
I've never understood why 80 bit long doubles are not universally supported
by compilers. Digital Mars C, C++ and D programming language compilers
certainly support it.

Because 80-bit long doubles are not universally supported in hardware.
If the floating point registers of the target microprocessor are
64-bit, it doesn't make a lot of sense to support 80-bit long doubles
in software. Clearly, it would be much more efficient and much more
precise to combine two 64-bit floating point registers to create one
128-bit floating point value, if precision in excess of 64 bits is
desired. gcc 4.0 on the PowerPC follows exactly that approach.

Greg
 

Walter Bright

Greg said:
Because 80-bit long doubles are not universally supported in hardware.

That's the conventional reason. I don't buy it, though:

1) Intel forms the vast bulk of platforms
2) By that logic, we should ignore all advances in hardware, like 64 bits
3) 80-bit long doubles are a huge advantage for writing accurate floating
point code
4) Ensuring that code that produces wrong answers on 64-bit platforms will
fail as well on 80-bit platforms just doesn't seem like a winning strategy
5) Doing this meant that working 16-bit Intel algorithms broke on 32-bit
Intel processors; why does that not matter?
6) Today's hardware is fast enough that 80-bit doubles can be emulated in
software
7) Supporting 80-bit long doubles does not compromise code designed for
64-bit doubles in any way.

Hence, it should be the PROGRAMMER's decision whether to use long doubles or
not, not the compiler's.
If the floating point registers of the target microprocessor are
64-bit, it doesn't make a lot of sense to support 80-bit long doubles
in software. Clearly, it would be much more efficient and much more
precise to combine two 64-bit floating point registers to create one
128-bit floating point value, if precision in excess of 64 bits is
desired. gcc 4.0 on the PowerPC follows exactly that approach.

If that is done on those non-Intel CPUs, it sounds like a much more
sensible solution.
 
