Why are hexadecimal float literals so goofy?



Simon

I looked over the C99 spec for hexadecimal constants. They seem quite
odd on two accounts. For one, it is a binary exponent value, but the
other number is hexadecimal. Why not make them both hexadecimal? You
really don't lose anything...you just have to shift your radix point a
bit. This just seems confusing as it is.

The other is even stranger. The binary exponent value is actually
represented in decimal. What is decimal doing here? Isn't it
satisfied already being the only choice so far even though it
introduces round off errors and results in a less elegant compiler
implementation? Decimal is like a virus that decays elegance,
simplicity, and happiness.

Cheers
Simon
 


Ben Pfaff

Simon said:
I looked over the C99 spec for hexadecimal constants. They seem quite
odd on two accounts. For one, it is a binary exponent value, but the
other number is hexadecimal. Why not make them both hexadecimal? You
really don't lose anything...you just have to shift your radix point a
bit. This just seems confusing as it is.

I have always assumed that hexadecimal constants were written in
hexadecimal instead of binary just to save space. That is, the
underlying representation of floating-point numbers is assumed to
be in base 2, and one writes those in base 16 just to avoid
making the textual representation very long.
The other is even stranger. The binary exponent value is actually
represented in decimal. What is decimal doing here? Isn't it
satisfied already being the only choice so far even though it
introduces round off errors and results in a less elegant compiler
implementation? Decimal is like a virus that decays elegance,
simplicity, and happiness.

Representing the exponent in decimal doesn't introduce any
round-off errors, and it's easier for humans to read and
understand than representing it in binary or hexadecimal.
 

Ian Collins

I have always assumed that hexadecimal constants were written in
hexadecimal instead of binary just to save space. That is, the
underlying representation of floating-point numbers is assumed to
be in base 2, and one writes those in base 16 just to avoid
making the textual representation very long.

That is the logical conclusion. Although I am curious as to whether
anyone actually uses hexadecimal float literals!
 


James Kuyper

I have always assumed that hexadecimal constants were written in
hexadecimal instead of binary just to save space. That is, the
underlying representation of floating-point numbers is assumed to
be in base 2, and one writes those in base 16 just to avoid
making the textual representation very long.

Yes, that is the main purpose, depending upon how you meant it.

A floating-point number with N bits of precision can be expressed exactly in
ceil(N/4.0) hexadecimal digits, but requires ceil(N*ln(2)/ln(10)) + 1
decimal digits to be expressed with sufficient precision to uniquely
determine what all N of those bits should be. That's only about 20% more
digits; it's not much of a motivation for using hexadecimal notation.

However, to express the number exactly in decimal can require on the order
of N digits; that's the big advantage of hexadecimal floating-point constants.

This advantage only applies when FLT_RADIX is a power of 2, but that's
by far the most common case. The most common alternative to FLT_RADIX==2
is FLT_RADIX==16, which provides the same advantage. I've also heard of
platforms where FLT_RADIX of 3 or 10 would be appropriate, but I don't
know if any implementation of C was ever created for such platforms.
 
