Why are hexadecimal float literals so goofy?

Simon

I looked over the C99 spec for hexadecimal float constants. They seem quite
odd on two accounts. For one, the exponent is a power of two rather than a
power of sixteen, yet a hex exponent really wouldn't lose anything...you
would just have to shift your radix point a bit. It just seems confusing as
it is.

The other is even stranger. The binary exponent value is actually
represented in decimal. What is decimal doing here? Isn't it
satisfied with already being the only choice so far, even though it
introduces round-off errors and results in a less elegant compiler
implementation? Decimal is like a virus that decays elegance,
simplicity, and happiness.

Cheers
Simon

Ben Pfaff

Simon said:
I looked over the C99 spec for hexadecimal float constants. They seem quite
odd on two accounts. For one, the exponent is a power of two rather than a
power of sixteen, yet a hex exponent really wouldn't lose anything...you
would just have to shift your radix point a bit. It just seems confusing as
it is.

I have always assumed that hexadecimal constants were provided because the
underlying representation of floating-point numbers is assumed to
be in base 2, and one writes those in base 16 just to avoid
making the textual representation very long.

The other is even stranger. The binary exponent value is actually
represented in decimal. What is decimal doing here? Isn't it
satisfied with already being the only choice so far, even though it
introduces round-off errors and results in a less elegant compiler
implementation? Decimal is like a virus that decays elegance,
simplicity, and happiness.

Representing the exponent in decimal doesn't introduce any
round-off errors, and it's easier for humans to read and
understand than representing it in binary or hexadecimal.

Ian Collins

Ben Pfaff said:
I have always assumed that hexadecimal constants were provided because the
underlying representation of floating-point numbers is assumed to
be in base 2, and one writes those in base 16 just to avoid
making the textual representation very long.

That is the logical conclusion. Although I am curious as to whether
anyone actually uses hexadecimal float literals!

James Kuyper

Ben Pfaff said:
I have always assumed that hexadecimal constants were provided because the
underlying representation of floating-point numbers is assumed to
be in base 2, and one writes those in base 16 just to avoid
making the textual representation very long.

Yes, that is the main purpose, depending upon how you meant it.

A floating-point number with N bits of precision can be expressed exactly in
ceil(N/4.0) hexadecimal digits, but requires ceil(N*ln(2)/ln(10)) + 1
decimal digits to be expressed with sufficient precision to uniquely
determine what all N of those bits should be. That's only about 20% more
digits; it's not much of a motivation for using hexadecimal notation.

However, to express the number exactly would require N decimal digits -