Initializing constants


newsposter0123

The code block below initializes a read/write variable (usually in .bss) to
the value of pi. One problem, of many, is that any linked compilation unit
may change the global variable. Adjusting

// rodata
const long double const_pi=0.0;

lines to

// rodata
const long double const_pi=init_ldbl_pi();

would add additional protection, but it is not legal C and, rightly so,
fails on GCC/i386.

Do any C standards define a means to initialize constants to values
obtained from hardware, or do the total number of constants and/or
cross-compiling prohibit it completely (although, when cross-compiling,
the compiler could create a value using the resources available, i.e.
emulation)?

I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80-bit value of the fldpi
instruction, and when compiling on non-i386 systems for i386 targets, an
emulated value is calculated from available resources.

BEGIN CODE
#include <stdio.h>

// prototype
long double init_ldbl_pi(void);

// rodata
const long double const_pi = 0.0;

// global function
long double init_ldbl_pi(void) {
    long double ldbl = 3.14;

#if (defined __GNUC__) && (defined __i386__)
    // load pi onto the x87 stack, then pop it out as an 80-bit value;
    // one asm statement so the compiler cannot touch the stack in between
    asm("fldpi\n\t"
        "fstpt %0" : "=m" (ldbl));
#endif
    return ldbl;
}

// main function
int main(void) {
    long double ldbl_pi = init_ldbl_pi();
    printf("%Le\n", ldbl_pi);
    return 0;
}
END CODE
 

Ben Pfaff

newsposter0123 said:
I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80-bit value of the fldpi
instruction, and when compiling on non-i386 systems for i386 targets, an
emulated value is calculated from available resources.

What's the value in getting pi from an i386 instruction? I would
suggest just writing out enough digits of pi to cover the desired
level of significance.
 

newsposter0123

Ben said:
What's the value in getting pi from an i386 instruction? I would
suggest just writing out enough digits of pi to cover the desired
level of significance.

Hardware generally provides constants other than pi.

For one, when converting numbers: ideally you would like to get the
original number back, especially when changing bases. But that debate
rages elsewhere.

Provided the precision of the measurement device was known in advance,
the number of significant digits could be predetermined. If a
device with greater precision were used, then the constant would
require adjustment (and no longer be a constant).

In general, constants imply unrestricted usage, not limited to a
specific application, or limiting the number of significant digits in a
calculation. Otherwise, why spend $$$ on precision equipment?
 

Ben Pfaff

newsposter0123 said:
Hardware generally provides constants other than pi.

For one, when converting numbers. Ideally you would like to get the
original number back. Especially when adjusting bases. But that debate
rages elsewhere.

Provided the precision of the measurement device was known in advance,
the number of significant digits could be predetermined. If a
device with greater precision were used, then the constant would
require adjustment (and no longer be a constant).

But the i386 floating-point architecture also has a fixed
precision. When you obtain your device with greater precision,
you will have to change the code anyhow. So I don't see the
benefit.
 

newsposter0123

Ben said:
But the i386 floating-point architecture also has a fixed
precision.
Yep. Some hardware provides a 128-bit long double.
When you obtain your device with greater precision,
you will have to change the code anyhow.

Depending on how the application was implemented, maybe. If the
calculations could be completed using the long double type, then no.
But the library/API providing and/or using the constant would not
require adjustment.
So I don't see the
benefit.
Much easier for the programmer to implement.

BTW, this is off topic, but if a measurement device (micrometer) is
graduated in thousandths of an inch and has a range of 0-2 inches, would
the measurement .053 contain 3 or 4 significant digits and be written as
5.300e-2 or 5.30e-2? I'm pretty sure the measurement 1.053 would have
4.
 

Ben Pfaff

newsposter0123 said:
Yep. Some hardware provides a 128-bit long double.


Depending on how the application was implemented, maybe. If the
calculations could be completed using the long double type, then no.
But the library/ api providing and/or using the constant would not
require adjustment.

You said, in the article that I originally replied to, that you
wanted to use an i386-specific instruction to obtain the value of
pi. You can't use that to obtain more than 80 bits of precision,
and you can't use it except on an i386 machine. Now you're
telling me that this will allow you to obtain greater precision
on a new device without adjusting the library or API.
Much easier for the programmer to implement.

I don't see how it's easier to use an i386-specific instruction
to obtain 80 bits of pi than to write a floating-point constant
that contains 80 bits of pi, and I certainly don't see how it
makes the code more extensible to better precision.
BTW, this is off topic, but if a measurement device (micrometer) is
graduated in thousandths of an inch and has a range of 0-2 inches, would
the measurement .053 contain 3 or 4 significant digits and be written as
5.300e-2 or 5.30e-2? I'm pretty sure the measurement 1.053 would have
4.

Sounds like a trick question to me. I would think that .053 has
2 significant digits.
 

newsposter0123

Ben said:
You said, in the article that I originally replied to, that you
wanted to use an i386-specific instruction to obtain the value of
pi.

Appreciate your reply.

The specific example I used was for gcc running on i386, creating i386
targets. Generally, I was interested in how the C standards would view
such an method for initializing readonly (.rodata) constants.
You can't use that to obtain more than 80 bits of precision,
and you can't use it except on an i386 machine.

Exactly.

Now you're
telling me that this will allow you to obtain greater precision
on a new device without adjusting the library or API.

I don't have to adjust the constant pi in my calculator every time I
evaluate a new equation. I certainly would not want applications to
adjust the value of a library exported "constant" every time it was
used, which, if it were a read only constant would be illegal C.
I don't see how it's easier to use an i386-specific instruction
to obtain 80 bits of pi than to write a floating-point constant
that contains 80 bits of pi, and I certainly don't see how it
makes the code more extensible to better precision.

I think it would be easier to write LDBL_PI than to write "3.14...." to
25 or 30 digits of accuracy (or whatever it takes to get the most
accurate value of the constant for the arch-dependent implementation of
long double). Plus, assuming the compiler uses the maximum precision
available for the long double type at compile time (either from
hardware or emulation), I doubt it could calculate (due to chopping,
rounding, etc.), based on a "3.14...." string, a value of pi more
accurate than a hardware value of pi. This would probably be true for
any other hardware-supplied constant.
Sounds like a trick question to me. I would think that .053 has
2 significant digits.

Hmmm. I guess the tenths 0 must be used to position the decimal point,
and is, therefore, not significant.
 

Skarmander

newsposter0123 said:
The code block below initializes a read/write variable (usually in .bss) to
the value of pi. One problem, of many, is that any linked compilation unit
may change the global variable.

Well, don't do that then.

Lame answer? Sure. You can't have your cake and eat it too, though -- being
accessible from anywhere is why global variables are used and shouldn't be.
Adjusting

// rodata
const long double const_pi=0.0;

lines to

// rodata
const long double const_pi=init_ldbl_pi();

would add additional protection, but it is not legal C and, rightly so,
fails on GCC/i386.
It's more correct to say that it's not strictly conforming C. It's legal for
an implementation to compile this, however, since an implementation may
allow additional forms of constant expressions.

Obviously, most won't, certainly not arbitrary functions.
Do any C standards define a means to initialize constants to values
obtained from hardware

What do you mean by "C standards"? If the latest standard doesn't have it,
the earlier ones probably won't, either. If you just mean "any standard
written that involves the C language", a la POSIX, that's a different story.

Obviously, "values obtained from hardware" would have a hard time getting
standardized by anything.
or does the total number of constants and/or cross-compiling prohibit it
completely (although, when cross-compiling, the compiler could create a
value using the resources available i.e. emulation)?
No idea what you're getting at, here.
I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80-bit value of the fldpi
instruction, and when compiling on non-i386 systems for i386 targets, an
emulated value is calculated from available resources.
You don't really want a constant expression (which has a very specific
meaning in C), but a read-only value. C does not directly implement such
semantics. Many platforms will allow you to implement this one way or the
other (with linker directives or virtual memory protection) but otherwise
nothing but good discipline will help.

If you *do* really want a compile-time constant expression based on some
arbitrary platform-specific calculation, you're basically asking your
compiler for magic. It's not wise to expect that.

If you really, really want this, use code generation and put the constant in
a separate header. Of course, platform-specific code generation has its own
problems -- the main one here being that you have to implement and run a
separate utility just to get the actual program to compile. This basically
means you're extending the implementation itself to give you what you want,
a powerful but difficult-to-maintain and easily abused technique.

S.
 

newsposter0123

Skarmander said:
Well, don't do that then.

I am trying hard to avoid it.
It's more correct to say that it's not strictly conforming C. It's legal
for an implementation to compile this, however, since an implementation
may allow additional forms of constant expressions.

Yes, I'm assuming that the implementation places all constants in a
separate section (probably ELF/COFF-specific here), protected from
writes at runtime, so they could not be adjusted during initialization
at runtime.
Obviously, most won't, certainly not arbitrary functions.
Maybe using "hooks?"
What do you mean by "C standards"?
Who knows? They all have lots of sections, and numerous
interpretations.
If the latest standard doesn't have
it, the earlier ones probably won't, either.
Allowed usage may be implemented at any time I guess.
If you just mean "any
standard written that involves the C language", a la POSIX, that's a
different story. Correct.

Obviously, "values obtained from hardware" would have a hard time
getting standardized by anything.
Yes, more competent persons would have to write the exact wording,
assuming the standard does not currently allow it.
No idea what you're getting at, here.

When cross-compiling, the hardware running the compiler and the compiler
that compiled the compiler (is that straight?) may use a different
implementation for long double and/or the constant. E.g., Microsoft just
uses double for long double and therefore may not be able to create a
"most accurate" constant.
You don't really want a constant expression (which has a very specific
meaning in C), but a read-only value. Correct.

C does not directly implement such
semantics.
So it would have to be "added". An arduous process at best.
Many platforms will allow you to implement this one way or
the other (with linker directives or virtual memory protection) but
otherwise nothing but good discipline will help.
Right, but nothing portable, unfortunately.
If you *do* really want a compile-time constant expression based on some
arbitrary platform-specific calculation, you're basically asking your
compiler for magic. It's not wise to expect that.
I'm just happy with their current magic. But, bells and whistles, no
matter how minute, can make a big difference.
If you really, really want this, use code generation and put the
constant in a separate header. Of course, platform-specific code
generation has its own problems -- the main one here being that you have
to implement and run a separate utility just to get the actual program
to compile. This basically means you're extending the implementation
itself to give you what you want, a powerful but difficult-to-maintain
and easily abused technique.

So far, for specific platforms, I'm initializing an array of
sizeof(long double) bytes with the byte codes obtained from the
hardware instruction. This makes way too many assumptions to be
implemented in a portable way. Just with GCC, the -m96bit-long-double
and -m128bit-long-double options complicate things for i386 (although
the extra zeros in the 128-bit value shouldn't interfere with the 96-bit
value).
 

Skarmander

newsposter0123 said:
I am trying hard to avoid it.


Yes, I'm assuming that the implementation places all constants in a
separate section (Probably ELF/COFF specific here) protected from
writes at runtime that could not be adjusted up during initialization
at runtime.
That's not enough, I'm afraid. The implementation would also have to fix up
any evaluations involving the not-so-constant expression as if they took
place at compile time. "A constant expression can be evaluated during
translation rather than runtime, and accordingly may be used in any place
that a constant may be." This is what "constant" actually means in C: an
expression that can be evaluated at translation time.

This is why it's impossible for a compiler to allow
const long x = foo();
with foo() an arbitrary function, because to solve this in general, the
compiler would have to be capable of deferring translation of the entire
program! A C *interpreter* could do it, but that's probably not what you're
looking for.
Maybe using "hooks?"
Not for what C calls a constant expression, per the above. But you're
talking about a read-only expression, which could conceivably be done.
Who knows? They all have lots of sections, and numerous
interpretations.
The intent is for the number of implementations to be unlimited, but the
number of interpretations to be quite limited. Preferably to one. The draft
copy of the C99 standard I have does a pretty good job.
Allowed usage may be implemented at any time I guess.
C has very few "optional" parts, and those are mostly restricted to being
"implementation-defined" (so the actual behavior needs to be documented) or
they don't actually guarantee something useful will happen (because a
platform may not support it at all). Either is useless for your purpose.

In any case, what is contained in the standard and what is provided by
implementations are conceptually different things. If the standard doesn't
have it, it can't be done portably; if the standard has it, it may be done
portably. In this case the standard doesn't have it.
Yes, more competent persons would have to write the exact wording,
assuming the standard does not currently allow it.
What I meant was that not all the competence in the world could condense
this into something usable. The concept is much too broad.
When cross compiling the hardware running the compiler and the compiler
that compiled the compiler (is that straight?) may use a different
implementation for long double and/or the constant. Ex Microsoft just
uses double as long doubles and therefore may not be able to create a
"most accurate" constant.
I now have some idea what you're getting at, but it just illustrates why
what you want is impractical.

Implementations do support implementation-specific constant expressions,
namely exactly those required by the standard. For example, INT_MAX (from
<limits.h>) must be the maximum value that can be stored in an int; it has
to be a constant expression.

That's quite reasonable, but how reasonable is "the value of pi calculated
to as many digits as this implementation supports"? This idea clearly does
not generalize.

For the specific example I mentioned, some implementations define a constant
M_PI which evaluates to a double representing some approximation of pi. I
don't know if it's governed by any standard (not the C standard, in any
case), and even if it is that may not guarantee that the maximum precision
be used, and even if it *does* it may not be practical to provide such a
definition, since it's impossible to implement portably. (Many C library
implementations are at least semi-portable.)
So it would have to be "added". An arduous process at best.
I don't know. C++ did it. Is using C++ an option for you?

C++ has different semantics, where code like

int foo(void);
const int x = foo();

is allowed. (Note that "foo()" is still not a constant expression here, nor
is "x"; the value of x is just read-only after initialization.)
Right, but nothing portable, unfortunately.
If you're satisfied with a read-only runtime value, then it's easy enough to
implement in C: just write a function that returns the value, or have a
doubly-const pointer to it (the latter can be subverted by people who really
want to, of course, but not silently).

static double const_pi_intern;
const double * const const_pi = &const_pi_intern;

void initialize_const_pi(void) {
    /* ... */
}

or

double const_pi(void) {
    static double const_pi_intern = 0.0;
    if (const_pi_intern == 0.0) {
        /* Initialize */
    }
    return const_pi_intern;
}

Neither solution gives you a compile-time constant and both may involve some
runtime overhead (which the compiler may be able to optimize away), so then
the question becomes: what's more important?
So far, for specific platforms, I'm initializing an array of
sizeof(long double) bytes with the byte codes obtained from the
hardware instruction. This makes way too many assumptions to be
implemented in a portable way. Just with GCC, the -m96bit-long-double
and -m128bit-long-double options complicate things for i386 (although
the extra zeros in the 128-bit value shouldn't interfere with the 96-bit
value).
Just mark the function that has to initialize the value as non-portable and
requiring a separate implementation on each platform; you can fill the value
with a nonsense constant and barf on startup if the initialization function
isn't implemented.

You seem to be trying to solve the problem of "how to implement something
that cannot be done portably in a portable way". You don't; you isolate it,
flag it as a requirement and move on.

In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem too
much of a burden on would-be porters.
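That porting requirement might look like the following header fragment (all names hypothetical); an unported platform fails loudly at compile time instead of silently using a bad value:

```c
/* port_pi.h -- hypothetical per-platform header a porter must fill in */
#if defined(__GNUC__) && defined(__i386__)
/* 80-bit extended: about 19 significant decimal digits survive */
#define PORT_LDBL_PI 3.14159265358979323846L
#elif defined(_MSC_VER)
/* long double is plain double here; extra digits are rounded away */
#define PORT_LDBL_PI 3.14159265358979323846L
#else
#error "port_pi.h: define PORT_LDBL_PI for this platform"
#endif
```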

S.
 

Jack Klein

The code block below initializes a read/write variable (usually in .bss) to
the value of pi. One problem, of many, is that any linked compilation unit
may change the global variable. Adjusting

// rodata
const long double const_pi=0.0;

lines to

// rodata
const long double const_pi=init_ldbl_pi();

would add additional protection, but it is not legal C and, rightly so,
fails on GCC/i386.

Do any C standards define a means to initialize constants to values
obtained from hardware, or do the total number of constants and/or
cross-compiling prohibit it completely (although, when cross-compiling,
the compiler could create a value using the resources available, i.e.
emulation)?

I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80-bit value of the fldpi
instruction, and when compiling on non-i386 systems for i386 targets, an
emulated value is calculated from available resources.

BEGIN CODE
#include <stdio.h>

// prototype
long double init_ldbl_pi(void);

// rodata
const long double const_pi=0.0;

As others have pointed out, you just can't do this in C. But there
are two workarounds, this one if you can live with a level of pointer
redirection and an initialization function call in main:

file: global_const.h:
extern const long double *const_pi;
====

file: global_const.c:
const long double *const_pi;
static long double const_pi_value;

void init_const_pi(void)
{
    const_pi_value = call_some_func();
    const_pi = &const_pi_value;
}

....of course, you must be sure to call the initialization function
before dereferencing the pointer to get the value.

There's another way that eliminates the initialization function,
somewhat reminiscent of the C++ singleton pattern:

file: global_const.h:
long double get_const_pi(void);
====

file: global_const.c:
static long double const_pi_value;

long double get_const_pi(void)
{
    if (const_pi_value == 0.0)
    {
        const_pi_value = call_some_func();
    }
    return const_pi_value;
}

In both cases, the initialization function can perform some additional
checking on the first call to decide whether to call some function or
to substitute a constant.
 

Thad Smith

newsposter0123 said:
I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80-bit value of the fldpi
instruction, and when compiling on non-i386 systems for i386 targets, an
emulated value is calculated from available resources.

The straightforward way to access the constant pi is to define a macro
with enough digits to satisfy any potential target. For targets with
less precision, the number will be rounded to the implementation's
approximation for the specified type. You could initialize a const
variable with that value if you wish, but there is no particular
advantage to doing that, as opposed to using a defined constant.

In this case I think the simplest way works best. It is
standard-conforming, obvious, (almost always) the most accurate, and the
most efficient.
 

newsposter0123

Skarmander said:
That's not enough, I'm afraid. The implementation would also have to fix
up any evaluations involving the not-so-constant expression as if they
took place at compile time. "A constant expression can be evaluated
during translation rather than runtime, and accordingly may be used in
any place that a constant may be." This is what "constant" actually
means in C: an expression that can be evaluated at translation time.

This is why it's impossible for a compiler to allow
const long x = foo();
with foo() an arbitrary function, because to solve this in general, the
compiler would have to be capable of deferring translation of the entire
program! A C *interpreter* could do it, but that's probably not what
you're looking for.

A compiler could provide a LDBL_PI predefined macro (similar to
__LINE__, __FUNCTION__, etc.), whose value was (1) based on the actual
hardware value, if compiling on a system that provides it, or, (2)
based on a stored value, if one was available for the target, or, (3)
based on best approximation, calculated from available resources. With
the number of constants available in the universe, this could lead to
considerable bloat. The total number is reasonable if limited to CPU
hardware supplied constants.
I now have some idea what you're getting at, but it just illustrates why
what you want is impractical.
Well, the
const long double ldbl_pi=some_function();
part definitely is.
I don't know. C++ did it. Is using C++ an option for you?
Yes, but I would like to remain with C.
If you're satisfied with a read-only runtime value
That's the idea.
then it's easy
enough to implement in C
static double const_pi_intern;
const double * const const_pi = &const_pi_intern;

void initialize_const_pi() {
/* ... */
}
or

double const_pi() {
static double const_pi_intern = 0.0;
if (const_pi_intern == 0.0) {
/* Initialize */
}
return const_pi_intern;
}

What I am doing now is something like this:

#include <stdio.h>

#if _MSC_VER >= 1200
const union {
    unsigned char v[sizeof(long double)];
    long double d;
} g_ldbl_pi = {0x18,0x2D,0x44,0x54,0xFB,0x21,0x09,0x40};
#define ldbl_pi g_ldbl_pi.d
#elif (defined __GNUC__) && (defined __i386__)
const union {
    unsigned char v[sizeof(long double)];
    long double d;
} g_ldbl_pi =
{0x35,0xc2,0x68,0x21,0xa2,0xda,0x0f,0xc9,0x00,0x40,0xea,0xbf};
#define ldbl_pi g_ldbl_pi.d
#else
const long double ldbl_pi = 3.14;
#endif

/* main function */
int main(void) {
    printf("%Le\n", ldbl_pi);
    return 0;
}
In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem
too much of a burden on would-be porters.
Or even for addition to compilers.
 

Walter Roberson

newsposter0123 wrote:
Or even for addition to compilers.

You've been talking about hardware PI and so on, but you have
neglected to discuss rounding modes. If a simple #define is not
sufficient for your purposes, then chances are that a single PI
is not sufficient for your purpose: you would likely want
"PI in the current rounding mode". Which becomes more
problematic as a compile time constant if rounding modes can
be changed by program action.
 

Skarmander

newsposter0123 said:
Or even for addition to compilers.
Sure. And now I want e, Planck's constant and the square root of 12 to some
platform-defined precision, and the guy behind me has a whole class of
expressions he'd like to be evaluated at compile time...

It has to stop somewhere. Like I said, M_PI is provided on many platforms,
though it doesn't come with any guarantees as far as I'm aware, and it
usually has double precision, not long double precision.

S.
 

newsposter0123

Walter said:
You've been talking about hardware PI and so on, but you have
neglected to discuss rounding modes.

This could be a prolonged discussion on a math library group, but C
provides simple long double operators (*, /, etc.). Presumably the
constants would be implemented for the target/mode used for these
operations. On the x87, the constant value is independent of the
rounding/chopping mode, "hardwired in", and has full accuracy. I'm not
sure about other hardware.
 

newsposter0123

Skarmander said:
Sure. And now I want e, Planck's constant and the square root of 12 to
some platform-defined precision, and the guy behind me has a whole class
of expressions he'd like to be evaluated at compile time...

It has to stop somewhere. Like I said, M_PI is provided on many
platforms, though it doesn't come with any guarantees as far as I'm
aware, and it usually has double precision, not long double precision.
I'm beginning to think that using available math libraries is the best
solution if doubles do not work for a specific application. Long
doubles just do not have the same portability yet.
 

Ben Pfaff

newsposter0123 said:
I'm beginning to think that using available math libraries is the best
solution if doubles do not work for a specific application. Long
doubles just do not have the same portability yet.

Long double has been in the C standard since 1989. If they
aren't portable now, by your standards, then they may never be.
 

jacob navia

Skarmander said:
Sure. And now I want e, Planck's constant and the square root of 12 to
some platform-defined precision, and the guy behind me has a whole class
of expressions he'd like to be evaluated at compile time...

It has to stop somewhere. Like I said, M_PI is provided on many
platforms, though it doesn't come with any guarantees as far as I'm
aware, and it usually has double precision, not long double precision.

S.

Yes, and that can be a problem. One of the ways to solve that is

#define M_PI 3.14159265358979323846
#define M_PIL 3.1415926535897932384626433832795029L

Since those identifiers are not standard, it is difficult to propose
this. In lcc-win32 this is only defined if you are NOT in -ansic
mode.

But obviously this MUST stop somewhere as you said. One of the
best solutions would be to define all constants in long double
precision:
#define M_PI_2 1.5707963267948966192313216916397514L /* pi/2 */
but that could mean that expressions are promoted to long double
precision and evaluated in higher precision.

To avoid that, you can write ((double)M_PI_2), but this is difficult
too, since many compilers (lcc-win32 included) sometimes do not respect
this cast.

Yes, numerical analysis and numerical software are quite a mess.

jacob
 
