Hi,
I'm writing code that I would like to compile in either single or
double precision, so I declare all my floating-point variables with a
typedef such as
//typedef float FLOAT;
typedef double FLOAT;
However, for expressions like
typedef float FLOAT;
FLOAT a, b;
a = 1.0;
b = 2.0 + a;
I get pages and pages of warnings about possible loss of data in the
conversion from double to float. Is there any way to parameterize
numerical constants like 1.0 so that they can easily be converted? Or
are my only options to suppress the warnings through compiler flags or
to cast every numerical constant?
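For reference, the casting option would look something like the
following (the macro name PR is just something I made up), which is
what I'm hoping to avoid:

typedef float FLOAT;
#define PR(x) ((FLOAT)(x))  /* explicit cast helper; 'PR' is my own name */

FLOAT a, b;

void example(void)
{
    a = PR(1.0);       /* explicit cast, so no double-to-float warning */
    b = PR(2.0) + a;
}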
I'm thinking of something like Fortran, where you can parameterize a
numerical constant as
1.0_pr
with '_pr' defined to be your desired precision.
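In C++ terms, I imagine something roughly like this would do it,
assuming C++11 user-defined literals are available (the '_pr' suffix
here is my own invention):

typedef float FLOAT;

// Sketch of a user-defined literal mirroring Fortran's 1.0_pr:
constexpr FLOAT operator"" _pr(long double x)
{
    return static_cast<FLOAT>(x);
}

FLOAT a = 1.0_pr;       // converted explicitly, so no truncation warning
FLOAT b = 2.0_pr + a;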
Thanks,
John