Simple Casting Question


alex.j.k2

Hello all,


I have "PRECISION" defined in the preprocessor code
and it could be int, float or double, but I do not know in the
code what it is.

Now if I want to assign zero to a "PRECISION" variable,
which of the following lines are correct:


PRECISION myVar = (PRECISION) 0; /* One */

PRECISION myVar = (PRECISION) 0.0; /* Two */

PRECISION myVar = 0; /* Three */

PRECISION myVar = 0.0; /* Four */

Thank you in advance,
Alex
 

Amandil

Hello all,

I have "PRECISION" defined in the preprocessor code
and it could be int, float or double, but I do not know in the
code what it is.

Now if I want to assign zero to a "PRECISION" variable,
which of the following lines are correct:

PRECISION myVar = (PRECISION) 0; /* One */

PRECISION myVar = (PRECISION) 0.0; /* Two */

PRECISION myVar = 0; /* Three */

PRECISION myVar = 0.0; /* Four */

Thank you in advance,
Alex

You could try the different combinations and see which gives you an
error, and which works properly. For example, assuming PRECISION is a
float:
1) float myVar = (float) 0;   /* cast not needed, the compiler does it automatically */
2) float myVar = (float) 0.0; /* 0.0 defaults to double, cast doesn't hurt */
3) float myVar = 0;           /* Compiler performs cast on its own */
4) float myVar = 0.0;         /* 0.0 defaults to double, but this is okay */

Now, assuming PRECISION is an integral type, such as int:
1) int myVar = (int) 0;   /* A waste of typing time */
2) int myVar = (int) 0.0; /* The compiler can do this at compile time, but bad idea */
3) int myVar = 0;         /* That's how we usually do it */
4) int myVar = 0.0;       /* That one is a compiler error */

So method 4 is problematic, method 3 is the least typing, methods 1
and 2 are okay too.

Try this on your own; I hope I'm not wrong.

-- Marty Wolfe

"Very simple. This is method three." -- Alfred Hitchcock, "Three
Ways to Rob a Bank"
 

Keith Thompson

I have "PRECISION" defined in the preprocessor code
and it could be int, float or double, but I do not know in the
code what it is.

Now if I want to assign zero to a "PRECISION" variable,
which of the following lines are correct:


PRECISION myVar = (PRECISION) 0; /* One */

PRECISION myVar = (PRECISION) 0.0; /* Two */

PRECISION myVar = 0; /* Three */

PRECISION myVar = 0.0; /* Four */

All four are correct. The third is preferred, IMHO.

It does seem odd that PRECISION can be either an integer type or a
floating-point type. Integer and floating-point types behave in very
different, and sometimes quite subtly different, ways. I would think
that writing code that doesn't know whether it's dealing with integers
or floating-point values would be difficult and error-prone.
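
For what it's worth, a minimal sketch of the kind of setup being
described (the PRECISION_USE_FLOAT/PRECISION_USE_DOUBLE switches are
made-up names, not anything from the original post):

#ifdef PRECISION_USE_FLOAT
#define PRECISION float
#elif defined(PRECISION_USE_DOUBLE)
#define PRECISION double
#else
#define PRECISION int
#endif

PRECISION myVar = 0;   /* option three: valid for all three types */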
 

Keith Thompson

Amandil said:
You could try the different combinations and see which gives you an
error, and which works properly. For example, assuming PRECISION is a
float:
1) float myVar = (float) 0;   /* cast not needed, the compiler does it automatically */
2) float myVar = (float) 0.0; /* 0.0 defaults to double, cast doesn't hurt */
3) float myVar = 0;           /* Compiler performs cast on its own */
4) float myVar = 0.0;         /* 0.0 defaults to double, but this is okay */

Now, assuming PRECISION is an integral type, such as int:
1) int myVar = (int) 0;   /* A waste of typing time */
2) int myVar = (int) 0.0; /* The compiler can do this at compile time, but bad idea */
3) int myVar = 0;         /* That's how we usually do it */
4) int myVar = 0.0;       /* That one is a compiler error */

So method 4 is problematic, method 3 is the least typing, methods 1
and 2 are okay too.

Try this on your own; I hope I'm not wrong.

I'm afraid you are, which you would have learned if you had tried the
code yourself before posting it.

The casts are unnecessary in any case. Any arithmetic type can be
implicitly converted to any other arithmetic type. In all of these:

int x = (int)0;
int x = (int)0.0;
float x = (float)0;
float x = (float)0.0;

the cast is entirely superfluous; either the expression is already of
the target type, or the conversion will be done implicitly and the
cast is unnecessary. (Note that "cast" refers to the explicit
operator that specifies a conversion. There's no such thing as an
"implicit cast"; it's an "implicit conversion".)

All casts should be viewed with suspicion. There are definitely cases
where they're necessary and appropriate, but it's usually better to
rely on the implicit conversions defined by the language (since the
compiler will guarantee that the conversion is to the correct target
type).

And yes, this:

int x = 0.0;

is perfectly legal; the double value 0.0 is implicitly converted to
the int value 0.
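
A quick check (a sketch) confirms it:

#include <stdio.h>

int main(void)
{
    int x = 0.0;     /* double 0.0 implicitly converted to int 0 */
    float f = 0;     /* int 0 implicitly converted to float */
    printf("%d %f\n", x, f);   /* prints: 0 0.000000 */
    return 0;
}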
 

Bart

Hello all,

   I have "PRECISION" defined in the preprocessor code
and it could be int, float or double, but I do not know in the
code what it is.

   Now if I want to assign zero to a "PRECISION" variable,
which of the following lines are correct:

   PRECISION myVar = (PRECISION) 0; /* One */

   PRECISION myVar = (PRECISION) 0.0; /* Two */

   PRECISION myVar = 0; /* Three */

   PRECISION myVar = 0.0; /* Four */

For setting to zero, no problems; all do the same, and (3) is
shortest.

But for setting it to 9.82, for example, only (2) and (4) can be used. And
when PRECISION is int, you may find it truncates rather than rounds (9.82
becomes 9, not 10).

Also, determining a printf/scanf format code for PRECISION could be
awkward.
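
One common workaround (a sketch; PRECISION_FMT is a made-up companion
name) is to let the same preprocessor switch define a format string
alongside the type, relying on string-literal concatenation:

#include <stdio.h>

#define PRECISION double
#define PRECISION_FMT "%f"     /* would be "%d" when PRECISION is int */

int main(void)
{
    PRECISION x = 9.82;
    printf("x = " PRECISION_FMT "\n", x);
    return 0;
}

scanf is worse: float wants "%f" but double wants "%lf", so input
would need a separate format macro.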
 

alex.j.k2

First of all, thank you to all that answered.

All four are correct. The third is preferred, IMHO.

This is the consensus.
It does seem odd that PRECISION can be either an integer type or a
floating-point type. Integer and floating-point types behave in very
different, and sometimes quite subtly different, ways. I would think
that writing code that doesn't know whether it's dealing with integers
or floating-point values would be difficult and error-prone.

I have a function that does large amounts of computations
(divisions, square roots, etc.) for large datasets. For efficiency
reasons I allow the option to format the data differently.
I also have a large dataset of integers which I have to feed to the
function.

I do make sure in the preprocessor code that all casting is done
only in the form (int to float), (float to double), (int to double),
or just casting identical types, so I hope that I can avoid some
errors that way.

I am curious about the subtle and not-so-subtle bugs introduced
by overenthusiastic use of casting.
Do you have a reference/link/short paragraph related to this theme?

Thank you again for your answers,
Alex
 

Amandil

I'm afraid you are, which you would have learned if you had tried the
code yourself before posting it.

The casts are unnecessary in any case. Any arithmetic type can be
implicitly converted to any other arithmetic type. In all of these:

int x = (int)0;
int x = (int)0.0;
float x = (float)0;
float x = (float)0.0;

the cast is entirely superfluous; either the expression is already of
the target type, or the conversion will be done implicitly and the
cast is unnecessary. (Note that "cast" refers to the explicit
operator that specifies a conversion. There's no such thing as an
"implicit cast"; it's an "implicit conversion".)

All casts should be viewed with suspicion. There are definitely cases
where they're necessary and appropriate, but it's usually better to
rely on the implicit conversions defined by the language (since the
compiler will guarantee that the conversion is to the correct target
type).

And yes, this:

int x = 0.0;

is perfectly legal; the double value 0.0 is implicitly converted to
the int value 0.

You're right. I didn't take the time to test it before I posted. And I
was surprised to see that assigning a float value to an int resulted
in an implicit conversion, but you're right about that, too. I guess I
still have a lot to learn.

-- Marty Amandil Wolfe (I know Mom uses my full name only when I'm in
trouble...)
 

Richard Bos

Keith Thompson said:
The casts are unnecessary in any case. Any arithmetic type can be
implicitly converted to any other arithmetic type. In all of these:

int x = (int)0;
int x = (int)0.0;
float x = (float)0;
float x = (float)0.0;

the cast is entirely superfluous; either the expression is already of
the target type, or the conversion will be done implicitly and the
cast is unnecessary. (Note that "cast" refers to the explicit
operator that specifies a conversion. There's no such thing as an
"implicit cast"; it's an "implicit conversion".)
int x = 0.0;

is perfectly legal; the double value 0.0 is implicitly converted to
the int value 0.

One important thing to note here is that other values - that is, values
that cannot be represented in the target type - may have unexpected and
indeed highly undesired effects; another important thing to note is that
in the case of 0 (or 0.0), that specific value must, obviously, be
representable in all arithmetic types.

Richard
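
To make that concrete, a sketch (the first conversion is undefined
behaviour precisely because the value is not representable; the second
stays in range but silently loses precision):

double big = 1e100;
int i = big;          /* not representable in int: undefined behaviour */

long n = 16777217;    /* 2^24 + 1 */
float f = n;          /* in range, but beyond float's 24-bit mantissa:
                         f compares equal to 16777216 */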
 

vippstar

Whenever <stdlib.h> was not #included and malloc was used,
C89 compilers would complain, and a common recommendation was to cast
the return value of malloc. Such a cast would suppress that warning,
but yield undefined behavior.

The proper advice is to #include <stdlib.h> prior to using malloc.
That also suppresses whatever warning
would result from the non-inclusion, but helps make for correct code.
It is also possible to include just the prototype of malloc,
void *malloc(size_t);, and not <stdlib.h>.
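
For completeness, a sketch of the recommended pattern:

#include <stdlib.h>   /* declares void *malloc(size_t); */

int main(void)
{
    double *p = malloc(100 * sizeof *p);   /* no cast needed in C */
    if (p == NULL)
        return EXIT_FAILURE;               /* allocation failed */
    /* ... use p ... */
    free(p);
    return 0;
}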
 

Barry Schwarz

Whenever <stdlib.h> was not #included and malloc was used,
C89 compilers would complain, and a common recommendation was to cast
the return value of malloc. Such a cast would suppress that warning,
but yield undefined behavior.

The proper advice is to #include <stdlib.h> prior to using malloc.
That also suppresses whatever warning

Eliminates, not suppresses.
 

Barry Schwarz

First of all, thank you to all that answered.



This is the consensus.

You obviously read enough of the messages to conclude this.

snip
  I do make sure in the preprocessor code that all casting is done
only in the form (int to float), (float to double), (int to double),
or just casting identical types, so I hope that I can avoid some
errors that way.

How did you avoid the equally emphatic consensus that superfluous
casting is a very bad idea? None of the conversions you describe
needs a cast as long as there are prototypes in scope when you call
the functions involved.
 

alex.j.k2

How did you avoid the equally emphatic consensus that superfluous
casting is a very bad idea? None of the conversions you describe
needs a cast as long as there are prototypes in scope when you call
the functions involved.

What I meant by my paragraph above is that I force the user to
define the types such that most type conversions / casts are of the
kind described above. Based on the replies here I do plan to rely
more on implicit conversions rather than casting.

I would still like to see more examples of the kinds of errors
introduced by casting.
Relying on expert opinion is wise, but learning from examples is
more instructive.

Alex
 

Nick Keighley

First of all, thank you to all that answered.


This is the consensus.

he *could* be using floating point as large integers.

  I have a function that does large amounts of computations
(divisions, square roots, etc.) for large datasets.

not so good...

#include <math.h>
#include <stdio.h>

int main (void)
{
    double d1 = 5, d2;
    int i1 = 5, i2;

    d2 = sqrt(d1);
    i2 = sqrt(i1);   /* double result truncated to int: 2.236... becomes 2 */

    printf ("square root 5 is %f\n", d2);
    printf ("square root 5 is %d\n", i2);

    return 0;
}

I submit that if you did a series of computations
using d2 and then i2, you'd get wildly diverging results.

For efficiency reasons

more sins have been committed in the name of efficiency...
Do you mean faster? How do you know the floating point
version is too slow? Have you measured it?

I allow the option to format the data differently.
I also have a large dataset of integers which I have to feed to the
function.

  I do make sure in the preprocessor code that all casting is done
only in the form (int to float), (float to double), (int to double),

I don't see how this helps. If you don't know the type, how can you
ensure this?

or just casting identical types, so I hope that I can avoid some
errors that way.

why would you cast identical types?

  I am curious about the subtle and not-so-subtle bugs introduced
by overenthusiastic use of casting.
  Do you have a reference/link/short paragraph related to this theme?

there's the malloc example.

I once saw this code:

void f (char str)
{
    strcpy (str, "bing");
}

It failed to compile. So the programmer fixed it thus:-

void f (char str)
{
    strcpy ((char*)str, "bing");
}
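
(The real fix, of course, is to change the parameter to a pointer; a
sketch:)

#include <string.h>

void f (char *str)          /* char*, not char */
{
    strcpy (str, "bing");   /* caller must supply at least 5 bytes */
}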


To summarise:

1. I think your PRECISION macro is a bad idea
2. avoid casting as much as possible


--
Abuse of casting leads to abuse of the type system
leads to sloppy programming leads to
unreliable, even undefined, behaviour.
And that is the path to the dark side....
Richard Bos/John Hascall
 

alex.j.k2

not so good...

#include <math.h>
#include <stdio.h>

int main (void)
{
    double d1 = 5, d2;
    int i1 = 5, i2;

    d2 = sqrt(d1);
    i2 = sqrt(i1);   /* double result truncated to int: 2.236... becomes 2 */

    printf ("square root 5 is %f\n", d2);
    printf ("square root 5 is %d\n", i2);

    return 0;
}

I submit that if you did a series of computations
using d2 and then i2, you'd get wildly diverging results.

There is not a single place in my code where the
returned value of sqrt() can be assigned to an integer.
more sins have been committed in the name of efficiency...
Do you mean faster? How do you know the floating point
version is too slow? Have you measured it?

It's mainly about the size of the input data. Some of
my datasets are large enough that storing them as float or int
is significantly more efficient than storing them as double.

I don't see how this helps. If you don't know the type, how can you
ensure this?

It helps, for instance, by avoiding the need for casting
and relying instead on implicit conversions. As another example,
all the types that may store square roots are forced to be
either float or double.

I do know the type in the preprocessor code. I just want
to allow the ability to change the type in said preprocessor
code.
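
A sketch of what that enforcement might look like (ROOT_TYPE and
PRECISION_IS_INTEGRAL are names made up for illustration):

#define PRECISION int            /* chosen in one place */
#define PRECISION_IS_INTEGRAL    /* set alongside it when PRECISION is int */

#ifdef PRECISION_IS_INTEGRAL
#define ROOT_TYPE double         /* square roots never land in an integer */
#else
#define ROOT_TYPE PRECISION
#endif
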
there's the malloc example.

I once saw this code:

void f (char str)
{
    strcpy (str, "bing");
}

It failed to compile. So the programmer fixed it thus:-

void f (char str)
{
    strcpy ((char*)str, "bing");
}

That's funny. Thanks for the examples.
To summarise:

1. I think your PRECISION macro is a bad idea

Well, some think coding in C is a bad idea. I think
the macro is useful for my purposes.
2. avoid casting as much as possible

That I will do.

Alex
 

Keith Thompson

It is also possible to include just the prototype of malloc,
void *malloc(size_t);, and not <stdlib.h>.

Yes, it's possible, but there's no advantage over ``#include <stdlib.h>''.
 

Keith Thompson

What I meant by my paragraph above is that I force the user to
define the types such that most type conversions / casts are of the
kind described above. Based on the replies here I do plan to rely
more on implicit conversions rather than casting.

I would still like to see more examples of the kinds of errors
introduced by casting.
Relying on expert opinion is wise, but learning from examples is
more instructive.

In most cases where a cast is (mis-)used, assuming the writer got the
type right, the code would mean exactly the same thing without the
cast, because the same conversion will be done implicitly. In such
cases, the cast does nothing but add clutter and introduce a risk of
error that wouldn't be there without the cast. (I suggest, therefore,
that the burden is on the advocate of a cast to demonstrate that it's
useful.)

An example where a cast can introduce an error:

x = (float)y;

Assume y is of some arithmetic type. If x is of type float, this is
ok (but unnecessary). But suppose x is of type double (maybe you
changed the declaration a year after you wrote the original code).
Then the explicit conversion to float will lose precision -- and it
will do so without a peep of complaint from the compiler. If you had
just written:

x = y;

then the implicit conversion would have done the right thing.

A cast tells the compiler, "Shut up, I know what type I want" -- and
sometimes you don't, especially as your code is maintained over time.
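
A complete toy demonstration of that hazard (a sketch; imagine x was
changed from float to double during maintenance):

#include <stdio.h>

int main(void)
{
    double y = 3.14159265358979;
    double x1 = (float)y;   /* cast silently rounds y to float precision */
    double x2 = y;          /* implicit conversion keeps full precision */

    printf("with cast:    %.15f\n", x1);   /* two different values */
    printf("without cast: %.15f\n", x2);
    return 0;
}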

C90's "implicit int" rule presents another problem. For example:

some_type *ptr = (some_type*)malloc(sizeof(some_type));

If you forgot the required ``#include <stdlib.h>'', then the compiler
(under C90 rules, changed in C99) *assumes* that malloc is a function
returning int. It actually returns void*. One possibility is that
the call will interpret the void* result as if it were of type int,
and then convert the int value to some_type* (which may or may not
"work" depending on the implementation). Or, if integers and pointers
are returned in different registers, you might get complete garbage.

Most compilers these days will warn about this error if you ask them
nicely; and if you just omit the cast, the assignment of an int to
a pointer is illegal, so a diagnostic is required.

Of course there are cases where casts are necessary and appropriate.
If you need to convert a pointer to an integer, or vice versa, or
between two different pointer types, you need a cast, since those
conversions aren't done implicitly (other than the special case of a
null pointer constant). Such conversions are implementation-defined,
so you'd better know what you're doing. If you're passing an argument
to a variadic function, the compiler doesn't know the target type,
so you have to specify it yourself:

printf("sizeof(double) = %d\n", (int)sizeof(double));

But in these cases, you *have* to get the target type right; the
compiler won't diagnose an error.
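
Putting the legitimate cases together in a toy program (a sketch):

#include <stdio.h>

int main(void)
{
    int i = 42;
    unsigned char *b = (unsigned char *)&i;   /* pointer-to-pointer: cast required */

    printf("sizeof(int) = %d\n", (int)sizeof(int));   /* variadic: spell out the type */
    printf("first byte  = %u\n", (unsigned)b[0]);     /* value depends on byte order */
    return 0;
}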

See also the comp.lang.c FAQ, <http://www.c-faq.com/>.
 

Keith Thompson

CBFalconer said:
I believe this is defined to lead to undefined behaviour (which may
work as you want it). In this case it will usually work, but no
guarantees.

No, if you provide a correct prototype for a standard function, it
will work correctly. (If you get the prototype wrong, of course, the
behavior is undefined.)

Having said that, I can't think of any good reason to write the
prototype yourself rather than getting it from the standard header.
 

Ian Collins

Keith said:
No, if you provide a correct prototype for a standard function, it
will work correctly. (If you get the prototype wrong, of course, the
behavior is undefined.)
Is it legal for an implementation to include implementation-specific
magic (calling conventions, for example) in its standard library
function declarations?
 
