cast from function call of type int to non-matching type double

  • Thread starter: Martin Jørgensen

Martin Jørgensen

Hi,

Short question:

Any particular reason why I'm getting a warning here:
(cast from function call of type int to non-matching type double)


xdouble = (double)rand()/(double)RAND_MAX;


xdouble is of course of type double, so everything should be cast to
type double. Then I don't see why the compiler (gcc) complains. The
line should create a (type double) random number between 0 and 1, and
it would be nice to have this warning go away...


Best regards
Martin Jørgensen
 
Nelu

Martin Jørgensen said:
Hi,

Short question:

Any particular reason why I'm getting a warning here:
(cast from function call of type int to non-matching type double)


xdouble = (double)rand()/(double)RAND_MAX;


xdouble is of course of type double, so everything should be cast to
type double. Then I don't see why the compiler (gcc) complains. The
line should create a (type double) random number between 0 and 1, and
it would be nice to have this warning go away...

This warning shows up in gcc only when a specific compilation option
is set (I don't remember which one, something with -W...). It tells
you when you cast a function call to a different type, such as int to
double. I think it protects against int malloc() [it's void *malloc()
if the proper header is included] being cast to a pointer.
 
Robert Gamble

Martin said:
Hi,

Short question:

Any particular reason why I'm getting a warning here:
(cast from function call of type int to non-matching type double)


xdouble = (double)rand()/(double)RAND_MAX;


xdouble is of course of type double, so everything should be cast to
type double. Then I don't see why the compiler (gcc) complains. The
line should create a (type double) random number between 0 and 1, and
it would be nice to have this warning go away...

gcc has an option, -Wbad-function-cast, that will warn "whenever a
function call is cast to a non-matching type". Either don't use this
option, explicitly disable it using -Wno-bad-function-cast, or, better
yet, lose the superfluous cast:

xdouble = rand()/(double)RAND_MAX;

Robert Gamble
 
Martin Jørgensen

Robert said:
gcc has an option, -Wbad-function-cast, that will warn "whenever a
function call is cast to a non-matching type". Either don't use this
option, explicitly disable it using -Wno-bad-function-cast, or, better

Bad idea. I want it to catch "real errors", just not the casts I make
intentionally.
yet, lose the superfluous cast:

xdouble = rand()/(double)RAND_MAX;

Great (problem solved)! I just don't understand: I wouldn't complain
if I divided two numbers of type double and stored the result as a
type double - hence the cast. So rand() returns an integer, which is
divided by a type double. Why doesn't this give any warning...?

What exactly is a "non-matching type"?


Best regards
Martin Jørgensen
 
Keith Thompson

Martin Jørgensen said:
Short question:

Any particular reason why I'm getting a warning here:
(cast from function call of type int to non-matching type double)


xdouble = (double)rand()/(double)RAND_MAX;


xdouble is of course of type double, so everything should be cast to
type double. Then I don't see why the compiler (gcc) complains. The
line should create a (type double) random number between 0 and 1, and
it would be nice to have this warning go away...

There should be nothing wrong with that line; casting an int to double
is perfectly legitimate. I don't get a warning when I compile it
myself. Show us a complete program and the *exact* warning message.
 
Richard Heathfield

Martin Jørgensen said:
Hi,

Short question:

Any particular reason for why I'm getting a warning here:
(cast from function call of type int to non-matching type double)


xdouble = (double)rand()/(double)RAND_MAX;

Why are you casting in the first place? Why aren't you doing this:

xdouble = rand() / (RAND_MAX + 1.0);

instead?
 
Martin Jørgensen

Richard said:
Why are you casting in the first place? Why aren't you doing this:

xdouble = rand() / (RAND_MAX + 1.0);

I also do that now, since it doesn't give any warnings. But I did it
because I thought I wouldn't get any problems if I divided two
double-typed numbers/variables by each other.

So my idea was that it must be better to do this:

500.0 / 100000.0

Than this:

500 / 100000.0

But the compiler didn't share that logic with me :)


Best regards
Martin Jørgensen
 
Jack Klein

Bad idea. I want it to catch "real errors", just not the casts I make
intentionally.


Great (problem solved)! I just don't understand: I wouldn't complain
if I divided two numbers of type double and stored the result as a
type double - hence the cast. So rand() returns an integer, which is
divided by a type double. Why doesn't this give any warning...?

You cannot divide an int by a double in C. The language does not
allow it. When you think you are dividing (or adding, subtracting,
multiplying, and so on) two different arithmetic types, something else
is really happening.

C has conversions from one type to another. Some conversions are
automatic, and will happen whenever the expression calls for them.
The most common of these are what are called "the usual arithmetic
conversions" in the C standard.

When an expression or subexpression performs a binary operation on two
scalar values of different type, the lesser type is automatically
promoted to the greater type. No cast is required.

So if you have code like this:

double two_thirds;
two_thirds = 2 / 3.0;

....the integer constant '2' is automatically converted to type double
because the divisor is the double constant '3.0'.

Exactly the same thing happens in the expression:

rand() / (double)RAND_MAX;

Both operands are evaluated. The cast on the denominator makes it a
double value. This causes the compiler to automatically convert the
int returned by rand() to a double to do the division.
What exactly is a "non-matching type"?

Well, the type of the return value of rand() is an int. Any type
other than int would be a non-matching type.
 
Martin Jørgensen

Jack said:
On Sun, 18 Jun 2006 07:52:00 +0200, Martin Jørgensen


You cannot divide an int by a double in C. The language does not
allow it. When you think you are dividing (or adding, subtracting,
multiplying, and so on) two different arithmetic types, something else
is really happening.
Ok.

C has conversions from one type to another. Some conversions are
automatic, and will happen whenever the expression calls for them.
The most common of these are what are called "the usual arithmetic
conversions" in the C standard.

When an expression or subexpression performs a binary operation on two
scalar values of different type, the lesser type is automatically
promoted to the greater type. No cast is required.

Just to make sure it's clear:

When you write "the greater type", what is that exactly? The type for
which sizeof(double/int) is the largest? sizeof(double) = 8 on my
system and sizeof(int) = 4, so suppose (I don't know if it's possible)
two different types were both 4/8 bytes - what would the "greater
type" be?
So if you have code like this:

double two_thirds;
two_thirds = 2 / 3.0;

...the integer constant '2' is automatically converted to type double
because the divisor is the double constant '3.0'.

I must have remembered incorrectly, but I believe that I once tried to
do something where the result was an integer - at least, I wanted to
avoid the result being two_thirds = 0 (it should be 0.666666667)...
Exactly the same thing happens in the expression:

rand() / (double)RAND_MAX;

Both operands are evaluated. The cast on the denominator makes it a
double value. This causes the compiler to automatically convert the
int returned by rand() to a double to do the division.
Ok.



Well, the type of the return value of rand() is an int. Any type
other than int would be a non-matching type.

Now I get it.... It (the compiler) didn't look at the denominator... It
only looked at the numerator and thought: hey, you probably
shouldn't/don't want to cast the return type (int) from rand() to type
double, since I know it's type int and I'd like it to stay that type
no matter what kind of type it's divided by...


Best regards
Martin Jørgensen
 
Barry Schwarz

Bad idea. I want it to catch "real errors", but not those intentionally
casts I make.


Great (problem solved)! I just don't understand: I won't complain if I
divided two numbers of type double and store the result as a type double
-> therefore the cast. So, rand() returns an integer, which is divided
by a type double. This doesn't give any warning....?

The integer rand returned is first implicitly converted to double and
the division is performed using two doubles. You are not casting the
return from rand to double. Since an implicit conversion is not a
cast, the bad-function-cast option should not be involved.

Note that the compiler is free to generate any kind of diagnostic it
likes as long as it still generates the correct code for a
well-defined expression. It could still generate a warning of the type
"Hey, you are dividing an int by a double and I consider that to be an
operation of dubious value." It's a quality-of-implementation issue.
What exactly is a "non-matching type"?

Whatever your compiler decides it is.


Remove del for email
 
Barry Schwarz

Martin Jørgensen said:


Why are you casting in the first place? Why aren't you doing this:

xdouble = rand() / (RAND_MAX + 1.0);

instead?

Possibly because it gives a different answer.


Remove del for email
 
Keith Thompson

Martin Jørgensen said:
Now I get it.... It (the compiler) didn't look at the
denominator... It only looked at the numerator and thought: hey, you
probably shouldn't/don't want to cast the return type (int) from
rand() to type double, since I know it's type int and I'd like it to
stay that type no matter what kind of type it's divided by...

Which strikes me as a silly warning. There is nothing wrong with
converting the result of rand() to double.

<OT>
gcc's documentation says:

`-Wbad-function-cast (C only)'
Warn whenever a function call is cast to a non-matching type. For
example, warn if `int malloc()' is cast to `anything *'.

This seems to be intended to catch the error of calling malloc()
without a prototype in scope, an error that gcc is quite capable of
catching directly (the usual message is "warning: implicit declaration
of function `malloc'"). I wouldn't use that option myself.
</OT>
 
Richard Heathfield

Barry Schwarz said:
Possibly because it gives a different answer.

I know - but it's almost certainly the answer he actually needs, as opposed
to the one he thinks he needs. :)
 
Martin Jørgensen

Richard said:
Barry Schwarz said:
Possibly because it gives a different answer.

I know - but it's almost certainly the answer he actually needs, as
opposed to the one he thinks he needs. :)

Damn... I didn't see that the denominator was changed to + 1.0....
Why is that? Would this generate a number between [0;1[ ?


Best regards
Martin Jørgensen
 
Martin Jørgensen

Richard said:
Barry Schwarz said: -snip-

I know - but it's almost certainly the answer he actually needs, as opposed
to the one he thinks he needs. :)

Oh.... I see. I have no further questions - there's a lot about this
on Google Groups, so thanks for the help....


Best regards
Martin Jørgensen
 
Richard Heathfield

Martin Jørgensen said:
Richard said:
Barry Schwarz said:
Possibly because it gives a different answer.

I know - but it's almost certainly the answer he actually needs, as
opposed to the one he thinks he needs. :)

Damn... I didn't see that the denominator was changed to + 1.0....
Why is that? Would this generate a number between [0;1[ ?


It would generate a number in the range [0, 1) - that's a half-open
interval, so 0 is included in the range but 1 is not. Multiplying by n
gives you a number in the range 0 to n-1:

r = n * (rand() / (RAND_MAX + 1.0));

Adding 1 gives you a number in the range 1 to n, which is handy for dice and
so on:

d = n * (rand() / (RAND_MAX + 1.0)) + 1;

There are other common uses too, e.g.

int rrnd(int low, int high)
{
    if (low > high) { int t = low; low = high; high = t; }
    return (high - low + 1) * (rand() / (RAND_MAX + 1.0)) + low;
}

You never need a cast for any of them.
 
CBFalconer

Richard said:
Martin Jørgensen said:
.... snip ...
Damn... I didn't see that the denominator was changed to + 1.0....
Why is that? Would this generate a number between [0;1[ ?

It would generate a number in the range [0, 1) - that's a half-open
interval, so 0 is included in the range but 1 is not. Multiplying
by n gives you a number in the range 0 to n-1:

r = n * (rand() / (RAND_MAX + 1.0));

Adding 1 gives you a number in the range 1 to n, which is handy for
dice and so on:

d = n * (rand() / (RAND_MAX + 1.0)) + 1;

As far as ranges are concerned, many pseudo-random generators will
never generate the value 0 at all. In that case, dividing by
(double)RAND_MAX will give you a value in the range greater than 0
through 1.0. This may cure (or cause) some statistical anomalies.
 
