a = b*( (double)c/d);
This is a common way to avoid the truncation that occurs when an
int is divided by an int. For example, if c = 10 and d = 3,
c/d is 3, not 3.3333. That is the reason the variable c is cast to
double: the division is then done in floating point and the entire
expression yields a double, which, when multiplied by an int, again
gives a double. Finally the result is converted back to int when it
is assigned to a.
Don't confuse casts with conversions. Casts are explicit operators in the
source code e.g. (double) above is a cast operator. Casts can cause
conversions (and typically do). However not all conversions involve casts
e.g. the conversion of the result from double to int before it is written
to a is an example.
What I think is that it is wasteful to convert the variable c to double
if you are going to convert the result back to int anyway. I think this
gives the same answer as the all-int version a = (b*c)/d, with a, b, c, d
all ints. E.g. if b = 3, c = 10 and d = 3,
then a = 10.
That's possibly true except:
1. if b*c is greater than an int can represent you get undefined
   behaviour. Casting to long, or in C99 long long, may provide adequate
   range to avoid this.
2. Given your example values a = b*((double)c/d) gives a = 3*((double)10/3),
   or a = 3 * 3.33... Now 3.33... isn't representable exactly in
decimal or binary. So 3 * 3.33... may evaluate to 10, something
slightly larger than 10, or something slightly smaller than 10. If it
is slightly smaller then converting it to int will produce 9 as a
result, which is probably not what was wanted. Even if it happens to
work in this case there will be others where it fails.
a = (double)b*c / d would be better in this respect.
Moral: don't use floating point for integer operations unless you really
know what you are doing. Then you have to ask yourself how you could tell
if you know what you are doing.
Lawrence