Ralf,
Regarding numerical types, in my view, casts fall into one of two
categories:
1. Casts that change the value of an object
2. Casts that are actually redundant, but get rid of compiler/lint
warnings
As an example, consider this code:
unsigned int ui;
...
unsigned char uc = (unsigned char)ui;
Here, it is not clear from the code what the developer wanted to
achieve:
1. It is possible that ui exceeds the unsigned char range and the
programmer only wants to look at the lower (e.g. 8) bits. Basically,
he wants to cut off the upper, significant bits and hence wants to
change the original value (see the short example after this list).
2. The developer knows that ui cannot hold values that exceed the
unsigned char range, so assigning ui to uc is safe and doesn't lose
bits (i.e. the original value is preserved). He only casts to shut up
the compiler/lint.
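
To make case 1 concrete, here is a minimal, self-contained
illustration (assuming the common case of CHAR_BIT == 8):

#include <stdio.h>

int main(void)
{
    unsigned int ui = 0x1234u;
    /* Unsigned narrowing is well defined in C: the value is reduced
       modulo UCHAR_MAX + 1. With 8-bit chars, 0x1234 becomes 0x34. */
    unsigned char uc = (unsigned char)ui;
    printf("%#x\n", (unsigned int)uc); /* prints 0x34 */
    return 0;
}
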
Would it make sense to introduce cast macros that clearly indicate what
the programmer wants to do, as in:
#define VALUE_CAST(type, e) ( (type)(e) )
#define WARNING_CAST(type, e) ( (type)(e) )
In the code below, the purpose of the cast would be self-explanatory:
unsigned char uc = WARNING_CAST(unsigned char, ui);
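
For the deliberate truncation of case 1, the counterpart would read
(the variable name low_byte is mine, for illustration):

unsigned char low_byte = VALUE_CAST(unsigned char, ui); /* truncation intended */
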
Maybe WARNING_CAST could even be augmented with an assert that checks
whether the source value is in the range of the target type.
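
For illustration, here is one possible sketch; the name
CHECKED_WARNING_CAST and the explicit type_max parameter are my
invention, since a plain C macro cannot portably derive the range
from the type name alone:

#include <assert.h>
#include <limits.h>

/* Like WARNING_CAST, but additionally asserts (in debug builds) that
   the value actually fits in the target type. The comma operator
   keeps the whole thing usable as an expression. Note that e is
   evaluated twice, so it must be free of side effects; signed target
   types would also need a lower-bound check. */
#define CHECKED_WARNING_CAST(type, type_max, e) \
    ( assert((e) <= (type_max)), (type)(e) )

/* Usage: */
unsigned char uc = CHECKED_WARNING_CAST(unsigned char, UCHAR_MAX, ui);
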
Any comments?