Use a suffix or a type cast?

Hamish M

Hi, I am interested in opinions on this topic.

I have heard that a suffix is not a good solution and that type casts
are much better.

For example:

-----------------------------------------------------------------
#define MAX_UWORD (T_UWORD)65535

or

#define MAX_UWORD 65535u

-------------------------------------------------------------------

Where UWORD is unsigned short int.

What is your opinion? Or why would someone have said using a suffix is
no good?

For starters, I can see that using the suffix would convert MAX to
unsigned int and not unsigned short.

Thanks
 
Frederick Gotham

Hamish M posted:

What is your opinion? Or why would someone have said using a suffix is
no good?


unsigned long: 5UL
long: 5L
unsigned int: 5U (might be unsigned long though!)
int: 5 (might be long though!)
unsigned short: (unsigned short)5
short: (short)5
unsigned char: (unsigned char)5
signed char: (signed char)5
char: (char)5
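
(A minimal sketch of that last point, assuming a typical implementation
with 16-bit short and 32-bit int: no suffix yields a type narrower than
int, so a cast is the only way to get exactly unsigned short.)

#include <stdio.h>

int main(void)
{
    /* 5U has type unsigned int; (unsigned short)5 has type unsigned short */
    printf("%lu %lu\n",
           (unsigned long) sizeof 5U,
           (unsigned long) sizeof ((unsigned short) 5));
    return 0;
}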
 
Richard Heathfield

Hamish M said:
Hi, I am interested in opinions on this topic.

I have heard that a suffix is not a good solution and that type casts
are much better.

In general, casts are to be avoided. It is rarely correct to use a cast, and
the circumstances in which it /is/ correct are rarely those you would
expect. Suffixes are perfectly adequate to the task for which they are
designed.

What is your opinion? Or why would someone have said using a suffix is
no good?

I have no idea why anyone would try to dissuade you from using suffixes.

For starters, I can see that using the suffix would convert MAX to
unsigned int and not unsigned short.

By default, an integer constant has type int, unless it won't fit into an
int, in which case the following rule (3.1.3.2 in C89) applies:

"The type of an integer constant is the first of the corresponding
list in which its value can be represented. Unsuffixed decimal: int,
long int, unsigned long int; unsuffixed octal or hexadecimal: int,
unsigned int, long int, unsigned long int; suffixed by the letter u
or U: unsigned int, unsigned long int; suffixed by the letter l or
L: long int, unsigned long int; suffixed by both the letters u or U
and l or L: unsigned long int ."
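
(A quick way to see that rule in action; assuming 32-bit int and 64-bit
long, 2147483647 still fits in an int but 2147483648 does not, so the
latter takes the next type on the list and this prints "4 8".)

#include <stdio.h>

int main(void)
{
    printf("%lu %lu\n",
           (unsigned long) sizeof 2147483647,
           (unsigned long) sizeof 2147483648);
    return 0;
}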
 
Keith Thompson

Eric Sosman said:
No; always unsigned int.

5U is always unsigned int, but a decimal constant with a "U" suffix
can be any of unsigned int, unsigned long int, or unsigned long long
int (C99 only) depending on its value and the ranges of the types.

In other words, 5U may be unsigned long for sufficiently large values
of 5.

No; always (signed) int.

As above, this can be int, long int, or long long int for sufficiently
large values of 5.

Note that these are likely to be promoted to int or unsigned int
anyway, which is presumably why the language doesn't provide suffixes
for types shorter than int.
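
(To illustrate the promotion point, assuming unsigned short is narrower
than int: as soon as a short-typed operand takes part in arithmetic it
is promoted, so the sum below has type int and this prints "2 4" on a
typical system.)

#include <stdio.h>

int main(void)
{
    unsigned short us = 5;
    /* both operands are promoted to int before the addition */
    printf("%lu %lu\n",
           (unsigned long) sizeof us,
           (unsigned long) sizeof (us + (unsigned short) 5));
    return 0;
}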
 
Keith Thompson

Hamish M said:
Hi, I am interested in opinions on this topic.

I have heard that a suffix is not a good solution and that type casts
are much better.

For example:

-----------------------------------------------------------------
#define MAX_UWORD (T_UWORD)65535

or

#define MAX_UWORD 65535u

And how do we know that UWORD is unsigned short int? I believe you
when you say that it is, but it's not obvious to someone reading the
code, and it might be defined as something else in another version of
the program.

A cast lets you specify any integer type you like. A suffix only lets
you specify one of the predefined types.

On the other hand, integer constants are usually implicitly converted
to whatever type is necessary, so it's not usually important for the
constant to be of the exact correct type. It can matter if you're
passing it as an argument to a variadic function, but in that case you
should probably use a cast anyway (on the call, not on the
definition), and you have to keep promotions in mind (you can't
actually pass an unsigned short value to a variadic function).
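
(A small sketch of casting at the call rather than in the definition,
assuming MAX_UWORD is defined with the plain u suffix:)

#include <stdio.h>

#define MAX_UWORD 65535u

int main(void)
{
    /* printf is variadic, so the argument must match the conversion
       specifier exactly; the cast at the call site keeps the macro
       definition simple */
    printf("%lu\n", (unsigned long) MAX_UWORD);
    return 0;
}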

For an integer constant, the value is usually more important than the
type; the type can be imposed by the context in which it's used.

Finally, if I were going to use a cast in the macro definition, I'd
enclose the whole thing in parentheses. Rather than
#define MAX_UWORD (T_UWORD)65535
I'd write
#define MAX_UWORD ((T_UWORD)65535)
I'm not sure there's any context in which it would matter, but it's
much easier to add the parentheses than to prove they're not
necessary.
 
Andrew Poelstra

Keith Thompson said:
And how do we know that UWORD is unsigned short int? I believe you
when you say that it is, but it's not obvious to someone reading the
code, and it might be defined as something else in another version of
the program.

A cast lets you specify any integer type you like. A suffix only lets
you specify one of the predefined types.

On the other hand, integer constants are usually implicitly converted
to whatever type is necessary, so it's not usually important for the
constant to be of the exact correct type. It can matter if you're
passing it as an argument to a variadic function, but in that case you
should probably use a cast anyway (on the call, not on the
definition), and you have to keep promotions in mind (you can't
actually pass an unsigned short value to a variadic function).

For an integer constant, the value is usually more important than the
type; the type can be imposed by the context in which it's used.

Finally, if I were going to use a cast in the macro definition, I'd
enclose the whole thing in parentheses. Rather than
#define MAX_UWORD (T_UWORD)65535
I'd write
#define MAX_UWORD ((T_UWORD)65535)
I'm not sure there's any context in which it would matter, but it's
much easier to add the parentheses than to prove they're not
necessary.

printf ("%d\n", (int) sizeof MAX_UWORD);
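
(That line is one context where the outer parentheses do matter; a
sketch, assuming T_UWORD is a typedef for unsigned short, with the V1/V2
names made up here purely for illustration:)

typedef unsigned short T_UWORD;

#define MAX_UWORD_V1 (T_UWORD)65535     /* no outer parentheses */
#define MAX_UWORD_V2 ((T_UWORD)65535)   /* fully parenthesised  */

/* sizeof MAX_UWORD_V2 expands to sizeof ((T_UWORD)65535), which is fine;
   sizeof MAX_UWORD_V1 expands to sizeof (T_UWORD)65535, which the
   compiler reads as (sizeof (T_UWORD)) followed by a stray 65535,
   which is a syntax error. */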
 
Ian Collins

Hamish said:
Hi, I am interested in opinions on this topic.

I have heard that a suffix is not a good solution and that type casts
are much better.

For example:

-----------------------------------------------------------------
#define MAX_UWORD (T_UWORD)65535

or

#define MAX_UWORD 65535u

-------------------------------------------------------------------

Where UWORD is unsigned short int.

What is your opinion? Or why would someone have said using a suffix is
no good?

For starters, I can see that using the suffix would convert MAX to
unsigned int and not unsigned short.

As others have said, the conversion is usually implicit when the
constant is used.

If the constants don't have to be compile-time constants, you could simply use

const unsigned short maxUword = 65535;
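
(A small illustration of the "compile time" caveat, assuming C89/C99
rules: a const-qualified object is not a constant expression in C, so it
cannot appear where one is required.)

const unsigned short maxUword = 65535;

void classify(unsigned short v)
{
    switch (v) {
    case 65535:              /* a literal or macro constant is fine here */
        /* case maxUword: */ /* would not compile: not a constant expression */
        break;
    default:
        break;
    }
}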
 
Ben C

Hamish M said:
Hi, I am interested in opinions on this topic.

I have heard that a suffix is not a good solution and that type casts
are much better.

For example:

-----------------------------------------------------------------
#define MAX_UWORD (T_UWORD)65535

or

#define MAX_UWORD 65535u

-------------------------------------------------------------------

Where UWORD is unsigned short int.

What is your opinion? Or why would someone have said using a suffix is
no good?

I once worked on a library where we used a lot of doubles, and wrote
constants of the kind:

#define ONE 1.0

We already had a typedef:

typedef double Real;

Then we ported to a machine with fast single precision fp, but slow
software-only doubles, so we changed the typedef:

typedef float Real;

So far so good. But whenever we used the constants, since they were
double precision, we ended up with everything being promoted to double
and a lot of slow software double-precision computation which we didn't
want.

What we needed of course was:

#define ONE 1.0f

But much easier than faffing with macros to try and achieve that was:

#define ONE ((Real) 1.0)

Now all the constants automatically pick up the same type as the
typedef.

So, I'd say, when you want a constant of a type that you want to
typedef, it works well to use a cast in the macro rather than a suffix.
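
(A minimal sketch of why the cast form pays off, assuming Real is
currently typedef'd to float; the _PLAIN/_CAST names are made up here
just for illustration:)

typedef float Real;

#define ONE_PLAIN 1.0            /* double constant          */
#define ONE_CAST  ((Real) 1.0)   /* follows whatever Real is  */

Real scale(Real x)
{
    /* x * ONE_PLAIN promotes x to double and multiplies in double;
       x * ONE_CAST stays in float while Real is float */
    return x * ONE_CAST;
}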
 
Michael Mair

Ben said:
I once worked on a library where we used a lot of doubles, and wrote
constants of the kind:

#define ONE 1.0

We already had a typedef:

typedef double Real;

Then we ported to a machine with fast single precision fp, but slow
software-only doubles, so we changed the typedef:

typedef float Real;

So far so good. But whenever we used the constants, since they were
double precision, we ended up with everything being promoted to double
and a lot of slow software double-precision computation which we didn't
want.

What we needed of course was:

#define ONE 1.0f

But much easier than faffing with macros to try and achieve that was:

#define ONE ((Real) 1.0)

Now all the constants automatically pick up the same type as the
typedef.

So, I'd say, when you want a constant of a type that you want to
typedef, it works well to use a cast in the macro rather than a suffix.

The alternative: Whenever you create a typedef for a numeric type,
also provide the appropriate <TYPE>_C macro.

With
typedef float Real;
#define REAL_C(constant) constant##F
your symbolic constant is defined as
#define ONE REAL_C(1.0)
If you need the printf() family, appropriate length-modifier and
conversion-specifier macros should be defined as well.
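
(Putting that together, a sketch assuming Real is float; the PRI_REAL
name is made up here just for illustration:)

#include <stdio.h>

typedef float Real;
#define REAL_C(constant) constant##F    /* 1.0 -> 1.0F */
#define PRI_REAL "f"                    /* printf conversion for Real */

#define ONE REAL_C(1.0)

int main(void)
{
    Real x = ONE;
    /* float arguments are promoted to double in variadic calls, so "%f"
       covers Real whether it is float or double; scanf would need a
       separate macro ("%f" vs "%lf") */
    printf("%" PRI_REAL "\n", x);
    return 0;
}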

Cheers
Michael
 
