Integer Types


C man

Most people use int a lot, long sometimes, short very rarely, and char only
for strings.

How should I decide which integer type to use?
 

Lew Pitcher

Most people use int a lot,

because int stores most reasonable values
long sometimes,

because long stores very large values, and you sometimes need to store very
large values
short very rarely,

usually because there is a semantic or storage need for a short integer
(semantic, to say that the integer value is within a limited range around
0, or storage in that short is sometimes realized in a smaller storage
usage than int is)
and char only for strings.

Nonsense. Programs use char as a really short integer, or as a single value
of the implementation's character set. A char is never used "for strings";
an /array of char/ is what is used for "normal" strings, but you weren't
talking about arrays, you were talking about simple data types.
How should I decide which integer type to use?

By the type of integer you need to store in it.
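
For instance, a minimal sketch of those typical choices (the names and
values here are only illustrative):

#include <stdio.h>

int main(void)
{
    char letter = 'A';        /* one value from the character set */
    short offset = -100;      /* limited range; may save storage in arrays */
    int count = 1000;         /* the everyday workhorse */
    long big = 2000000000L;   /* long is guaranteed at least 32 bits */

    printf("%c %hd %d %ld\n", letter, offset, count, big);
    return 0;
}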

--
Lew Pitcher

Master Codewright & JOAT-in-training | Registered Linux User #112576
http://pitcher.digitalfreehold.ca/ | GPG public key available by request
---------- Slackware - Because I know what I'm doing. ------
 

Barry Schwarz

Most people use int a lot, long sometimes, short very rarely, and char only
for strings.

How should I decide which integer type to use?

Make life easy on yourself. Use int unless you KNOW why you should
use a different integer type.
 

C man

Make life easy on yourself. Use int unless you KNOW why you should
use a different integer type.

Thanks.

Wouldn't it just be better though if the sizes of the standard types were
precisely defined?
 

C man

No, not all hardware is created equal!

OK, but then all that happens is that some people use typedefs like int16
and int32 and then define these typedefs to be int, short, long, etc.
depending on what machine they're using. Other people just use int etc.
and then their code breaks when they port it to a different machine.
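
For illustration, that home-grown pattern might look like this (MACHINE_A
is a hypothetical per-machine macro, shown only to make the idea
concrete):

/* Pre-C99 style: pick the underlying type for each width by hand. */
#ifdef MACHINE_A            /* e.g. a machine with 16-bit int */
typedef int   int16;
typedef long  int32;
#else                       /* e.g. a machine with 32-bit int */
typedef short int16;
typedef int   int32;
#endif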
 

Ian Collins

C said:
OK, but then all that happens is that some people use typedefs like int16
and int32 and then define these typedefs to be int, short, long, etc.
depending on what machine they're using. Other people just use int etc.
and then their code breaks when they port it to a different machine.

That's one reason why we have standardised fixed width types.

If the code requires fixed width types, the only way to make it portable
is to use them. If they aren't required, don't use them.
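
With C99's <stdint.h> and <inttypes.h> the same intent can be written
portably; a minimal sketch:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int16_t sample = -12345;  /* exactly 16 bits, where provided */
    int32_t total  = 100000;  /* exactly 32 bits */

    /* <inttypes.h> supplies matching printf conversion macros. */
    printf("%" PRId16 " %" PRId32 "\n", sample, total);
    return 0;
}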
 

luserXtrog

Well, I forgot to return from main :p

You're covered. You did say "program similar to this".
Another variation would be to cast the result of sizeof
to int.
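
That variation might look like:

#include <stdio.h>

int main(void)
{
    /* The cast makes the argument an int, matching %d. */
    printf("int: %d byte(s)\n", (int)sizeof(int));
    return 0;
}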
 

Ian Collins

C said:
Well then what should the 64-bit type be on a machine that can support it?

Absent?

Portable applications that require 64-bit support normally have
configure options to select between real and simulated 64-bit types.
They then use functions/macros for 64-bit operations.

The GD library is a good example.
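
A sketch of the general idea (HAVE_LONG_LONG is a hypothetical
configure-style macro here, not GD's actual one):

#include <stdio.h>

#ifdef HAVE_LONG_LONG
typedef unsigned long long u64;   /* a real 64-bit type */
static u64 u64_add(u64 a, u64 b) { return a + b; }
#else
/* Simulated 64-bit value built from two 32-bit halves
   (assumes unsigned long holds at least 32 bits). */
typedef struct { unsigned long hi, lo; } u64;
static u64 u64_add(u64 a, u64 b)
{
    u64 r;
    r.lo = (a.lo + b.lo) & 0xFFFFFFFFUL;
    r.hi = (a.hi + b.hi + (r.lo < a.lo)) & 0xFFFFFFFFUL;
    return r;
}
#endif

int main(void)
{
    u64 x = {0}, y = {0};
    (void)u64_add(x, y);   /* callers use u64_add instead of '+' */
    return 0;
}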
 

Ian Collins

C said:
Well then what should the 64-bit type be on a machine that can support it?
Oops, I answered the wrong question...

If a 64-bit type is required, then use one ((u)int64_t).
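
For example (C99; INT64_MAX is defined only when the exact-width type
exists):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
#ifdef INT64_MAX                        /* present only if int64_t exists */
    int64_t big = INT64_C(9000000000);  /* INT64_C builds a 64-bit constant */
    printf("big = %" PRId64 "\n", big);
#else
    puts("this implementation has no exact 64-bit type");
#endif
    return 0;
}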
 

Barry Schwarz

C man wrote:
snip


They depend. To see how big those data types are on your
computer+OS+compiler, run a simple program similar to this:

#include <stdio.h>
int main (void) {
   printf ("char: %d byte\n", sizeof (char));
   printf ("short int: %d byte\n", sizeof (short));
   printf ("int: %d byte\n", sizeof (int));
   printf ("long int: %d byte\n", sizeof (long));
   printf ("long long int: %d byte\n", sizeof (long long));

}

If you are going to give advice on how to resolve an issue, it would
be nice if your solution
a - did not invoke undefined behavior
b - addressed the issue to be resolved.

%d requires the corresponding argument to be an int. sizeof evaluates
to a size_t. This is an unsigned type but need not be int.

Without knowing the number of bits in a char, the output from this
program provides no useful information. On a system with 32-bit char,
sizeof(int) may be 1 yet this int will be larger than the one on an 8-
bit system where sizeof int is 2.
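
A version that avoids both problems might look like this, using C99's
%zu conversion for size_t and CHAR_BIT from <limits.h> so the output is
meaningful across implementations:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("char:      %zu byte(s), %zu bits\n",
           sizeof(char), sizeof(char) * CHAR_BIT);
    printf("short:     %zu byte(s), %zu bits\n",
           sizeof(short), sizeof(short) * CHAR_BIT);
    printf("int:       %zu byte(s), %zu bits\n",
           sizeof(int), sizeof(int) * CHAR_BIT);
    printf("long:      %zu byte(s), %zu bits\n",
           sizeof(long), sizeof(long) * CHAR_BIT);
    printf("long long: %zu byte(s), %zu bits\n",
           sizeof(long long), sizeof(long long) * CHAR_BIT);
    return 0;
}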
 

CBFalconer

C said:
.... snip ...

OK, but then all that happens is that some people use typedefs
like int16 and int32 and then define these typedefs to be int,
short, long, etc. depending on what machine they're using. Other
people just use int etc. and then their code breaks when they
port it to a different machine.

It sounds as if they are writing invalid code.
 

Keith Thompson

CBFalconer said:
They are. Just read limits.h

I'm fairly sure that's not what he meant. I think he's asking if it
would be better if the standard types had the same sizes for all
implementations (as they do for Java, IIRC). (Yes, Java is off-topic;
that was just an illustrative example.)
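
Reading limits.h can be as simple as printing its macros; for instance:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* <limits.h> states exactly what each type can hold on this
       implementation. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("INT_MIN  = %d, INT_MAX  = %d\n", INT_MIN, INT_MAX);
    printf("LONG_MIN = %ld, LONG_MAX = %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}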
 

Keith Thompson

Richard Heathfield said:
Better to read the Standard. But the Standard does *not* precisely
define the sizes of the standard types, at least not the ones I
think he means (char * 3, short * 2, int * 2, long * 2, and perhaps
long long * 2); it only specifies *minimum* widths. It leaves
implementations to use any width they like that meets *or* exceeds
those minima.

That's good, as it allows for future improvements in hardware, which
the fixed-width types introduced by C99 do not.

Sure they do. An implementation may support, for example, uint1024_t.
Or is that not what you meant?
 

Keith Thompson

Richard Heathfield said:
Keith Thompson said:

Er, no, not really. What I meant - and I am beginning to suspect
that I didn't think it through properly, but let's see where it
goes anyway - was that if you nail your code to an exact-width type
such as those provided by C99, you may have to pay a performance
penalty when moving your code to platforms where a wider type is
available and better suited to the hardware.

Ok, but that's what the int_fastN_t and int_leastN_t types are for.
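
For instance (C99; the fast type may well be wider than 32 bits on
64-bit hardware):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int_least32_t total = 0;  /* smallest type with at least 32 bits */
    int_fast32_t  i;          /* fastest type with at least 32 bits */

    for (i = 0; i < 1000; i++)
        total += i;

    printf("total = %" PRIdLEAST32 "\n", total);
    return 0;
}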
 

James Kuyper

C said:
Well then what should the 64-bit type be on a machine that can support it?

You're asking the question in a way that doesn't quite make sense, for
several reasons.

1) You don't specify what criteria you want to use for evaluating the
"should". For instance, if your objective is to provide as many
different type sizes as possible for a C90 compiler, then (on a machine
which can support all of these sizes), char=8 bit, short=16 bit, int=32
bit, long=64 bit would seem to make a lot of sense. Some C90 compilers
did just that. However, in order to maintain backwards compatibility
with existing binaries, many compilers chose to keep int==long==32 bits,
and to introduce a new type (with a variety of names) for a 64 bit type
as an extension. C99 allows either approach (and several others, too),
mandating long long as a name for a type with at least 64 bits, and
providing several new type names in <stdint.h> that must also be at
least 64 bits.

2) For realistic situations, the fact that a machine can support a
64-bit type is insufficient information to answer your question. If that
type is a 2's complement type with no padding bits, an implementation
which makes that type available to the user (it doesn't have to) must,
at the very least, provide it as int64_t and int_least64_t. If it isn't
2's complement, or has padding bits, it must NOT be used for int64_t.
Whether it should be used for int_fast64_t depends upon whether or not
the machine also supports other types with at least 64 bits that are
faster than the 64-bit type - on a 128-bit machine, it's quite
likely that int_fast64_t would be a 128-bit type. Whether or not it
should be used for long long depends upon similar considerations.
Whether or not it must be used for intmax_t depends solely upon whether
the implementation makes any larger types available, regardless of
whether or not those types are faster.

I'm assuming above that a machine has only one 64-bit type of each
signedness. A machine could provide several such types, varying in their
endianness or the way that signed integers are represented. If so, each
of the types mentioned above could be a different 64-bit type.

3) You're assuming that there's only one standard C type name that
refers to the 64-bit type. A fully conforming implementation could have
every integral type named by the standard use the same 64-bit type, from
char to intmax_t. I've heard that real machines (Crays, I believe) have
used 64-bit integers for most of the C90 standard types, possibly
excluding the character types, so a pure 64-bit implementation would not
be too extreme a stretch, I think.
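
One way to see which choices a given implementation made is to probe the
optional and mandatory names (a sketch, C99):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* int64_t is optional; INT64_MAX exists only when it does.
       int_least64_t and int_fast64_t are mandatory in C99 and need
       not be the same width as each other. */
#ifdef INT64_MAX
    puts("int64_t exists: exactly 64 bits, 2's complement, no padding");
#else
    puts("no exact-width 64-bit type here");
#endif
    printf("int_least64_t: %zu bytes\n", sizeof(int_least64_t));
    printf("int_fast64_t:  %zu bytes\n", sizeof(int_fast64_t));
    printf("long long:     %zu bytes (at least 64 bits)\n", sizeof(long long));
    return 0;
}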
 
