Sizes of integers

Michael Brennan

Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
For example, if I want to make a portable program which uses,
say, an int type for a counter, and one system uses 16-bit ints while
another uses 32-bit ints, then, in order to have my program run the same
way on all systems, I need to assume the smallest size, right?
In that case, I could only count as high as a 16-bit int allows, since
anything larger would overflow on the system that uses 16-bit ints.

So I don't understand the point of having different sizes. If we want our
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?

I think stdint.h solves that, since then you know the size of your type,
but that's C99.

/Michael
 
Chris McDonald

Michael Brennan said:
Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
For example, if I want to make a portable program which uses,
say, an int type for a counter, and one system uses 16-bit ints while
another uses 32-bit ints, then, in order to have my program run the same
way on all systems, I need to assume the smallest size, right?
In that case, I could only count as high as a 16-bit int allows, since
anything larger would overflow on the system that uses 16-bit ints.
So I don't understand the point of having different sizes. If we want our
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?
I think stdint.h solves that, since then you know the size of your type,
but that's C99.


The portability, and your decision, don't come from the size of a
particular platform's integers, but from the values you wish those
integer variables to hold.

If your counter will only ever hold values that fit in a 16-bit
integer, then use int16_t.

If your counter can *ever* overflow a 16-bit integer, then you require
a longer, say 32-bit, integer on all platforms - including any platforms
that offer only 16-bit native integers.

Don't casually use plain int if there's ever a danger of overflow on any
platform.
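
A minimal sketch of that rule, assuming a C99 <stdint.h> (the exact-width
types are optional, but any platform with 16- and 32-bit two's-complement
integers provides them):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A value known never to exceed 16 bits: int16_t is the same width
       everywhere it exists, so the counter behaves identically. */
    int16_t small_counter = 30000;

    /* A counter that could ever exceed 16 bits gets a 32-bit type up
       front, even on platforms whose native int is only 16 bits wide. */
    int32_t big_counter = 100000L;

    printf("%d %ld\n", (int)small_counter, (long)big_counter);
    return 0;
}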
 
Ian Collins

Chris said:
The portability, and your decision, don't come from the size of a
particular platform's integers, but from the values you wish those
integer variables to hold.

If your counter will only ever hold values that fit in a 16-bit
integer, then use int16_t.

If you are targeting C99, this is probably a good place to use
int_fast16_t and the like. Even for C90, it may be worth defining your
own versions of these types.
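
As a sketch of that C90 idea (my_stdint.h and my_fast32_t are hypothetical
names, not a standard API), a project header can pick a "fast" type from
<limits.h> at compile time:

/* my_stdint.h -- hypothetical C90 stand-in for int_fast32_t */
#include <limits.h>

#if INT_MAX >= 2147483647L
typedef int my_fast32_t;    /* native int already holds 32-bit values */
#else
typedef long my_fast32_t;   /* fall back to long, guaranteed >= 32 bits */
#endif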
 
Walter Roberson

Michael Brennan said:
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
So I don't understand the point of having different sizes. If we want our
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?

Three reasons:

1) Back then, there were systems that didn't use multiples of 8 bits
as their native sizes. There was a time when it looked like 36 bit
words were going to win out.

2) Performance. 32 bit arithmetic had to be synthesized on earlier
systems.

3) According to some of the DSP and embedded systems people
in the newsgroup, there are a bunch of systems these days which
only offer a very limited number of storage sizes (e.g., only 32 bit).
For the kinds of applications those processors are intended for,
the other sizes are not used often enough to make it worth taking up
the die space for them. The less die space, the faster you can clock
the device...
 
Richard Heathfield

Michael Brennan said:
Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?

Yes. I can see no good reason to insist that all systems have the same
sizes. Surely that would stifle innovation. Wouldn't it be grand if ints
had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits
wide, AND have 256 bit ints.

Just write your code so that it doesn't matter how big the types are, saving
only that they meet the minimum specs given in the Standard. The whole
intn_t thing is a move in completely the wrong direction. It's a move
towards the computer domain. Programming should stay in the problem domain.
In the real world, numbers aren't limited to the range -32767 to +32767 or
-(2^31-1) to +(2^31-1) - they can be yay big. Well, that's what we should aim
for with computers, too.

Michael Brennan said:
I think that makes it harder to plan which type to use for your program.

Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.
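
For reference, <limits.h> is where that "32000-odd" figure comes from: the
Standard guarantees INT_MAX >= 32767 and LONG_MAX >= 2147483647, and this
sketch prints whatever a given platform actually provides:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Guaranteed minima: 32767 for int, 2147483647 for long;
       the actual values printed here vary by platform. */
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}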
 
Chris McDonald

Richard Heathfield said:
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.

Unless your long int is only 16 bits wide?
 
Richard Heathfield

Chris McDonald said:
Unless your long int is only 16 bits wide?

If it is, you are not using C. In C, long int is guaranteed to be at least
32 bits wide (and can be wider).
 
Marc Thrun

Chris said:
Unless your long int is only 16 bits wide?

Then the implementation is no longer conforming to the C standard. The
minimum range a long must be able to represent is -2147483647 to
+2147483647.
 
Chris McDonald

Marc Thrun said:
Then the implementation is no longer conforming to the C standard. The
minimum range a long must be able to represent is -2147483647 to
+2147483647.

Thanks; I didn't know that.
 
Flash Gordon

Walter said:
Three reasons:

1) Back then, there were systems that didn't use multiples of 8 bits
as their native sizes. There was a time when it looked like 36 bit
words were going to win out.

2) Performance. 32 bit arithmetic had to be synthesized on earlier
systems.

3) According to some of the DSP and embedded systems people
in the newsgroup, there are a bunch of systems these days which
only offer a very limited number of storage sizes (e.g., only 32 bit).
For the kinds of applications those processors are intended for,
the other sizes are not used often enough to make it worth taking up
the die space for them. The less die space, the faster you can clock
the device...

4) The last time I looked in the DSP world there were processors around
which used 24/48-bit words - although a multiple of 8, not a power of 2 ;-)

5) There are now 64-bit processors, on which the types could be
implemented as:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
long long (C99) 128 bits
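
Those sizes are one possibility rather than a guarantee (for example, most
64-bit Unix compilers keep int at 32 bits, and 64-bit Windows keeps long
at 32 bits), so a quick sketch like this is the reliable way to check a
given platform:

#include <stdio.h>

int main(void)
{
    /* sizeof reports bytes (units of char), not bits */
    printf("char      : %u\n", (unsigned)sizeof(char));
    printf("short     : %u\n", (unsigned)sizeof(short));
    printf("int       : %u\n", (unsigned)sizeof(int));
    printf("long      : %u\n", (unsigned)sizeof(long));
    printf("long long : %u\n", (unsigned)sizeof(long long));  /* C99 */
    return 0;
}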
 
Malcolm

Chris McDonald said:
Thanks; I didn't know that.

Unfortunately I've had embedded C compilers where an int was 8 bits, and a
long 16 bits. Not conforming, but I couldn't exactly send it back to the
factory and demand a fixed one.
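
One way to catch such a compiler at build time rather than in the field is
a compile-time assertion - a common C idiom, not a standard facility - that
the declared limits meet the Standard's minima:

#include <limits.h>

/* Fails to compile (negative array size) if long cannot represent the
   required range of at least -2147483647 to +2147483647. */
typedef char assert_long_holds_32_bits[(LONG_MAX >= 2147483647L &&
                                        LONG_MIN <= -2147483647L) ? 1 : -1];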
 
Malcolm

Richard Heathfield said:
Yes. I can see no good reason to insist that all systems have the same
sizes. Surely that would stifle innovation. Wouldn't it be grand if ints
had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits
wide, AND have 256 bit ints.

What would you count in a 256-bit int?

Richard Heathfield said:
Just write your code so that it doesn't matter how big the types are,
saving only that they meet the minimum specs given in the Standard. The
whole intn_t thing is a move in completely the wrong direction. It's a
move towards the computer domain. Programming should stay in the problem
domain. In the real world, numbers aren't limited to the range -32767 to
+32767 or -(2^31-1) to +(2^31-1) - they can be yay big. Well, that's what
we should aim for with computers, too.

My Basic interpreter (see website) has two data types, numbers and strings.
That's fine for most programming, provided you don't care about efficiency.
If you want to work out interest payments for a million bank customers,
there's no problem even on a £300 computer. If you want to run a 3d shooter,
then my Basic won't be fast enough.

However some numbers are naturally integers. So it is nice to mark them.

Once you start going down that path, however, natural data types multiply.
Dates, colours, angles, proportions, complex numbers, points, error codes,
all need their own types. There is an argument for allowing this, but it
does put a burden on the user.

When you add efficiency considerations into the mix, the user's burden
increases even more. For instance I used to be always rewriting graphics
routines to take floats instead of doubles, or fixed point instead of
float, depending on the particular platform I was using.

Richard Heathfield said:
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.

If you are writing a payroll program, it is conceivable that the program
will have to run on a 16-bit machine. It is also conceivable that the
customer will have more than 32767 employees on his payroll. However it is
not possible that a customer with over thirty thousand employees will want
to run his payroll on a 16 bit machine. So it is quite ok to use ints to
index into the employee list.
 
Stephen Sprunk

Malcolm said:
If you are writing a payroll program, it is conceivable that the program
will have to run on a 16-bit machine. It is also conceivable that the
customer will have more than 32767 employees on his payroll. However
it is not possible that a customer with over thirty thousand employees
will want to run his payroll on a 16 bit machine. So it is quite ok to use
ints to index into the employee list.

Well, there's no guarantee that just because the machine happens to be
"32-bit" or "64-bit" that the C implementation uses anything larger than a
16-bit int. Lots of folks ran into that with MS compilers on 386s in the
late DOS/early Windows years.

Amusing anecdote: I worked at a startup which was purchased; for tax
reasons, our stock options were paid out as a payroll bonus. The payroll
system used long ints to store/manipulate the number of cents for each item.
The founder/CEO was to be paid roughly $170 million -- and every time they
tried to do a payroll run, the system crashed because that overflowed a long
int. We didn't get paid for weeks while the vendor scrambled to
recode/recompile the application using long long ints.
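
The arithmetic behind that crash, as a sketch: $170 million is
17,000,000,000 cents, while a 32-bit long tops out at 2,147,483,647 cents,
i.e. about $21.4 million:

#include <stdio.h>

int main(void)
{
    long max_cents = 2147483647L;   /* LONG_MAX where long is 32 bits */

    /* Largest amount representable when cents are the unit:
       $21,474,836.47 */
    printf("ceiling: $%ld.%02ld\n", max_cents / 100, max_cents % 100);

    /* The $170,000,000.00 bonus needs 17,000,000,000 cents -- about
       eight times too big for the type. */
    printf("needed : %.0f cents\n", 170e6 * 100.0);
    return 0;
}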

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin


 
Richard Heathfield

Malcolm said:
What would you count in a 256-bit int?

The obvious uses that spring immediately to mind are Diffie-Hellman exchange
and RSA, although it has to be said that 256 bits probably wouldn't be
enough, at least not without some faffing around. Just not quite so much
faffing around as we currently have to do, that's all.
 
those who know me have no need of my name

in comp.lang.c i read:

[long must be at least 32 bits, including sign]
Unfortunately I've had embedded C compilers where an int was 8 bits, and a
long 16 bits. Not conforming, but I couldn't exactly send it back to the
factory and demand a fixed one.

the key being that it isn't a c compiler, it is a looks-like-c compiler,
which is indeed quite common for small (embedded) devices. something
similar is common even for hosted implementations of "larger" devices,
e.g., gcc is not a c compiler by default (it is a gnu-c compiler).
 
