FAQ: int or long int?


pozz

I'm interested in writing portable code for embedded 16- or 32-bit
platforms.
So I'm wondering which integer type to use, int or long int, when I
need to manage values greater than +32767 (but lower than
+2147483647).

Question 1.1 of the comp.lang.c FAQ gives a precise answer: if you
might need large values (above 32,767 or below -32,767), use long.

I imagine this is because the int type could be 16-bit on some
platforms, while long must be at least 32-bit.

On the contrary, on some 32-bit platforms int could be 32-bit and
long int could be 64-bit. In that situation I would use 64-bit
variables to store values that would fit in plain int variables.
Apart from the greater storage space in memory, the computation may
also be more complex and slower.

So I'm wondering if it could be better to use the new C99 types, such
as int32_t. In that case, on a 32-bit platform with a 64-bit long,
I'd use plain int for my variable...
 

pozz

That's what they're there for--to explicitly state the sizes when you need to
explicitly set the sizes. And if your platform doesn't include stdint.h, make
your own.
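A hand-rolled fallback can be sketched from the guarantees in limits.h (a sketch only; the my_ prefix is hypothetical, since a real drop-in header would use the C99 names, and it assumes a conforming limits.h):

```c
/* Minimal fallback sketch for when <stdint.h> is absent.
   Picks the narrowest standard type that holds at least 32 bits,
   relying only on the guaranteed contents of <limits.h>. */
#include <limits.h>

#if INT_MAX >= 2147483647L
typedef int my_int_least32_t;   /* plain int already holds 32 bits */
#else
typedef long my_int_least32_t;  /* long is guaranteed to be >= 32 bits */
#endif
```

The #else branch needs no further test because the standard requires LONG_MAX to be at least 2147483647.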

On the contrary, if I use
int16_t x;
on a 32-bit platform, I'll be using a non-optimized type...

In both cases (long int or int16_t) I can have some drawbacks.
 

Ben Bacarisse

pozz said:
On the contrary, if I use
int16_t x;
on a 32-bit platform, I'll be using a non-optimized type...

In both cases (long int or int16_t) I can have some drawbacks.

That is what the int_fastN_t types are for (and unlike the intN_t types
they are required by C99).
 

Ben Bacarisse

China Blue Meanies said:
For build tools like make it's pretty easy to create the header file and then
include it.

program: int.h program.c
...

int.h:
echo >int.c '#include <stdio.h>'
echo >>int.c '#include <limits.h>'
echo >>int.c 'int main(int n,char **p) {'
echo >>int.c 'char *i8 = 0,*i16 = 0,*i32 = 0,*i64 = 0;'
echo >>int.c '#ifdef SCHAR_MAX'
echo >>int.c 'if (SCHAR_MAX==0x7F) *i8 = "char";'

I think you mean i8 rather than *i8 here (and in 19 similar cases
below).

echo >>int.c 'printf("#define int_h"; puts("");'

missing )

<snip>
 

Ben Pfaff

pozz said:
I'm interested in writing portable code for embedded 16- or 32-bit
platforms.
So I'm wondering which integer type to use, int or long int, when I
need to manage values greater than +32767 (but lower than
+2147483647).

int_least32_t if you want to optimize for space.
int_fast32_t if you want to optimize for time.
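A sketch of that rule of thumb in practice: the least type for bulk storage, the fast type for working values (the names samples and sum_samples are illustrative, not from the thread):

```c
#include <stdint.h>
#include <stddef.h>

#define N 1000

/* bulk data: int_least32_t keeps the array as small as possible */
int_least32_t samples[N];

/* computation: int_fast32_t lets the compiler pick the quickest
   type of at least 32 bits for the accumulator and loop index */
int_fast32_t sum_samples(void)
{
    int_fast32_t sum = 0;
    for (int_fast32_t i = 0; i < N; i++)
        sum += samples[i];
    return sum;
}
```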
 

Ben Bacarisse

christian.bau said:
Name a platform where long = 64 bit, int = 32 bit, and computations
using long will be more complex and slower than the same computation
using int. In my experience it is the opposite.

At first glance, it seems to be the case on my Linux system:

$ cat time.c
#include <stdio.h>
#include <limits.h>

#define STR(x) XSTR(x)
#define XSTR(x) #x

ITYPE f(ITYPE x, ITYPE y)
{
    return (x * y) / (x + y);
}

int main(void)
{
    printf("%s is %zu bits\n", STR(ITYPE), CHAR_BIT * sizeof(ITYPE));
    ITYPE r = 1;
    for (ITYPE i = 0; i < 10000; i++)
        for (ITYPE j = 1; j < 10000; j++)
            r = (r << 3) | f(i, j);
    printf("%d", (int)r);
    return 0;
}
$ gcc -DITYPE=int -std=c99 -pedantic -O3 -o time time.c
$ time ./time
int is 32 bits
-65
real 0m0.399s
user 0m0.400s
sys 0m0.000s
$ gcc -DITYPE=long -std=c99 -pedantic -O3 -o time time.c
$ time ./time
long is 64 bits
-65
real 0m1.306s
user 0m1.310s
sys 0m0.000s

I am very wary of crude timing like this, but it's a start.
 

Eric Sosman

Name a platform where long = 64 bit, int = 32 bit, and computations
using long will be more complex and slower than the same computation
using int. In my experience it is the opposite.

"All."

Argument: Memory access dominates all other operations in a
system; that is why the system has two or three or maybe even more
levels of very complex, very expensive cache. Those caches can hold
only half as many 64-bit objects as 32-bit objects, roughly speaking,
so if you use a 64-bit datum where a 32- or 16- or 8-bit datum would
have sufficed, you increase the "pressure" on the caches and diminish
their effectiveness. This can slow the system down dramatically: A
2GHz CPU that generates a fresh memory reference every 0.5 ns will
s--t--a--l--l for five hundred to a thousand cycles on each cache
miss. If you reduce the cache hit rate from 99% to 98% you *double*
the likelihood of such stalls; roughly speaking, you cut the rate of
"useful work" by half.

See the highly instructive lecture video by Cliff Click at
<http://www.infoq.com/presentations/click-crash-course-modern-hardware>.
 

Ian Collins


Under some conditions. If the data set is small enough there should be
little difference.

The real question probably should have been "Name a 32-bit platform
where int is 32 bits and long is 64 bits".
<snip reasoning about memory/cache speeds>

On my Solaris system, the code Ben posted else-thread runs slightly
faster in 64 bit mode with long rather than int.
 

Eric Sosman

[...]
The real question probably should have been "Name a 32-bit platform
where int is 32 bits and long is 64 bits".

"Mister Chips."

(In other words, define "32 bit platform" -- without inducing
a tautology.)
 

Ben Bacarisse

Ian Collins said:
Under some conditions. If the data set is small enough there should
be little difference.

I don't understand the meaning of "should" here. If the data set is
small, the memory effects that Eric is talking about go away but I don't
see why that "should" mean there will be little difference. Did you
just mean that there may be little difference?

<snip reasoning about memory/cache speeds>
 

Ian Collins

I don't understand the meaning of "should" here. If the data set is
small, the memory effects that Eric is talking about go away but I don't
see why that "should" mean there will be little difference. Did you
just mean that there may be little difference?

I would expect using a 64 bit type on a CPU with 64 bit registers and
ALU to be no slower and possibly marginally faster if sign extension is
required.

For example your code on an AMD64 system:

long is 64 bits real 0m1.490s
unsigned is 32 bits real 0m1.480s
int is 32 bits real 0m1.528s

While on an Intel i7 the difference between 32 bit types is in the noise:

long is 64 bits real 0m0.424s
unsigned is 32 bits real 0m0.435s
int is 32 bits real 0m0.433s
 

Ben Bacarisse

Ian Collins said:
I would expect using a 64 bit type on a CPU with 64 bit registers and
ALU to be no slower and possibly marginally faster if sign extension
is required.

Yes, I understand that. I know there are systems where there will be
very little difference but it seems there are some where there is a
difference. Hence my not understanding if your "should" is a moral one
(the designer got it wrong) or a statistical one (such systems are
vanishingly rare).
For example your code on an AMD64 system:

long is 64 bits real 0m1.490s
unsigned is 32 bits real 0m1.480s
int is 32 bits real 0m1.528s

While on an Intel i7 the difference between 32 bit types is in the noise:

long is 64 bits real 0m0.424s
unsigned is 32 bits real 0m0.435s
int is 32 bits real 0m0.433s

The times I got are from an Intel(R) Core(TM)2 Duo CPU (P8700). I don't
really know what it's like inside.
 

Seebs

Name a platform where long = 64 bit, int = 32 bit, and computations
using long will be more complex and slower than the same computation
using int. In my experience it is the opposite.

Hmm.

I did some messing around with a POWER system once which was natively
64-bit, on which it was noticeably slower to use 64-bit values for everything,
because you had to move twice as much data to get anything done. I believe
it did in fact have a 64-bit long, 32-bit int mode, and a mode where both
long and int were 32-bit, and long long was 64-bit, and that the latter mode
was faster if you weren't using a lot of long longs. (But slower if you
were, because it was really running in a 32-bit mode such that long long was
emulated the way it was on old x86, I believe).

-s
 
