Printing a 64-bit integer


Chris McDonald

Strive all you like, but not every programmer is in a position to
choose which compiler to use at their place of work. To those who do
have such a choice, this is an easy thing to forget. Visual C is an
enormously popular implementation, and it doesn't have all this
newfangled C99 stuff in it. No, not even long long int. Those who
wish for their code to be portable would do well to remember that.

All agreed; but those who do strive for portable code should not
have their goals or discussions oppressed by those unable to strive
for those goals. The oft-repeated circular discussions don't really
need repeating.
 

Dik T. Winter

>
> Oh, okay. I'm using a gcc that's /less/ than a decade old, and it has
> 32-bit longs.

Indeed, it depends on the hardware. I have access to two machines that do
it differently. Both about 4 years old.
 

Stephen Sprunk

> I'm using gcc version 4.1.2 on a RedHat Enterprise Linux 5 machine.

For i386 or x86_64? There's a huge difference.

> I'm trying to print out a 64-bit integer with the value 6000000000
> using the g++ compiler,

g++ is a C++ compiler; if you need assistance with C++, check with
comp.lang.c++. I'm going to pretend you said gcc since the answer is
probably the same and that'd be (at least close to) on topic.

> and realized that using the format string "ld" (ell-d) did not work, but
> "lld" (ell-ell-d) works. My question is that I thought Unix uses the LP64
> data model, so long integers should be 64-bit.

Different UNIX(ish) systems do different things, which is permitted by
the relevant standards. ILP32LL64 is common, and so is I32LP64; ILP64
can be found as well. There are probably a few other odd combinations
out there, though AFAIK IL32LLP64 is Windows-only. If your code is
written correctly, though, it shouldn't matter.

> Then why does "ld" not display the 64-bit value correctly?

The first possibility is that you are not using the correct types, since
you appear to be unclear on exactly what types have what sizes for your
particular platform. If you're using Linux/x86, for instance, "long
int" is a 32-bit type and thus %ld cannot print 64-bit values.

A second possibility is that your compiler (gcc) and library (glibc)
disagree about the size of a long int; one might think it's 32-bit and
the other thinks it's 64-bit. This _shouldn't_ happen, and it's
unlikely since it'd break a heck of a lot of other things that would
make your system practically unusable, but in theory it's possible.

> Also, in Windows (MS Visual Studio 2008), should I use the format
> string "Id" (eye-d) to display 64-bit integers?

Good question; MS doesn't implement C99 and doesn't have long long int,
so exactly what to do with their system is off-topic here.

S
 

Dik T. Winter

>
> Then, assuming you are correct (and I have no reason to doubt you),
> somebody decided, on a system that /could/ have 64-bit long ints
> (because if they can do it for long long int they can do it for long
> int), to make long ints 32 bits instead.

But every system can have 64-bit long ints, even if the system itself is an
8-bit system. Heck, longs could even be 512 bits on every system.
Apparently the decision was for 64-bit long longs, with longs 32-bit or
64-bit depending on performance.

I think there is still a large amount of code around that uses longs in
preference to ints as the standard integer type, to allow porting to
systems with 16-bit ints (not all the world is a VAX). And so
performance can be an issue.
 

Stephen Sprunk

Richard said:
> It makes sense for int to reflect the machine's word size. So int
> should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 on
> 64, and so on. But long int should be significantly long, usefully
> long. 512 bits would not be too many.

There are two problems with that:

1. A "64-bit machine" may still be slower on 64-bit operations than
32-bit ones, e.g. amd64 (if only due to the code-density cost of
instruction prefixes).

2. If int is 64-bit, that leaves no 32-bit type for those who need one,
unless short is promoted to 32-bit, which leaves a gap at 16 bits.

I32LP64 is the logical choice for "64-bit" machines, and Microsoft's
choice of IL32LLP64 has a certain logic to it. ILP64 doesn't work well
in practice.

S
 

Mark L Pappin

Stephen Sprunk said:
> 2. If int is 64-bit, that leaves no 32-bit type for those who need one,
> unless short is promoted to 32-bit, which leaves a gap at 16 bits.

int32_t
int16_t

don't need to correspond to any of the named-with-a-keyword types.

mlp
 

Keith Thompson

Mark L Pappin said:
> int32_t
> int16_t
>
> don't need to correspond to any of the named-with-a-keyword types.

Yes, if the implementation provides extended integer types.

As far as I know, no C implementations actually do so.
 
