Printing a 64-bit integer


lancer6238

Hi all,

I'm using gcc version 4.1.2 on a RedHat Enterprise Linux 5 machine.
I'm trying to print out a 64-bit integer with the value 6000000000
using the g++ compiler, and realized that using the format string
"ld" (ell-d) did not work, but "lld" (ell-ell-d) works. My question is
that I thought Unix uses the LP64 data model, so long integers should
be 64-bit. Then why does "ld" not display the 64-bit value correctly?

Also, in Windows (MS Visual Studio 2008), should I use the format
string "Id" (eye-d) to display 64-bit integers?

Thank you.

Regards,
Rayne
 

Rayne

Richard Heathfield said:
You'd have thought so, wouldn't you? But some gcc nitwit decided to
keep longs at 32 bits and use "long long int" for 64 bit integers. No
doubt 128-bit integers will be long long long int, and so on. Daft.

But why is the output "8" for both sizeof(long) and sizeof(long long)
on my system?
 

Alan Curry

Hi all,

I'm using gcc version 4.1.2 on a RedHat Enterprise Linux 5 machine.
I'm trying to print out a 64-bit integer with the value 6000000000
using the g++ compiler, and realized that using the format string
"ld" (ell-d) did not work, but "lld" (ell-ell-d) works. My question is
that I thought Unix uses the LP64 data model, so long integers should
be 64-bit. Then why does "ld" not display the 64-bit value correctly?

"Unix uses the LP64 data model" is something you have taken out of context
and over-generalized. It doesn't apply to 32-bit architectures. It's useful
for contrasting the behavior of 64-bit unix systems (for example Linux/amd64
which is LP64) with the behavior of other systems (for example 64-bit
Microsoft stuff?)

If you thought you were working in a completely 64-bit environment when you
compiled your test program, you should rethink that assumption.

Or stop letting your current platform influence your type selection and use a
type that is guaranteed to be suitable on every platform - if bare "long long"
isn't it, then something from <inttypes.h> probably is.
 

Phil Carmody

Richard Heathfield said:


You'd have thought so, wouldn't you? But some gcc nitwit decided to
keep longs at 32 bits

That's highly false. I've had 64-bit longs for over a decade in gcc.

But don't let that get in the way of one of your favourite rants.

Phil
 

Chris McDonald

Phil Carmody said:
That's highly false. I've had 64-bit longs for over a decade in gcc.
But don't let that get in the way of one of your favourite rants.


Agreed!
Didn't we have this "discussion" in c.l.c a month or so back?

Didn't the discussion centre around C's definition of <inttypes.h>
and, thus, the availability to format 64-bit ints with PRIi64?

Or was that discussion hijacked, too, by someone stating that there's
a reindeer herder in Lapland that is still using only C89?
 

Keith Thompson

Rayne said:
But why is the output "8" for both sizeof(long) and sizeof(long long)
on my system?

Because long and long long are both 8 bytes on your system.

gcc generally uses 4 bytes for longs on 32-bit systems and 8 bytes for
longs on 64-bit systems.

Which makes it hard to tell just what problem you're running into.

You say you have a 64-bit integer. What is its type? If it's long,
you should use "%ld". If it's long long, you should use "%lld". (If
long and long long are both the same size you might be able to get
away with a different format string, but you shouldn't.)

Can you show us some actual code (a small complete program) that
illustrates the problem, the output it produces, and how that differs
from what you expected?
 

Keith Thompson

Richard Heathfield said:


No idea. More to the point, what is the value of LONG_MAX?

Given the system he's using, it's very unlikely that either long or
long long has any padding bits. If sizeof(long) == 8, then LONG_MAX
is almost certainly 2**63-1; likewise for long long.

Rayne, you said you're using g++. That's a C++ compiler. If you're
asking questions about C, you should use gcc, the C compiler.
 

Phil Carmody

Richard Heathfield said:
Oh, okay. I'm using a gcc that's /less/ than a decade old, and it has
32-bit longs.

You're probably using a descendant of a 1970s 8-bit architecture,
rather than an architecture which was designed as a 64-bit one from
scratch.
I accept that you have a counter-example. Nevertheless, I still have
an example.

One example does not prove an absolute; it merely supports it until a
single counter-example comes along.

To show ignorance of what gcc has done _and_ to call them "nitwits"
for something which they haven't done really was a slip. Please
don't do it again. Whilst there's much talk of supporting standards
for portability reasons, it seems that many who pay lip-service to
such concerns have limited experience of actually having to be
portable. Such experience tends to curtail incorrect jumping to
assumptions such as the above.

Phil
 

Michael Tsang

Phil said:
That's highly false. I've had 64-bit longs for over a decade in gcc.

But don't let that get in the way of one of your favourite rants.

Phil

I have longs the same size as void*s in gcc.
 

Keith Thompson

Phil Carmody said:
You're probably using a descendant of a 1970s 8-bit architecture,
rather than an architecture which was designed as a 64-bit one from
scratch.
[...]

There is at least one descendant of a 1970s 8-bit architecture that's
a 64-bit system, on which gcc uses 64-bit longs (x86-64).

I think Richard's complaint is that the gcc developers chose to keep
longs at 32 bits on 32-bit architectures, rather than making int 32
bits and long 64 bits.
 

Guest

| In
| <508b7147-0a3c-4862-885c-d897754e44c8@q40g2000prh.googlegroups.com>,
| (e-mail address removed) wrote:
|
|> Hi all,
|>
|> I'm using gcc version 4.1.2 on a RedHat Enterprise Linux 5 machine.
|> I'm trying to print out a 64-bit integer with the value 6000000000
|> using the g++ compiler, and realized that using the format string
|> "ld" (ell-d) did not work, but "lld" (ell-ell-d) works. My question
|> is that I thought Unix uses the LP64 data model, so long integers
|> should be 64-bit.
|
| You'd have thought so, wouldn't you? But some gcc nitwit decided to
| keep longs at 32 bits and use "long long int" for 64 bit integers. No
| doubt 128-bit integers will be long long long int, and so on. Daft.

We SHOULD have had long long be 128-bit, too, on a 64-bit machine, using
the same simulation trick that was used to make 64-bit long long on a
32-bit machine. But no. And not only that, but they didn't even put in
a uint128_t, either. Probably because someone would whine about their
long variables becoming longer and their long long variables becoming
longer longer, and using too much space.

We should at least have a choice of data model.

And don't get me going on how they screwed up the 64-bit off_t. But that
is more of a Posix thing and not a C thing.


|> Then why does "ld" not display the 64-bit value correctly?
|
| %ld is for long ints. %lld is for long long ints.

And you have to guess, or cast to the largest, to format a uint64_t.
 

Guest

| In article <508b7147-0a3c-4862-885c-d897754e44c8@q40g2000prh.googlegroups.com>,
|>Hi all,
|>
|>I'm using gcc version 4.1.2 on a RedHat Enterprise Linux 5 machine.
|>I'm trying to print out a 64-bit integer with the value 6000000000
|>using the g++ compiler, and realized that using the format string
|>"ld" (ell-d) did not work, but "lld" (ell-ell-d) works. My question is
|>that I thought Unix uses the LP64 data model, so long integers should
|>be 64-bit. Then why does "ld" not display the 64-bit value correctly?
|
| "Unix uses the LP64 data model" is something you have taken out of context
| and over-generalized. It doesn't apply to 32-bit architectures. It's useful
| for contrasting the behavior of 64-bit unix systems (for example Linux/amd64
| which is LP64) with the behavior of other systems (for example 64-bit
| Microsoft stuff?)

It could still be done on 32-bit architectures. It would generate some
nasty code to do that, especially with pointers (check that the upper half
is 0 before loading the lower half).


| Or stop letting your current platform influence your type selection and use a
| type that is guaranteed to be suitable on every platform - if bare "long long"
| isn't it, then something from <inttypes.h> probably is.

Agreed. If I need 64 bits, I use int64_t or uint64_t, to be sure I get
what I expect. I could use long long or unsigned long long. But on some
platform that might actually end up being 128 bits, some day.
 

Keith Thompson

Richard Heathfield said:
One nitwit, in all likelihood. Mostly I have enormous respect for the
gcc crew, but the decision to keep long int at 32 bits is most
strange, and I'm glad they later changed that decision.

I'm not aware of any change of decision. As I understand it, gcc
consistently uses 32-bit long on 32-bit systems, and 64-bit long
on 64-bit systems.

One interesting note: On every system I've ever used,
sizeof(long)==sizeof(void*). I do not, of course, claim that this
is a general rule.
 

Tom St Denis

Hi all,

I'm using gcc version 4.1.2 on a RedHat Enterprise Linux 5 machine.
I'm trying to print out a 64-bit integer with the value 6000000000
using the g++ compiler, and realized that using the format string
"ld" (ell-d) did not work, but "lld" (ell-ell-d) works. My question is
that I thought Unix uses the LP64 data model, so long integers should
be 64-bit. Then why does "ld" not display the 64-bit value correctly?

Also, in Windows (MS Visual Studio 2008), should I use the format
string "Id" (eye-d) to display 64-bit integers?

It kinda makes sense to force %lld for printing 64-bit values since it
makes programs a bit more portable.

Generally, even on 64-bit hosts if I intend to use a 64-bit value I
use long long. That way the same program on a 32-bit host will work
just fine. If I run into a performance issue [say long long being 128-
bits which is never the case with GCC] I'd just #ifdef around a
typedef. I say "I'd" because I have yet to see a platform where "long
long" *isn't* 64-bits.

Tom
 

Old Wolf

It kinda makes sense to force %lld for printing 64-bit values since it
makes programs a bit more portable.

How so? Surely it'd be better to simply use the type
specifier that corresponds to the name of the type
of the variable.
Generally, even on 64-bit hosts if I intend to use a 64-bit value I
use long long.  That way the same program on a 32-bit host will work
just fine.  If I run into a performance issue [say long long being 128-
bits which is never the case with GCC] I'd just #ifdef around a
typedef. I say "I'd" because I have yet to see a platform where "long
long" *isn't* 64-bits.

If you intend a 64-bit value then you should use int64_t.
No ifdefs etc. required.
 

Tom St Denis

How so? Surely it'd be better to simply use the type
specifier that corresponds to the name of the type
of the variable.

Because the same app on a 32-bit box will work out of the box.
If you intend a 64-bit value then you should use int64_t.
No ifdefs etc. required.

This is where I explain to you that not all compilers are C99
compliant and that "long long" was present in compilers going back [afaik]
to the early 90s and probably earlier. And that they're sadly still
used in some spots.

Tom
 

BGB / cr88192

Chris McDonald said:
Agreed!
Didn't we have this "discussion" in c.l.c a month or so back?

Didn't the discussion centre around C's definition of <inttypes.h>
and, thus, the availability to format 64-bit ints with PRIi64?

Or was that discussion hijacked, too, by someone stating that there's
a reindeer herder in Lapland that is still using only C89?

some of us use MSVC, which does not have <inttypes.h>, nor, for that matter,
does it have <complex.h>...


 

BGB / cr88192

Keith Thompson said:
I'm not aware of any change of decision. As I understand it, gcc
consistently uses 32-bit long on 32-bit systems, and 64-bit long
on 64-bit systems.

One interesting note: On every system I've ever used,
sizeof(long)==sizeof(void*). I do not, of course, claim that this
is a general rule.

on Win64 with MSVC, this does not hold...

this is because long is still 32 bits, and one needs long long...

oh yes, and there is also the type name: DWORD64...
 

Chris McDonald

BGB / cr88192 said:
some of us use MSVC, which does not have <inttypes.h>, nor, for that matter,
does it have <complex.h>...


Undoubtedly; which returns us to the circular discussion of "what is C?", and
shouldn't we strive to move to a more recent implementation of the language?
 

BGB / cr88192

Tom St Denis said:
Because the same app on a 32-bit box will work out of the box.

one thing I have done:
a function is used which does little more than convert long or long-long
values to temporary strings (and return a pointer to said temp string).

this way one does not need to rely on format specifiers which may or may not
be present on the compiler...
If you intend a 64-bit value then you should use int64_t.
No ifdefs etc. required.

This is where I explain to you that not all compilers are C99
compliant and that "long long" was present in compilers going back [afaik]
to the early 90s and probably earlier. And that they're sadly still
used in some spots.

MSVC...


my nice little usage of such things as 'inttypes.h' and 'complex.h' blew up,
and so instead I had to strip out these types and instead go over to my
internal types (..., s32, u32, s64, u64, ...).

similarly, my use of complexes has broken, forcing a sudden need to convert
all complexes to a struct based type.


more recently, I made essentially an adaptation of 'complex.h' (I call them
'fcomplex.h' and 'dcomplex.h'...), which operate in terms of these structs.

originally, I had intended to use SIMD, but for internal reasons just stuck
with the structs.
(partially, me noting how most complex operations work, I am not certain
there would really be that much benefit from SIMD anyways...).

checking:
hmm... it seems GCC already has 'fcomplex.h', may need a different name...

'fcplx.h' seems free...


relatedly, similarly I do 128-bit integers (in statically compiled land) via
structs as well at present.
however, I have noted that on Win64, it does not matter so much, since the
Win64 calling convention does not pass SIMD types in registers, or even
directly on the stack, instead passing SIMD types via pointers, at which
point I may as well just use the struct...

actually, it makes its own sort of sense in a way (one less thing to cause
potential complex stack alignment issues, ...).

or such...
 
