compile int as 64-bit on a 64-bit machine?


Andy

Hello Guys,

I was wondering how to compile int as a 64-bit type on a 64-bit machine
with gcc. It seems that by default int is treated as 4 bytes.
Thanks in advance!

-Andy W.
 

Tom St Denis

Hello Guys,

I was wondering how to compile int as a 64-bit type on a 64-bit machine
with gcc. It seems that by default int is treated as 4 bytes.
Thanks in advance!

-Andy W.

Why would you want this? And what do you mean by "64-bit machine?"
x86_64? ppc64? sparc?

On x86_64, 32-bit types are JUST AS FAST to work with as 64-bit types,
so there is no advantage to always using 64-bit ints. And really, if
you want to guarantee you have at least 64 bits, use long long.

Tom
 

Andy

Why would you want this?  And what do you mean by "64-bit machine?"
x86_64?  ppc64?  sparc?

Sorry for the confusion. The machine is x86_64.
On x86_64, 32-bit types are JUST AS FAST to work with as 64-bit types,
so there is no advantage to always using 64-bit ints. And really, if
you want to guarantee you have at least 64 bits, use long long.

My code was developed on a machine with 8 bytes for int, so I kept
everything in int. Right now I am using a machine which by default
assigns 4 bytes to int, so I am wondering how I could compile int as
8 bytes without modifying the code. It is a scientific application, so
I really need 64 bits.

Thanks,
A.W.
 

James Kuyper

Andy wrote:
....
My code was developed on a machine with 8 bytes for int, so I kept
everything in int. Right now I am using a machine which by default
assigns 4 bytes to int, so I am wondering how I could compile int as
8 bytes without modifying the code. It is a scientific application, so
I really need 64 bits.

So why not switch to a type guaranteed to have at least 64 bits, such as
long long, int_fast64_t, int_least64_t, or int64_t?
 

Andy

Andy wrote:

...


So why not switch to a type guaranteed to have at least 64 bits, such as
long long, int_fast64_t, int_least64_t, or int64_t?

It should have been done that way in the first place; now I've learned.
Is there any way to fix it without modifying the code?

Thanks,
A.W.
 

BGB / cr88192

Andy wrote:

...


So why not switch to a type guaranteed to have at least 64 bits, such as
long long, int_fast64_t, int_least64_t, or int64_t?

(stupid google groups messing up IE...).

<--
It should have been done that way in the first place; now I've learned.
Is there any way to fix it without modifying the code?

Thanks,
A.W.
-->

maybe try:
#define int long long

but, alas, this is still generally an ill-advised strategy.
better would be to go and change the code...
 

James Kuyper

Andy said:
It should have been done that way in the first place; now I've learned.
Is there any way to fix it without modifying the code?

No. As far as I can tell, the closest you can come to doing that is the
-m64 option, which makes long a 64-bit type.
 

user923005

It should have been done that way in the first place; now I've learned.
Is there any way to fix it without modifying the code?

There are a few compilers which make int 64 bits on 64-bit hardware
(even though that might seem a very sensible default, almost all of them
have not done it that way). So (for instance) on an Intel platform I
think it will be very hard to find a compiler that does what you want.

I suggest that if you have a very specific platform in mind, you might
check for a compiler that has that option. I think it very doubtful
that you will find one.
See (for instance):
http://en.wikipedia.org/wiki/64-bit
http://en.wikipedia.org/wiki/X86-64

You need ILP64 or SILP64.
 

Keith Thompson

Andy said:
I was wondering how to compile int as a 64-bit type on a 64-bit machine
with gcc. It seems that by default int is treated as 4 bytes.

There may or may not be a way to cause the compiler to use 64 bits
for type int. Consult your gcc documentation.

Note that if int is 64 bits, and char is 8 bits, then short is
probably either 16 bits or 32 bits; either there's no 16-bit
predefined type, or there's no 32-bit predefined type (unless the
implementation provides extended integer types).

But why do you want to do this? If your code thinks int is 64 bits,
it won't be able to interface with any outside code (including
the OS and the standard library) that assumes int is 32 bits --
unless the OS provides 64-bit-int libraries, but it probably doesn't.

If you want a 64-bit integer type, you almost certainly have one;
it's just not called "int". "long long", if you're using gcc in a
mode where it recognizes it, is guaranteed to be at least 64 bits.
"long", like "int", *could* be 64 bits, but is only guaranteed to
be at least 32 bits. int64_t, defined in <stdint.h> or <inttypes.h>
if you have those headers, is guaranteed to be exactly 64 bits.
 

Keith Thompson

BGB / cr88192 said:
maybe try:
#define int long long

but, alas, this is still generally an ill-advised strategy.
better would be to go and change the code...

Yes, there are a *lot* of ways this can go wrong. One example:

short int x;

would expand to

short long long x;

And, as I mentioned elsethread, interfacing with outside code,
including the OS and standard library, is going to be difficult
or impossible.

I think it would be much easier to fix the code so it actually uses
long long (or int64_t, int_least64_t, or int_fast64_t) where
appropriate.
 

Michael Angelo Ravera

Hello Guys,

I was wondering how to compile int as a 64-bit type on a 64-bit machine
with gcc. It seems that by default int is treated as 4 bytes.
Thanks in advance!

-Andy W.

Can you define int as __int64, long long, or whatever in the
preprocessor? -Dint=__int64

I'm not sure that any included library functions will behave very well
with this, but you could try adding it ahead of any code in the module
where you'd like 64-bit ints.

#define int __int64

What's the problem with going through the code and changing the
declarations of the variables that you'd like to be 64-bit? Do you
have external binary structures that need to be written to disk or to
a socket? Is the code some embodiment of a holy writ?

It's called SOFTware for a reason!
 

Keith Thompson

Michael Angelo Ravera said:
Can you define int as __int64, long long, or whatever in the
preprocessor? -Dint=__int64

You can, but you really shouldn't. Straightening out all the
problems this would cause is probably more effort than fixing the
software, and it would still leave you with a mess that's going to
be very difficult to maintain.
I'm not sure that any included library functions will behave very well
with this, but you could try adding it ahead of any code in the module
where you'd like 64-bit ints.

#define int __int64

What's the problem with going through the code and changing the
declarations of the variables that you'd like to be 64-bit? Do you
have external binary structures that need to be written to disk or to
a socket? Is the code some embodiment of a holy writ?

It's called SOFTware for a reason!

The problem is that it's a lot of work.

The code was (unwisely, I'd say) written with the assumption
that int is 64 bits. Apart from that one implicit assumption,
it's probably gone through a great deal of development, testing,
and so forth. If you can get the code to work without changing
it, that's going to save a lot of pain. Of course writing it without that
assumption would have been a lot better, but that ship has sailed
(except that the rest of us can learn from the OP's mistakes,
or more likely the OP's predecessors' mistakes).

In other words, the OP's question is a perfectly reasonable one, but
the answer is (a) system- and compiler-specific, and (b) quite likely
to be "Sorry, you can't do that".
 

Keith Thompson

Kenneth Brody said:
Keith said:
There may or may not be a way to cause the compiler to use 64 bits
for type int. Consult your gcc documentation.

Note that if int is 64 bits, and char is 8 bits, then short is
probably either 16 bits or 32 bits; either there's no 16-bit
predefined type, or there's no 32-bit predefined type (unless the
implementation provides extended integer types).
[...]

Could the implementation "make up" its own types, and have int16_t and
int32_t be 16- and 32-bits, even in the above scenario?

i.e.:

char = 8 bits
short = 16 bits
something_else = 32 bits
int = 64 bits
and
int16_t = short
int32_t = "something_else"
int64_t = int

Yes, as I mentioned an implementation can provide "extended integer
types" as described in C99 6.2.5, and int32_t et al can be typedefs
for these types.

I'm not aware of any implementation that has done so.
 

Keith Thompson

Kenneth Brody said:
Michael Angelo Ravera wrote: [...]
but you could try adding it ahead of any code in the module
where you'd like 64-bit ints.

#define int __int64

One word: printf

[...]

Yes, printf returns an int result, but the above #define wouldn't
necessarily break all printf calls. For example, this:

#define int __int64
if (printf("hello\n") != 6) {
    exit(EXIT_FAILURE);
}

should still work.

This is probably a strong argument *against* trying to use the
#define trick. It only replaces occurrences of the keyword "int".
Expressions of type int are not affected; both the result of the
printf call and the constant 6 are still of type int, not __int64.
 

BGB / cr88192

Keith Thompson said:
Yes, there are a *lot* of ways this can go wrong. One example:

short int x;

would expand to

short long long x;

simple consequence...

And, as I mentioned elsethread, interfacing with outside code,
including the OS and standard library, is going to be difficult
or impossible.

this depends on where one puts the define, as well as the specific CPU arch.

on x86-64, the impact should be much less than on x86, since stack items and
registers are generally already 64 bits (in plain C, even if the prototypes
have the wrong types, on x86-64 it should still work, although with C++ it
would mess up the mangling).

the major risk area then is with shared structs, which could easily blow up
in one's face due to a mismatched layout.

I think it would be much easier to fix the code so it actually uses
long long (or int64_t, int_least64_t, or int_fast64_t) where
appropriate.

well, as noted, my idea was for a 'possible' option, but as I also noted, it
is an ill-advised strategy...

better is, of course, to fix the code.
 

BGB / cr88192

Kenneth Brody said:
You _could_.

You _shouldn't_.

agreed.



You would break every call to a library function which got passed, or
returned, int.

not necessarily...

it will break horribly on 32-bit x86.

on x86-64, it might actually work, given that arguments are still passed in
64 bit stack items and registers, that the otherwise ignored high bits are
part of the value should not matter.

the main issue is then with structs, which may end up with a mismatched
layout (this depends on the particular OS / libraries in use / ...).


so, it can be said, "it may break horribly" or "it will likely break
horribly", but to say "it will break horribly" is incorrect, as there are
some situations where it will work.

One word: printf

as before, not necessarily broken on x86-64.


it is much like the sometimes used practice of handling/working with
variable-argument functions via pointer arithmetic:
on some architectures, this works;
in others, it blows up horribly...

 

Keith Thompson

BGB / cr88192 said:
simple consequence...



this depends on where one puts the define, as well as the specific CPU arch.

on x86-64, the impact should be much less than on x86, since stack items and
registers are generally already 64 bits (in plain C, even if the prototypes
have the wrong types, on x86-64 it should still work, although with C++ it
would mess up the mangling).

the major risk area then is with shared structs, which could easily blow up
in one's face due to a mismatched layout.

And again, the end result would be code that's even more horribly
non-portable than it was before, depending intimately on stack layout
and register usage. Porting the resulting code later to another
system would likely be nearly impossible.
well, as noted, my idea was for a 'possible' option, but as I also noted, it
is an ill-advised strategy...

better is, of course, to fix the code.

I think we're mostly in agreement -- except that the #define trick is
*so* perilous that, quite frankly, it would never have occurred to me
in the first place.
 

user923005

Yes, there are a *lot* of ways this can go wrong.  One example:

    short int x;

would expand to

    short long long x;

even more humorous example:

int main()

would expand to:

long long main()

and in general, any API call is going to get munged into something
else.

Global search and replace is not a sensible option.
And, as I mentioned elsethread, interfacing with outside code,
including the OS and standard library, is going to be difficult
or impossible.

I think it would be much easier to fix the code so it actually uses
long long (or int64_t, int_least64_t, or int_fast64_t) where
appropriate.

Amen.
 

BGB / cr88192

Keith Thompson said:
BGB / cr88192 said:
Keith Thompson said:
[...]
maybe try:
#define int long long

but, alas, this is still generally an ill-advised strategy.
better would be to go and change the code...

Yes, there are a *lot* of ways this can go wrong. One example:

short int x;

would expand to

short long long x;

simple consequence...

And, as I mentioned elsethread, interfacing with outside code,
including the OS and standard library, is going to be difficult
or impossible.

this depends on where one puts the define, as well as the specific CPU
arch.

on x86-64, the impact should be much less than on x86, since stack items
and registers are generally already 64 bits (in plain C, even if the
prototypes have the wrong types, on x86-64 it should still work, although
with C++ it would mess up the mangling).

the major risk area then is with shared structs, which could easily blow
up in one's face due to a mismatched layout.

And again, the end result would be code that's even more horribly
non-portable than it was before, depending intimately on stack layout
and register usage. Porting the resulting code later to another
system would likely be nearly impossible.

in many ways, this is not too much unlike the x86 architecture in general...

years and decades of bit-twiddly and hacks, the full scope of which not
becoming apparent until one attempts to write tools such as interpreters or
emulators...

yet, it is an architecture known and loved by many, so much so that nearly
any alternative is doomed to eventual failure.

granted, I am including x86-64 here, but will note that it is not strictly
'proper' in that it creates backwards compatibility issues by not extending
things in a strictly 'orthodox' manner. however, in this case, it is still a
reasonable price to pay, and it is the option which won out as far as
industry is concerned (even if I do regret the loss of the ability to
natively run 16-bit and DOS apps in 64-bit Windows, and many 32-bit apps
being, sadly, buggy or broken...).
 

BGB / cr88192

Kenneth Brody said:
So, 32-bit integers are promoted to 64-bit values when passed to
functions? Even those with "valid" prototypes?

int foo(int bar)
{
return bar;
}

Even with 32-bit integers, it receives and returns 64-bit values?

not strictly the correct interpretation...

more specifically:
stack elements in x86-64 are always (at least) 8 bytes, and the calling
conventions I am aware of (Win64 and SysV) both use 8-byte spots for passing
int arguments (much as on x86 cdecl, 32 bits are still used for passing
short).

similarly, the GPRs are 64 bits, and code really doesn't care if the
argument is passed via RCX instead of ECX, it will still see the value as
expected.

'foo' may mess up if bar is negative, since passing -1 may return 4294967295
(0xFFFFFFFF) instead (since 32-bit operations zero-extend the high bits of
GPRs), ...

Again, I am not familiar enough with the X86-64 platforms to comment, but
does a C compiler for it really take 32-bit ints and promote them to
64-bit values in this case?

it does not promote the values as such, but it uses the same size spots for
passing/returning values, so in many cases the code will not notice...

this is, however, not the case with things like structs, where 32-bit and
64-bit integers have different sizes. similar goes for local variables, ...
 
