compile int as 64-bit on a 64-bit machine?

Nobody

I was wondering how to compile an int as 64-bit on a 64-bit machine
with gcc. It seems that by default int is treated as 4 bytes.

Use a system with an Alpha (AXP) CPU. That's the only platform I know of
where "int" is 64 bits (aka ILP64). Every other 64-bit platform learned
from that mistake and uses a 32-bit "int", usually with a 64-bit "long"
(aka LP64), but not Win64, which uses a 32-bit "long" (aka LLP64).

If you need a 64-bit signed integer type, use int64_t. Relying upon any
specific standard type to be 64 bits is a mistake.

It's unlikely that the compiler will let you modify the sizes of
standard types, as that would result in the prototypes for library
functions being wrong.
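
For example, a minimal sketch of code that needs exactly 64 bits, written
against the C99 <stdint.h>/<inttypes.h> types instead of any assumption
about "int" or "long":

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* int64_t is exactly 64 bits wherever it is provided, and PRId64
       expands to the matching printf conversion, so this behaves the
       same on ILP32, LP64 and LLP64 systems. */
    int64_t big = INT64_C(9000000000);  /* too large for a 32-bit int */

    printf("big = %" PRId64 " (%zu bytes)\n", big, sizeof big);
    return 0;
}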
 
Nobody

There are a few compilers which have int = 64 bits on 64-bit hardware
(even though this is a very sensible default), but almost all of them
have not done it that way. So (for instance) for an Intel platform I
think it will be very hard to find a compiler that does what you want.

The problem is that "char" really *has* to be 8-bit, so if "int" is
64-bit, "short" can be either 16-bit or 32-bit, but the other one ceases
to be available.

Also, C's type promotion rules mean that anything smaller than an "int"
gets promoted to an "int" at the drop of a hat, which makes working with
code which was written assuming a 32-bit "int" a nightmare.

It's much easier to have a 32-bit "int" and a 64-bit "long", so that you
can work with both 32-bit and 64-bit values without excessive promotions.

Although, even that was too much trouble for Windows, which is heavily
tied to the 80386 architecture, so Win64 has a 32-bit "long" as well.

What use would such a compiler be? You wouldn't be able to use any of the
standard headers; wherever the headers say "int", the corresponding object
code in the libraries will be assuming a 32-bit value. All of the popular
OSes for x86_64 (i.e. Win32, Win64, Linux, MacOSX, *BSD) use a 32-bit "int".

You need ILP64 or SILP64. IOW, you need the HP (formerly Compaq (formerly
DEC)) Alpha architecture.
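
To make the promotion point concrete, here is a minimal sketch (assuming
the usual 32-bit int) of how the width of "int" leaks into arithmetic on
smaller types:

#include <stdio.h>

int main(void)
{
    unsigned short a = 65535, b = 1;

    /* Both operands are promoted to (signed) int before the addition, so
       with a 32-bit int the sum is 65536; on an implementation where
       unsigned short did not fit in int, the operands would instead be
       promoted to unsigned int and the sum would wrap to 0.  Code written
       against one size of int can therefore change meaning when int
       changes size. */
    printf("%d\n", a + b);  /* prints 65536 with a 32-bit int */
    return 0;
}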
 
user923005

Nobody said:
IOW, you need the HP (formerly Compaq (formerly DEC)) Alpha architecture.

It is not just a function of the hardware, but also a function of the
compiler.

Consider:
Next Cmd: type t.c
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    printf("size of char is %u\n", (unsigned) sizeof(char));
    printf("size of short is %u\n", (unsigned) sizeof(short));
    printf("size of int is %u\n", (unsigned) sizeof(int));
    printf("size of long is %u\n", (unsigned) sizeof(long));
    printf("size of long long is %u\n", (unsigned) sizeof(long long));
    return EXIT_SUCCESS;
}
Next Cmd: run t
size of char is 1
size of short is 2
size of int is 4
size of long is 4
size of long long is 8
Next Cmd: sho cpu

ALPHA1, a AlphaStation 200 4/100
Multiprocessing is DISABLED. Uniprocessing synchronization image
loaded.
Minimum multiprocessing revision levels: CPU = 1
 
Stephen Sprunk

BGB said:
not strictly the correct interpretation...

more specifically:
stack elements in x86-64 are always (at least) 8 bytes, and the calling
conventions I am aware of (Win64 and SysV) both use 8-byte spots for
passing int arguments (much as on x86 cdecl, where 32 bits are still used
for passing short).

Doesn't x86-64 pass the first N arguments (technically, first N integer
and first M floating arguments) to non-variadic functions in registers
and leave an empty space in the stack frame in case the callee needs to
spill them?
similarly, the GPRs are 64 bits, and code really doesn't care if the
argument is passed via RCX instead of ECX, it will still see the value as
expected.
Correct.

'foo' may mess up if bar is negative, since passing -1 may return 4294967295
(0xFFFFFFFF) instead (because 32-bit operations zero-extend the high bits of
GPRs), ...

Nope; AMD specified that 32-bit operations are sign-extended for exactly
this reason. (16-bit operations, though, are zero-extended; Intel
messed up when creating the i386.)
this is, however, not the case with things like structs, where 32-bit and
64-bit integers have different sizes. the same goes for local variables, ...

... unless the locals are never spilled, perhaps?

S
 
BGB / cr88192

Stephen Sprunk said:
Doesn't x86-64 pass the first N arguments (technically, first N integer
and first M floating arguments) to non-variadic functions in registers
and leave an empty space in the stack frame in case the callee needs to
spill them?

this differs between the Win64 and SysV calling conventions (Win64 is used
on Windows x64; SysV on Linux x86-64, OSX, and others).

in Win64, 32 bytes are provided as a spill area, and only 4 arguments are
passed in registers.

in SysV, there is no provided spill area (which personally I think was a bad
design idea, since then the callee needs to provide space to spill into, but
oh well...). SysV may pass 6 integer/pointer and 8 floating-point arguments
(in a disjoint manner), and technically literal passed structs may be
unpacked and passed in registers, but I don't bother with this in my
compiler.
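
As a rough illustration of the difference described above (register
assignments as given in the published SysV and Win64 ABI documents; the
function itself is just made up):

/* long long f(long long a, long long b, long long c,
               long long d, long long e, double x);

   SysV x86-64: a->RDI, b->RSI, c->RDX, d->RCX, e->R8, x->XMM0;
                integer and floating-point registers are assigned
                independently, and the caller reserves no spill space.

   Win64:       a->RCX, b->RDX, c->R8, d->R9 (the first four parameter
                slots, whatever their types); e and x go on the stack,
                and the caller always reserves a 32-byte "shadow space"
                the callee may spill RCX/RDX/R8/R9 into. */
long long f(long long a, long long b, long long c,
            long long d, long long e, double x)
{
    return a + b + c + d + e + (long long)x;
}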

Nope; AMD specified that 32-bit operations are sign-extended for exactly
this reason. (16-bit operations, though, are zero-extended; Intel
messed up when creating the i386.)

I wish it were sign extended...
I checked this recently, and it said these were zero-extended.
I considered making it sign extension (in my interpreter) in contrast to the
Intel docs, but figured that it is possible that compilers might depend on
this zero-extension.

then again, I am only likely to use pre-existing compilers for 32-bit x86,
and my compiler for 64-bit x86, so this may be a justifiable infraction...

... unless the locals are never spilled, perhaps?

only Win64 provides for spills, and the space is 32 bytes regardless of the
size or types of arguments.

SysV does not itself provide for spills, and even then, the stack spots are
likely to remain 64 bits (since 64-bit stack spots are still used for < 64
bit arguments).

hence, stack layout is typically different from struct layout...
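
On the zero- vs sign-extension question, here is a small C sketch that
models the behaviour BGB describes (the conversion through uint32_t mirrors
what writing a 32-bit result does to the upper half of a 64-bit GPR):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int32_t neg = -1;

    /* Viewed through the full 64-bit register, a 32-bit result has its
       upper 32 bits cleared, so -1 reads back as 4294967295 (0xFFFFFFFF),
       the exact case discussed above. */
    uint64_t as_gpr = (uint32_t)neg;

    printf("%" PRIu64 "\n", as_gpr);  /* prints 4294967295 */
    return 0;
}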
 
Guest

| I was wondering how to compile an int as 64-bit on a 64-bit machine
| with gcc. It seems that by default int is treated as 4 bytes.
| Thanks in advance!

For each machine type (architecture) a specification exists called an ABI.
This specification defines a number of things about how C is compiled to
binary code on that machine. One very important reason for this is so that
you can link the object code produced from compiling your C source code to
other object code produced for the same machine. An important thing the ABI
defines is the size used in stack space to push arguments for function
calls. And there are a LOT of standard library functions, whether for the
C standard or for standards like POSIX, that make use of int.

You CAN change anything you want. The source for the GNU gcc compiler is
available, and you can make the change there (in its specification files)
if you really want to. You will also need to recompile the various
libraries you use so that they link together properly and don't mix up
data because of inconsistent lengths.

The C standard does not define int, or even long, to guarantee it will be
64 bits in size. A program that definitely needs 64 bits, and needs to be
portable, should code long long, or if targeting C99 environments, one of
the <stdint.h> alternatives like int64_t. Anything less is definitely not
portable; in particular, code that assumes a 64-bit int is ruled out by
the common ABI for x86_64, so this is the time to revise that code.
Tricks using #define or typedef will not work, for multiple reasons.
Also, if you have constant numbers coded that cannot fit in the x86_64
ABI's 32-bit int, you may in some cases need to append the "ll" or "LL"
suffix on those constants. I've also run into cases where the lack of a
"u" or "U" suffix resulted in an incorrect value at the borderline
(e.g. between INT_MAX and UINT_MAX).
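
For instance, a minimal sketch of those suffix pitfalls (the particular
constants are arbitrary):

#include <stdio.h>

int main(void)
{
    /* Without LL this constant does not fit in the x86_64 ABI's 32-bit
       int, and older compilers reject or truncate it. */
    long long big = 5000000000LL;

    /* Borderline case: 2147483648 is one past INT_MAX, so an unsuffixed
       constant gets different types under C90 (unsigned long) and C99
       (long long), which can silently change comparisons and sign
       extension; the U suffix pins the type down. */
    unsigned int border = 2147483648U;

    printf("%lld %u\n", big, border);
    return 0;
}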

Get your editor warmed up.
 
