David said:
Is there any standard as to what a long and an int mean, or is it compiler
dependent? Or, furthermore, architecture dependent?
It's compiler and architecture dependent. C only specifies the minimum
size of a type, not its actual implemented size. To comply with C, int
needs to be at least 16 bits, long needs to be at least 32 bits, and
long long needs to be at least 64 bits. A compiler may decide to
implement int == long == long long == 64 bits on any platform (it's
common nowadays to see 32-bit ints on 8-bit platforms).
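For example, a quick program like this prints what a particular
implementation actually uses (just a sketch; the output varies from one
compiler and platform to another):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Only the minimum ranges are guaranteed by the standard; the
       actual widths reported below are implementation dependent. */
    printf("bits per byte     : %d\n", CHAR_BIT);
    printf("sizeof(int)       : %lu bytes\n", (unsigned long)sizeof(int));
    printf("sizeof(long)      : %lu bytes\n", (unsigned long)sizeof(long));
    printf("sizeof(long long) : %lu bytes\n", (unsigned long)sizeof(long long));
    return 0;
}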
I know a "short short long int" isn't actually a recognised type, but if you
wanted to make sure it was 16-bit, 32-bit or 64-bit etc., could you rely on
these type names alone? If not, is there any way to ensure you get a
64/32/16-bit int when you want one?
I usually use one of two methods (sketches of both follow below):
If the data is not arithmetic and I'm only doing bitwise operations on
it, then I treat it as a byte array. This is not 100% portable, since I
assume a byte is exactly 8 bits (the method does not really depend on
that assumption; it's just easier with it).
If the data is arithmetic, then I choose the largest type that can
properly represent the data and use masking to truncate the result.
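A rough sketch of the first method (the function name and the XOR
operation are just an illustration, not from any real program):

#include <stddef.h>

/* Treat the data as an array of unsigned char and do the bitwise work
   one byte at a time.  The length is in bytes, so nothing here depends
   on int or long having any particular width. */
void xor_buffer(unsigned char *buf, size_t len, unsigned char key)
{
    size_t i;
    for (i = 0; i < len; i++)
        buf[i] ^= key;
}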
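And a sketch of the second method, assuming you want 16-bit wrap-around
arithmetic (add16 is a made-up name):

/* Do the arithmetic in a type guaranteed to be wide enough (unsigned
   long is at least 32 bits) and mask the result down to the 16 bits you
   actually want, discarding any extra width the type happens to have. */
unsigned long add16(unsigned long a, unsigned long b)
{
    return (a + b) & 0xFFFFUL;
}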
The real issue of wanting an exact bit length for a data type comes up when
you want to exchange the data with an external program, either by
transmitting it or by saving it to a file. There are lots of threads
discussing this issue, usually under the heading of endianness and
portability, but the advice is still useful here: read/write the data a
byte at a time using shift operators. This allows you to discard/ignore
the extra bits in the variable if it happens to have more bits than you
intended.
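For instance, something along these lines writes and reads a 32-bit
quantity one byte at a time, least significant byte first (the function
names are made up and error handling is kept minimal):

#include <stdio.h>

/* Write the low 32 bits of 'value' to the stream, one byte at a time,
   least significant byte first.  The shifts pick out each byte and the
   mask throws away any extra bits if unsigned long is wider than 32. */
int write_u32(FILE *fp, unsigned long value)
{
    int i;
    for (i = 0; i < 4; i++)
        if (fputc((int)((value >> (8 * i)) & 0xFFUL), fp) == EOF)
            return -1;
    return 0;
}

/* Read the value back the same way.  Reassembling with shifts makes the
   result independent of the host's byte order and of sizeof(long). */
int read_u32(FILE *fp, unsigned long *out)
{
    unsigned long value = 0;
    int i, c;
    for (i = 0; i < 4; i++) {
        if ((c = fgetc(fp)) == EOF)
            return -1;
        value |= (unsigned long)c << (8 * i);
    }
    *out = value;
    return 0;
}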