stdint.h library.


ZikO

Hi

Once I gave a solution to a problem in my native group, I was
told to avoid these variable types:

char
unsigned char
short int
unsigned short int
int
unsigned int
long long int
unsigned long long int

and to use the typedefs from the stdint.h library instead, which are:

int8_t
uint8_t
int16_t
uint16_t
int32_t
uint32_t
int64_t
uint64_t


The reason was that code using unsigned long long, unsigned long, etc.
would not be portable between compilers, especially between computers
with 32-bit and 64-bit architectures. Is that true? I have no way to
check it myself, as I still have a 32-bit machine (I know, it's a
shame :p )

Regards.
 

Alf P. Steinbach

* ZikO:
Hi

Once I gave a solution to a problem in my native group, I was
told to avoid these variable types:

char
unsigned char
short int
unsigned short int
int
unsigned int
long long int
unsigned long long int

and to use the typedefs from the stdint.h library instead, which are:

int8_t
uint8_t
int16_t
uint16_t
int32_t
uint32_t
int64_t
uint64_t

Possibly you have misunderstood the advice.

The [stdint.h] types are useful in some cases.

Using them exclusively reduces portability greatly; using them appropriately
(when you need types of an exact or a certain minimum bit width) increases
portability.
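
A minimal sketch of that distinction (the packet layout below is invented
purely for illustration): use a fixed-width type where an external format
dictates an exact size, and a plain built-in type everywhere else.

#include <stdint.h>   // <cstdint> once C++0x compilers provide it
#include <cstddef>

struct PacketHeader {
    uint16_t length;     // the wire format says "exactly 16 bits"
    uint32_t checksum;   // and "exactly 32 bits"
};

// Ordinary counting needs no exact width; a built-in type does fine.
std::size_t countNonZero(const unsigned char* buf, std::size_t n)
{
    std::size_t count = 0;
    for (std::size_t i = 0; i != n; ++i)
        if (buf[i] != 0)
            ++count;
    return count;
}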

The reason was that code using unsigned long long, unsigned long, etc.
would not be portable between compilers, especially between computers
with 32-bit and 64-bit architectures. Is that true?

No. It depends on what the types are used for.

I have no way to
check it myself, as I still have a 32-bit machine (I know, it's a
shame :p )


Cheers & hth.,

- Alf
 

ZikO

Alf said:
Possibly you have misunderstood the advice.

Possibly, but judging from the context of that strong suggestion, I believe
I did understand it correctly :p; I was supposed to replace every built-in
type with its "awesome" counterpart.
The [stdint.h] types are useful in some cases.
Using them exclusively reduces portability greatly; using them
appropriately (when you need types of an exact or a certain minimum bit
width) increases portability.

That's what I wanted to know =)
No. It depends on what the types are used for.

So this is essentially the same idea as stated above.

Regards.
 

Michael Tsang

ZikO said:
Hi

Once I gave a solution to a problem in my native group, I was
told to avoid these variable types:

char
unsigned char
short int
unsigned short int
int
unsigned int
long long int
unsigned long long int

and to use the typedefs from the stdint.h library instead, which are:

int8_t
uint8_t
int16_t
uint16_t
int32_t
uint32_t
int64_t
uint64_t


The reason was that code using unsigned long long, unsigned long, etc.
would not be portable between compilers, especially between computers
with 32-bit and 64-bit architectures. Is that true? I have no way to
check it myself, as I still have a 32-bit machine (I know, it's a
shame :p )

Regards.
There is a C++ version of stdint.h called cstdint. It is part of the C++0x
standard and requires a C++0x compiler.
 

SG

ZikO said:
Hi

Once I gave a solution to a problem in my native group, I was
told to avoid these variable types:

char
unsigned char

Actually, there are three different character types:

char
unsigned char
signed char

where "char" has an implementation-defined "signedness".

As long as you're OK with the minimum guarantees the integer types
offer you're free to use them.
( see http://home.att.net/~jackklein/c/inttypes.html#limits )

"long long" is a C99 extension. The official C++ standard doesn't yet
support it. But most compilers do.
and to use the typedefs from the stdint.h library instead, which are:

int8_t
uint8_t
int16_t
uint16_t
int32_t
uint32_t
int64_t
uint64_t

If you really need an integer with some specific width you should use
these types, yes. Who knows, on some machine in the future "long
long" might be a 128-bit integer. When all you need is a 64-bit int
you should prefer "int64_t" to "long long", IMHO.

Also, I'd like to mention that the C++ standard doesn't require two's
complement for signed numbers. It allows one's complement, two's
complement and sign/magnitude. So, bit manipulation on signed numbers
is also not portable.
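
A rough sketch of the safer alternative (my example): shifting and
masking unsigned values is fully defined, whereas, for instance,
right-shifting a negative signed value is implementation-defined.

#include <stdint.h>

inline uint32_t set_bit(uint32_t flags, unsigned bit)
{
    return flags | (uint32_t(1) << bit);   // well defined for bit < 32
}

inline bool test_bit(uint32_t flags, unsigned bit)
{
    return ((flags >> bit) & 1u) != 0;     // no sign bit to worry about
}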

Cheers!
SG
 

Ian Collins

ZikO said:
Hi

Once I gave a solution to a problem in my native group, I was
told to avoid these variable types:

char
unsigned char
short int
unsigned short int
int
unsigned int
long long int
unsigned long long int

and to use the typedefs from the stdint.h library instead, which are:

Alf gave a good answer, but one point everyone has missed: stdint.h
isn't a library, it is a header.
The reason was that code using unsigned long long, unsigned long, etc.
would not be portable between compilers, especially between computers
with 32-bit and 64-bit architectures. Is that true?

Only if the representation of the type matters.
 

James Kanze

ZikO wrote:

Not if the code is correctly written. Just the opposite is
true, in fact---some machines don't support things like int32_t.

You should only use one of the above types when the code won't
work unless the type is exactly n bits.
Michael Tsang said:
There is a C++ version of stdint.h called cstdint. It is part of the
C++0x standard and requires a C++0x compiler.

There will be a C++ version of stdint.h, in the next version of
the standard. There isn't one yet (but some compilers probably
already provide it).
 

James Kanze

SG said:
If you really need an integer with some specific width you
should use these types, yes. Who knows, on some machine in
the future "long long" might be a 128-bit integer. When all
you need is a 64-bit int you should prefer "int64_t" to "long
long", IMHO.

If it would cause problems if the type had more bits than 64,
yes, you should prefer int64_t. But I'd start by asking why it
would cause problems.

The "standard" type for an integral value is int. IMHO, you
should use this unless there is a real reason not to.
Also, I'd like to mention that the C++ standard doesn't
require two's complement for signed numbers. It allows one's
complement, two's complement and sign/magnitude. So, bit
manipulation on signed numbers is also not portable.

The standard does require that the above types be two's
complement (the signed ones, anyway). But it doesn't require
them to be present if they can't be supported directly by the
hardware.

Your comment concerning bit manipulation is correct, however;
doing bit manipulation is one real reason not to use int.
 

Bo Persson

ZikO said:
Alf said:
Possibly you have misunderstood the advice.

Possibly, but judging from the context of that strong suggestion, I believe
I did understand it correctly :p; I was supposed to replace every built-in
type with its "awesome" counterpart.
The [stdint.h] types are useful in some cases.
Using them exclusively reduces portability greatly; using them
appropriately (when you need types of an exact or a certain minimum
bit width) increases portability.

That's what I wanted to know =)
No. It depends on what the types are used for.

So this is essentially the same idea as stated above.

As always, you have to consider the use of your variables. For
example, one difference between the built-in types and the typedefs is
that int64_t is *exactly* 64 bits wide, while long long is *at least*
64 bits wide.

So you should use int64_t when the exact width is important, and long
long when the minimum range is important.
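
A compile-time sketch of that difference, using the old negative-array-size
trick (the check_* typedef names are mine):

#include <stdint.h>
#include <climits>

// int64_t, where it exists at all, is exactly 64 bits with no padding,
// so this check can never fire:
typedef char check_exact_64[sizeof(int64_t) * CHAR_BIT == 64 ? 1 : -1];

// long long is only guaranteed to be *at least* 64 bits wide, so only
// the >= form of the check is portable:
typedef char check_at_least_64[sizeof(long long) * CHAR_BIT >= 64 ? 1 : -1];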


There is no single right answer. :)


Bo Persson
 

jacob navia

James said:
Not if the code is correctly written. Just the opposite is
true, in fact---some machines don't support things like int32_t.

What nonsense!

int a = 45765;

Is that OK?

Well, it depends on whether "int" has at least 17 bits :)

(leaving 1 for sign)

If not, you are getting RUBBISH.

So the number of bits in an "int" IS very important,
mind you.

Not for you, obviously.

You never use numbers bigger than 32767.
 

Bo Persson

jacob said:
What nonsense!

int a = 45765;

Is that OK?

Well, it depends on whether "int" has at least 17 bits :)

(leaving 1 for sign)

If not, you are getting RUBBISH.

So the number of bits in an "int" IS very important,
mind you.

Not for you, obviously.

You never use numbers bigger than 32767.

The thing is that some machines don't support int32_t, because that
requires 32-bit two's complement operations. What if the hardware is a
36-bit one's complement machine?

http://unisys.com/products/mainframes/index.htm


Bo Persson
 

James Kanze

jacob navia said:
What nonsense!

ISO/IEC 9899:1999, section 7.18.1.1:
The typedef name intN_t designates a signed integer type
with width N, no padding bits, and a two’s complement
representation. Thus, int8_t denotes a signed integer
type with a width of exactly 8 bits.

The typedef name uintN_t designates an unsigned integer
type with width N. Thus, uint24_t denotes an unsigned
integer type with a width of exactly 24 bits.

These types are optional. However, if an implementation
provides integer types with widths of 8, 16, 32, or 64
bits, it shall define the corresponding typedef names.

What exactly don't you understand about "These types are
optional"? (For what it's worth, I know of at least two
implementations which don't support int32_t, because there
is no 32-bit type, and even if there were, it wouldn't be two's
complement.)
int a = 45765;
Is that OK?
Well, it depends on whether "int" has at least 17 bits :)

It depends on your portability requirements. If you need to
support machines whose ints might be narrower than that, it's not
OK. If you need an integral type which contains values
larger than INT_MAX, then you're in a special case where you
need to use something other than int (but almost certainly
not int32_t).
(leaving 1 for sign)
If not, you are getting RUBBISH.
So the number of bits in an "int" IS very important, mind
you.
Not for you, obviously.
You never use numbers bigger than 32767.

Not in an int, in code which has to run on machines whose
ints have fewer than 17 bits.

The usual solution is to provide your own typedefs.
Something along the lines of:

#if INT_MAX >= 100000
typedef int MyCounter ;
#else
typedef long MyCounter ;
#endif

(except, of course, you probably wouldn't use conditional
compilation, but different header files). The number of
types concerned shouldn't usually be very large.

A lot of applications I've seen also use long systematically
if they need values larger than 2^15.

The C standard provides things like int_least32_t and
int_fast32_t, which were designed for this sort of thing
(hidden behind a typedef, of course). In practice, I've
never seen them used---I rather suspect that most people had
solved the problem as above before they were introduced, and
didn't find it worth changing.
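
For what it's worth, a short sketch of that alternative (the LineCount
name is mine): int_least32_t is the smallest type with at least 32 bits
and, unlike int32_t, must exist on every implementation that provides
<stdint.h>.

#include <stdint.h>
#include <iostream>

typedef int_least32_t LineCount;

int main()
{
    LineCount lines = 100000;   // safely beyond 32767 everywhere
    std::cout << lines << '\n';
    return 0;
}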
 

jacob navia

James said:
ISO/IEC 9899:1999, section 7.18.1.1:
The typedef name intN_t designates a signed integer type
with width N, no padding bits, and a two’s complement
representation. Thus, int8_t denotes a signed integer
type with a width of exactly 8 bits.

The typedef name uintN_t designates an unsigned integer
type with width N. Thus, uint24_t denotes an unsigned
integer type with a width of exactly 24 bits.

These types are optional. However, if an implementation
provides integer types with widths of 8, 16, 32, or 64
bits, it shall define the corresponding typedef names.

What exactly don't you understand about "These types are
optional"? (For what it's worth, I know of at least two
implementations which don't support int32_t, because there
is no 32-bit type, and even if there were, it wouldn't be two's
complement.)

Now, take a deep breath and realize that I am not disputing that
those types are optional. I have never said they are
required by all implementations. I just said that it is OK to USE them.
It depends on your portability requirements. If you need to
support machines whose ints might be narrower than that, it's not
OK. If you need an integral type which contains values
larger than INT_MAX, then you're in a special case where you
need to use something other than int (but almost certainly
not int32_t).

If I use int32_t, I will get a compilation error
when I compile on a system that doesn't
have a 32-bit integer.

That is WAY better than getting incorrect results and debugging
for hours!
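
A one-line sketch of that failure mode (the variable name is mine): on an
implementation without a 32-bit two's complement type, <stdint.h> simply
never defines int32_t, so the declaration refuses to compile instead of
silently misbehaving.

#include <stdint.h>

int32_t sample_count = 100000;   // compile error where int32_t is absent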
 

Jorgen Grahn

ZikO said:
Possibly, but judging from the context of that strong suggestion, I believe
I did understand it correctly :p; I was supposed to replace every built-in
type with its "awesome" counterpart.

Doesn't surprise me. I see a lot of code like that, almost always with
home-made typedefs instead of the standard (C) types.

When I had my Amiga back in the day, I always used such typedefs
(WORD/UWORD for 16-bit, LONG and ULONG for 32-bit integers) because
the AmigaDos APIs did, and it seemed safer somehow. Then I ran into
problems every time I wanted to port my code to Unix, so I stopped
doing it.

If I recall correctly, Microsoft APIs still use such types a lot. That
might be the reason people are still fixated on it.

/Jorgen
 
