sizeof C integral types


ark

At the risk of invoking flames from one Tom St Denis of Ottawa :)

Is there any guarantee that, say,
sizeof(int) == sizeof(unsigned int)
sizeof(long) > sizeof(char) ?

Thanks,
Ark
 

Russell Hanneken

ark said:
Is there any guarantee that, say,
sizeof(int) == sizeof(unsigned int)
sizeof(long) > sizeof(char) ?

I don't believe the standard guarantees that either is true.
 

Hallvard B Furuseth

ark said:
Is there any guarantee that, say,
sizeof(int) == sizeof(unsigned int)
Yes.

sizeof(long) > sizeof(char) ?

No. BTW, sizeof(char) == 1 by definition.

However, sizeof(long) == 1, which I think implies sizeof(int) == 1,
would break several very common idioms, e.g.

int ch;
while ((ch = getchar()) != EOF) { ... }

because EOF is supposed to be a value which is different from all
'unsigned char' values. That is only possible when 'int' is wider
than 'unsigned char'.

Personally I've never seen a program which worried about this
possibility, though I suppose such programs exist. It might
be different with freestanding implementations (implementations
which do not use the C library, so getchar() is no problem).
 

E. Robert Tisdale

ark said:
At the risk of invoking flames from one Tom St Denis of Ottawa :)

Is there any guarantee that, say,
sizeof(int) == sizeof(unsigned int)
sizeof(long) > sizeof(char)?

Are you aware of the type definitions in <stdint.h>?
 

Ben Pfaff

Hallvard B Furuseth said:
However, sizeof(long) == 1, which I think implies sizeof(int) == 1,

Actually there's no such implication. The range of int is a
subrange of the range of long, but there's no such guarantee on
the size in bytes of these types.

However, it would be a strange system for which sizeof(long) <
sizeof(int). I don't know of any.
 

ark

Hallvard B Furuseth said:
However, sizeof(long) == 1, which I think implies sizeof(int) == 1,
would break several very common idioms, e.g.

int ch;
while ((ch = getchar()) != EOF) { ... }

because EOF is supposed to be a value which is different from all
'unsigned char' values. That is only possible when 'int' is wider
than 'unsigned char'.

Personally I've never seen a program which worried about this
possibility, though I suppose such programs exist. It might
be different with freestanding implementations (implementations
which do not use the C library, so getchar() is no problem).

I believe that a 16-bit DSP with a 16-bit byte would have
sizeof(int)==sizeof(short) (and ==1).
- Ark
 

CBFalconer

E. Robert Tisdale said:
Are you aware of the type definitions in <stdint.h>?

Only implied misinformation from Trollsdale this time. <stdint.h>
is a C99 artifact, and the types defined therein are present only
when the implementation has suitable types. So you could do
something like:

#if defined(sometype)
#define mytype sometype
#else
#define mytype whatever
#endif

with suitable guards for a C99 system.
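A concrete version of the guard sketched above, assuming a C99 compiler; the typedef name my_i32 is invented for illustration. INT32_MAX is defined by <stdint.h> exactly when int32_t exists, so it doubles as the feature test:

```c
#include <stdint.h>   /* C99; the exact-width types are optional */
#include <limits.h>

/* int32_t is defined only when the implementation has a 32-bit,
   no-padding, two's complement type; test its companion macro. */
#ifdef INT32_MAX
typedef int32_t my_i32;   /* exact-width type is available */
#else
typedef long my_i32;      /* fallback: long is at least 32 bits */
#endif
```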
 

Christian Bau

E. Robert Tisdale said:
Are you aware of the type definitions in <stdint.h>?

Quite possibly he is aware of them and knows that they have nothing to
do with the question asked.
 

Jack Klein

CBFalconer said:
Only implied misinformation from Trollsdale this time. <stdint.h>
is a C99 artifact, and the types defined therein are present only
when the implementation has suitable types. So you could do
something like:

#if defined(sometype)
#define mytype sometype
#else
#define mytype whatever
#endif

with suitable guards for a C99 system.

It is actually quite possible, and very useful, to build a subset of
<stdint.h> for any compiler. Interestingly enough, the (complete, not
subset) <stdint.h> that comes with ARM's ADS compiles and works
perfectly with Visual C++ 6.
 

CBFalconer

Jack said:
.... snip ...

It is actually quite possible, and very useful, to build a subset
of <stdint.h> for any compiler. Interestingly enough, the
(complete, not subset) <stdint.h> that comes with ARM's ADS
compiles and works perfectly with Visual C++ 6.

Actually I would expect that to be possible with any system where
things are built on 1, 2, 4, etc. octet-sized objects. I think
the availability is customized by simply omitting the appropriate
definitions from <stdint.h>.
 

E. Robert Tisdale

ark said:
I believe that a 16-bit DSP with a 16-bit byte would have
sizeof(int)==sizeof(short) (and ==1).

What do you mean by byte?
Did you mean to say "machine word" or "data path"?
Or did you really mean 16-bit characters?

Take a look at
The Vector, Signal and Image Processing Library (VSIPL):

http://www.vsipl.org/

It defines types that are supposed to be portable
to a wide variety of DSP target platforms.
 

Dan Pop

ark said:
At the risk of invoking flames from one Tom St Denis of Ottawa :)

Is there any guarantee that, say,
sizeof(int) == sizeof(unsigned int)

Explicitly guaranteed.
sizeof(long) > sizeof(char) ?

Implicitly guaranteed for hosted implementations, because the library
specification relies on INT_MAX >= UCHAR_MAX and this would be impossible
if sizeof(int) == 1. Since LONG_MAX cannot be lower than INT_MAX,
sizeof(long) cannot be 1, either, on a hosted implementation.

Freestanding implementations with sizeof(long) == 1 do exist.

Dan
 

pete

E. Robert Tisdale said:
What do you mean by byte?
Did you mean to say "machine word" or "data path"?
Or did you really mean 16-bit characters?

Since he wrote sizeof(int)==1,
he obviously meant exactly what he said.

Whatever else "byte" may mean in general programming,
"byte" has a specific definition in C,
and that is the way that that word is used on this newsgroup.
 

pete

Dan said:
Implicitly guaranteed for hosted implementations, because the library
specification relies on INT_MAX >= UCHAR_MAX and this would be
impossible if sizeof(int) == 1.

I don't recall that ever being stated so plainly
on this newsgroup before.
 

Alex

Dan Pop said:
sizeof(long) > sizeof(char) ?

Implicitly guaranteed for hosted implementations, because the
library specification relies on INT_MAX >= UCHAR_MAX [...]

By which I presume you mean an 'int' must be able to hold all possible
values of an 'unsigned char', required in (for example) getchar()?

Alex
 
A

Arthur J. O'Dwyer

pete said:
I don't recall that ever being stated so plainly
on this newsgroup before.

Nor do I. And even though I at first thought it was technically
wrong because of padding bits, I now think that while it still may be
wrong, it's less wrong than I thought.

a) Plain char is unsigned. INT_MAX must be at least UCHAR_MAX so that
getchar() can return any plain char value, and INT_MIN must be less than
or equal to -32767. So the total number of values of 'int' must be at
least UCHAR_MAX+32768, which requires more bits than CHAR_BIT. Q.E.D.

b) Plain char is signed. The range of char, i.e., of signed char, must
be a subrange of the range of int. But is it possible we might have

#define CHAR_BIT 16
#define UCHAR_MAX 65535
#define SCHAR_MIN -32767 /* !!! */
#define SCHAR_MAX 32767
#define INT_MIN -32768
#define INT_MAX 32767
#define EOF -32768

Is anything wrong, from the C standpoint, with these definitions?

-Arthur
 

nrk

Arthur said:
Nor do I. And even though I at first thought it was technically
wrong because of padding bits, I now think that while it still may be
wrong, it's less wrong than I thought.

a) Plain char is unsigned. INT_MAX must be at least UCHAR_MAX so that
getchar() can return any plain char value, and INT_MIN must be less than
or equal to -32767. So the total number of values of 'int' must be at
least UCHAR_MAX+32768, which requires more bits than CHAR_BIT. Q.E.D.

b) Plain char is signed. The range of char, i.e., of signed char, must
be a subrange of the range of int. But is it possible we might have

#define CHAR_BIT 16
#define UCHAR_MAX 65535
#define SCHAR_MIN -32767 /* !!! */
#define SCHAR_MAX 32767
#define INT_MIN -32768
#define INT_MAX 32767
#define EOF -32768

Is anything wrong, from the C standpoint, with these definitions?

Yes, something is wrong, at least if signed char is two's complement and
the minimum-value bit pattern is not a trap representation: then, with
CHAR_BIT 16, SCHAR_MIN *has* to be -32768.
This follows from the specification that states that value bits in signed
types have the same meaning as corresponding value bits in the unsigned
types, and the stipulation that an unsigned integer type with n value bits
must be able to represent values in the range [0, 2^n - 1].

-nrk.
 
