C99 integer types

justinx

Hi all,

I have a question that I have been unable to answer. It is bugging me. It is specifically related to the C99 integer types.

My question has been posted on stackoverflow. But I have received only one answer that was of no help.

http://stackoverflow.com/questions/11381764/arm-cortex-m3-uint-fast32-t-vs-uint32-t
http://stackoverflow.com/questions/11518212/c99s-fixed-width-integer-types

To be very specific. I am working with an STM32F103RCT6 Cortex-M3. The C compiler I am using is Code Sourcery G++ Lite 2011.03-42 (4.5.2).

In <stdint.h> the following types are defined:

typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;

-. On this platform sizeof(int) == sizeof(long). Why are the new types not all unsigned ints or all unsigned longs?

-. What I find even more interesting is that the underlying type of uint_least32_t (unsigned long) is larger than (or at least equal to) the underlying type of uint_fast32_t (unsigned int). This does not seem logical to me. Surely the least-width integer type would use the smallest basic type possible.

Any insight into the selection process for establishing the underlying data types for the fixed, minimum and fastest width types would be great.

This leads to one other question. If sizeof(int) == sizeof(long), is there ANY difference (performance or otherwise) in using one over the other?

Thanks

Justin
 
Keith Thompson

justinx said:
I have a question that I have been unable to answer. It is bugging
me. It is specifically related to the C99 integer types.

My question has been posted on stackoverflow. But I have received only
one answer that was of no help.

http://stackoverflow.com/questions/11381764/arm-cortex-m3-uint-fast32-t-vs-uint32-t
http://stackoverflow.com/questions/11518212/c99s-fixed-width-integer-types

Look again; I've just posted answers to both questions.
To be very specific. I am working with an STM32F103RCT6 Cortex-M3. The
C compiler I am using is Code Sourcery G++ Lite 2011.03-42 (4.5.2).

In <stdint.h> the following types are defined:

typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;

-. On this platform sizeof(int) == sizeof(long). Why are the new types
not all unsigned ints or all unsigned longs?

It's an arbitrary choice. I suspect it was either a whim of the
author, or that two or more developers with different ideas worked on
that version of <stdint.h>. As long as the types chosen meet the
requirements of the standard (as I presume they do), there shouldn't be
any real problem.

Code that makes non-portable assumptions about which predefined type(s)
uint32_t et al are compatible with could break, but writing such code is
a bad idea anyway.
-. What I find even more interesting is that the underlying type of
uint_least32_t (unsigned long) is larger than (or at least equal to)
the underlying type of uint_fast32_t (unsigned int). This does not
seem logical to me. Surely the least-width integer type would use the
smallest basic type possible.

You said that unsigned int and unsigned long are the same size, so no,
uint_least32_t is *not* larger than uint_fast32_t. It's exactly the
same size.
Any insight into the selection process for establishing the
underlying data types for the fixed, minimum and fastest width types
would be great.

Only the authors of that implementation can give you a definitive answer
to that. My answer is that it doesn't matter.
This leads to one other question. If sizeof(int) == sizeof(long), is
there ANY difference (performance or otherwise) in using one over the
other?

The C standard doesn't address that question, but given that they're the
same size I can't think of any reason there should be any performance
difference. You should get exactly the same machine code.
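
A quick way to test that claim (just a sketch, assuming a GCC-style -S
option on your cross compiler) is to compile a minimal pair of functions
and diff the generated assembly:

/* compile with: arm-none-eabi-gcc -O2 -S test.c */
unsigned int  add_ui(unsigned int a, unsigned int b)   { return a + b; }
unsigned long add_ul(unsigned long a, unsigned long b) { return a + b; }
/* On a platform where int and long are both 32 bits, the two
   functions should assemble to identical instructions. */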
 
James Kuyper

justinx said:
Hi all,

I have a question that I have been unable to answer. It is bugging me. It is specifically related to the C99 integer types.

My question has been posted on stackoverflow. But I have received only one answer that was of no help.

http://stackoverflow.com/questions/11381764/arm-cortex-m3-uint-fast32-t-vs-uint32-t
http://stackoverflow.com/questions/11518212/c99s-fixed-width-integer-types

To be very specific. I am working with an STM32F103RCT6 Cortex-M3. The C compiler I am using is Code Sourcery G++ Lite 2011.03-42 (4.5.2).

In <stdint.h> the following types are defined:

typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;

-. On this platform sizeof(int) == sizeof(long). Why are the new types not all unsigned ints or all unsigned longs?

You'll have to ask the implementor. Those choices don't appear to
violate any obligation imposed by the standard.
-. What I find even more interesting is that the underlying type of uint_least32_t (unsigned long) is larger than (or at least equal to) the underlying type of uint_fast32_t (unsigned int). This does not seem logical to me. Surely the least-width integer type would use the smallest basic type possible.

Since unsigned long qualifies as uint32_t, it must have exactly 32 value
bits, and no padding bits. Since uint_least32_t must have at least 32
value bits, it's not possible for it to be a typedef of any type smaller
than unsigned long on this platform. You've said that int and long have
the same size, so 'unsigned int' would not be an example of a smaller
type that could be used.
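
Both consequences can be checked at compile time. Here is a sketch using
the old negative-array-size trick (which doesn't require C11's
_Static_assert, a feature newer than the toolchain under discussion):

#include <stdint.h>
#include <limits.h>

/* uint32_t has exactly 32 value bits and no padding bits, so it
   occupies exactly 32 bits of storage: */
typedef char uint32_is_32_bits[(sizeof(uint32_t) * CHAR_BIT == 32) ? 1 : -1];

/* uint_least32_t has at least 32 value bits, so it cannot occupy
   less storage than uint32_t: */
typedef char least32_not_narrower[(sizeof(uint_least32_t) >= sizeof(uint32_t)) ? 1 : -1];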
Any insight into the selection process for establishing the underlying data types for the fixed, minimum and fastest width types would be great.

Each fixed-size type must have exactly the specified number of bits, 2's
complement representation (if signed), and no padding bits. The least-
and fast-sized types must have at least the specified width. The
least-sized types must be the smallest type with at least that width.
The fast-sized types should be the fastest type with at least that
width, but "fast" is not well-defined in this context, so that's not an
enforceable part of the requirements, and the standard explicitly
endorses selecting a type arbitrarily if an implementation cannot find
any better reason for designating a type as 'fast'.
Signed and unsigned types must come in corresponding pairs that have the
same storage and alignment requirements.

Except for those restrictions, an implementation is free to choose the
size-named types any way it wants - including using 'long' for some
32-bit types, and 'int' for others.
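
To see what a given implementation actually chose, a small test program
(a sketch; nothing here is specific to your toolchain) can print the
storage sizes:

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void) {
    /* Storage size in bits of each 32-bit-ish type this
       implementation selected. */
    printf("uint32_t:       %u bits\n", (unsigned)(sizeof(uint32_t) * CHAR_BIT));
    printf("uint_least32_t: %u bits\n", (unsigned)(sizeof(uint_least32_t) * CHAR_BIT));
    printf("uint_fast32_t:  %u bits\n", (unsigned)(sizeof(uint_fast32_t) * CHAR_BIT));
    return 0;
}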
This leads to one other question. If sizeof(int) == sizeof(long), is there ANY difference (performance or otherwise) in using one over the other?

Yes - one could be 2's complement, the other could be 1's complement.
One could have a different number of padding bits than the other. One
could have stricter alignment requirements than the other. They could
have different endianness. All of those things could affect the
performance of code using those types.

However, while that would not render the implementation non-conforming,
I can't think of any reasons why an implementation would do any of those
things. An implementation that supported a mixture of types with
different endianness would probably use the same endianness for all
standard types, and provide extended types with the other endianness. The
same is true of the other characteristics I mentioned above.
 
Eric Sosman

[... reformatted for legibility ...]
In <stdint.h> the following types are defined:

typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;

-. On this platform sizeof(int) == sizeof(long). Why are the new types
not all unsigned ints or all unsigned longs?

You'll have to ask the implementors. One possibility (and it's
only a possibility) is that the same header is used both on your
platform and on platforms with other characteristics.
-. What I find even more interesting is that the underlying type of
uint_least32_t (unsigned long) is larger than (or at least equal to)
the underlying type of uint_fast32_t (unsigned int). This does not
seem logical to me.

Didn't you just tell us that int and long were the same size
(on your platform)? If so, why worry that either is "larger" than
the other, given that they're the same?
Surely the least width integer type would use the smallest basic
type possible.

It must use the smallest type possible, regardless of whether
that type is basic or extended. If there are multiple suitable
candidates of equal size, the Standard does not dictate which shall
be used.
Any insight into the selection process for establishing the
underlying data types for the fixed, minimum and fastest width
types would be great.

If by "fixed" you mean "exact-width" (7.20.1.1), the selection
is straightforward: The implementation declares a type of exactly
the specified width (if it has one) that uses two's complement (for
signed exact-width types). If there's more than one such type,
the implementation can use any of them. The choices for uintN_t
and intN_t are independent: One might be `unsigned long' while the
other is `__builtin_twos_complement_32'.

It's much the same for "minimum-width" types (7.20.1.2). The
chosen type must satisfy two constraints: First, it must be at least
as wide as specified (possibly wider), and second, it must be the
narrowest such type. Again, if there's more than one suitable type
the Standard does not dictate a choice. (Note that two's complement
is not required.)

The "fastest" types (7.20.1.3) are dicier, since "fastest" is
defined only by hand-waving. The chosen type must be of at least
the specified width, but there's no other enforceable constraint
since "fastest" is undefined and hence unenforceable. The general
idea is that on some machines the manipulation of narrow quantities
might be expensive, involving shifting and masking and stuff, and
if so the implementor might use 64 bits, say, for uint_fast16_t to
allow the use of "full-word" instructions. But, since "fastest" is
open to interpretation, the implementor's choice is not really
impeachable.
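
That hand-waving is visible in practice. As one illustration (from a
different platform, not the OP's toolchain), glibc on x86-64 makes
uint_fast16_t a full 64-bit word; a sketch to see it:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* On x86-64 glibc this prints 2 and 8: the implementor judged
       full-word arithmetic "fastest" for the fast types. */
    printf("sizeof(uint16_t)      = %u\n", (unsigned)sizeof(uint16_t));
    printf("sizeof(uint_fast16_t) = %u\n", (unsigned)sizeof(uint_fast16_t));
    return 0;
}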
This leads to one other question. If sizeof(int) == sizeof(long),
is there ANY difference (performance or otherwise) in using one over
the other?

How long is a piece of string?

In other words, it depends on the platform. The fact that two
types have the same sizeof does not imply that they have the same
performance characteristics. (For an obvious counterexample, observe
that sizeof(float) == sizeof(int) on many systems.) I have not, myself,
encountered a system where sizeof(int) == sizeof(long) *and* the two
types used different underlying machine representations, but there are
lots of C's I have never sailed. Such a system could exist and support
a conforming C implementation -- for example, a system might use its
native ones' complement representation for `int' while going to extra
work to simulate two's complement for a `long' of the same width.
 
Jorgen Grahn

Keith Thompson said:
It's an arbitrary choice. I suspect it was either a whim of the
author, or that two or more developers with different ideas worked on
that version of <stdint.h>. As long as the types chosen meet the
requirements of the standard (as I presume they do), there shouldn't be
any real problem.

One real possibility is that the author said "let's make these names
as incompatible as possible, to help the programmers write portable
code".

/Jorgen
 
Barry Schwarz

Hi all,

I have a question that I have been unable to answer. It is bugging me. It is specifically related to the C99 integer types.
To be very specific. I am working with an STM32F103RCT6 Cortex-M3. The C compiler I am using is Code Sourcery G++ Lite 2011.03-42 (4.5.2).

In <stdint.h> the following types are defined:

typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;

-. On this platform sizeof(int) == sizeof(long). Why are the new types not all unsigned ints or all unsigned longs?

-. What I find even more interesting is that the underlying type of uint_least32_t (unsigned long) is larger than (or at least equal to) the underlying type of uint_fast32_t (unsigned int). This does not seem logical to me. Surely the least-width integer type would use the smallest basic type possible.

Any insight into the selection process for establishing the underlying data types for the fixed, minimum and fastest width types would be great.

This leads to one other question. If sizeof(int) == sizeof(long), is there ANY difference (performance or otherwise) in using one over the other?

That the contributors to this group may not be able to infer why the designers chose what they did does not imply the absence of a rationale. For example, while they are the same size, unsigned int and unsigned long have different conversion ranks. This may make a difference in the generated code and may have driven the compiler writers to choose one over the other.
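
One place the rank difference shows through even at equal sizes is the
usual arithmetic conversions. A sketch (the printed result assumes a
platform where int and long are both 32 bits):

#include <stdio.h>

int main(void) {
    unsigned int u = 0;
    long n = -1;
    /* long outranks unsigned int, but with sizeof(int) == sizeof(long)
       it cannot represent every unsigned int value, so both operands
       convert to unsigned long: n becomes ULONG_MAX and the comparison
       is unsigned. */
    printf("%d\n", n < u);  /* prints 0, not 1 */
    return 0;
}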

Have you tried to contact the tech support people with your query?
 
J

justinx

Barry Schwarz said:
That the contributors to this group may not be able to infer why the designers chose what they did does not imply the absence of a rationale. For example, while they are the same size, unsigned int and unsigned long have different conversion ranks. This may make a difference in the generated code and may have driven the compiler writers to choose one over the other.

Have you tried to contact the tech support people with your query?

Conversion ranks. I had not thought of it from that perspective. If I get some spare time at work I will do some tests using an algorithm in my code. It currently exclusively uses uint_fast32_t's. I will simply compare the generated code using uint32_t, uint_least32_t and uint_fast32_t.

I have emailed my question to Mentor Graphics, who acquired CodeSourcery. I probably won't get a response since I am using the Lite edition.
 
justinx

Jorgen Grahn said:
One real possibility is that the author said "let's make these names
as incompatible as possible, to help the programmers write portable
code".

/Jorgen
This is an interesting idea. A quick test shows that assigning a variable
of type uint32_t (unsigned long) to a variable of type uint_fast32_t
(unsigned int) generates no warning. I presume this is because in this
case int and long are the same size.

volatile uint_fast32_t a;  /* unsigned int on this platform */
volatile uint32_t b;       /* unsigned long on this platform */

a = UINT32_C(65536);
b = UINT32_C(65536);
b = b + a;  /* a converts to unsigned long by the usual arithmetic conversions */
a = b;      /* no diagnostic, even with the warnings below enabled */

The warnings enabled were as follows:
-Wall -Wextra -pedantic
-Wdouble-promotion -Wformat=2 -Winit-self -Wmissing-include-dirs -Wswitch-default
-Wswitch-enum -Wsync-nand -Wunused-parameter -Wunused-result -Wunused
-Wuninitialized -Wstrict-overflow=5 -Wmissing-format-attribute -Wunknown-pragmas
-Wfloat-equal -Wundef -Wshadow -Wlarger-than=6144 -Wframe-larger-than=40
-Wunsafe-loop-optimizations -Wbad-function-cast -Wcast-qual -Wcast-align
-Wwrite-strings -Wconversion -Wjump-misses-init -Wlogical-op -Waggregate-return
-Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes -Wmissing-declarations
-Wpacked -Wredundant-decls -Winline -Winvalid-pch -Wvariadic-macros -Wvla
-Wdisabled-optimization -Wstack-protector
 
Ben Bacarisse

justinx said:
This is an interesting idea. A quick test shows that assigning a
variable of type uint32_t (unsigned long) to a variable of type
uint_fast32_t (unsigned int) generates no warning. I presume this is
because in this case int and long are the same size.

A better test for compatibility is to assign pointers. It's still not
foolproof (the assigned-to pointer can have more qualifiers than the
pointer being assigned), but it does not rely on warnings: it's a
constraint violation, so a diagnostic is required.

<snip>
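
A minimal sketch of that pointer test (meant to be compiled, not run):

#include <stdint.h>

uint32_t *p32;
uint_fast32_t *pfast;

void check(void) {
    /* If uint32_t is unsigned long and uint_fast32_t is unsigned int,
       these point to incompatible types even though the types have the
       same size, so a conforming compiler must diagnose this. */
    p32 = pfast;
}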
 
Ronald Landheer-Cieslak

Barry Schwarz said:
That the contributors to this group may not be able to infer why the
designers chose what they did does not imply the absence of a rationale.
For example, while they are the same size, unsigned int and unsigned long
have different conversion ranks. This may make a difference in the
generated code and may have driven the compiler writers to choose one over the other.

Excuse my ignorance, but what's a "conversion rank"?

Thx

rlc
 
Eric Sosman

Barry Schwarz said:
[...]
That the contributors to this group may not be able to infer why the
designers chose what they did does not imply the absence of a rationale.
For example, while they are the same size, unsigned int and unsigned long
have different conversion ranks. This may make a difference in the
generated code and may have driven the compiler writers to choose one over the other.

Excuse my ignorance, but what's a "conversion rank"?

Shorthand for "integer conversion rank," of course. :)

C sometimes needs to convert values from one type to another
before working with them. For example, you cannot compare an
`unsigned short' and a `signed int' as they stand; you must first
convert them to a common type and then compare the converted values.
But what type should be chosen? Plausible arguments could be made
for any of `unsigned short' or `signed int' or `unsigned int' or
even `unsigned long', depending on the relative "sizes" of these
types on the machine at hand.

"Integer conversion rank" formalizes this notion of "size."
In the old days there were only a few integer types and it was
easy to enumerate the possible combinations. Things got more
complicated when C99 not only introduced new integer types, but
made the set open-ended: An implementation might support types
like `int24_t' or `uint_least36_t', and we need to know where
these fit with respect to each other and to generic types like
`long'. For example, when you divide a `uint_least36_t' by a
`long', what conversions occur? Inquiring masochists want to know.

To this end, each integer type has an "integer conversion rank"
that establishes a pecking order. Roughly speaking, "narrow" types
have low ranks and "wide" types have higher ranks. It's all in
section 6.3.1.1 of the Standard, which takes quite a bit of prose
to express this "narrow versus wide" idea precisely -- but that's
really all it's doing: narrow versus wide, and how to handle ties.

Eventually, when C needs to perform "integer promotions" or
"usual arithmetic conversions," its choice of target type for
integers is driven by the integer conversion rank(s) of the original
type(s) involved.
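
A sketch of the unsigned short versus signed int example above, showing
the rank machinery at work (output assumes a typical platform where int
is wider than short):

#include <stdio.h>

int main(void) {
    unsigned short us = 1;
    signed int si = -1;
    unsigned int ui = 1;

    /* us promotes to int (int can hold every unsigned short value
       here), so this comparison is signed: */
    printf("%d\n", si < us);  /* prints 1: -1 < 1 */

    /* int and unsigned int have equal rank, so the signed operand
       converts to unsigned int, becoming UINT_MAX: */
    printf("%d\n", si > ui);  /* prints 1: UINT_MAX > 1 */
    return 0;
}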
 
Ronald Landheer-Cieslak

Eric Sosman said:
Barry Schwarz said:
[...]
That the contributors to this group may not be able to infer why the
designers chose what they did does not imply the absence of a rationale.
For example, while they are the same size, unsigned int and unsigned long
have different conversion ranks. This may make a difference in the
generated code and may have driven the compiler writers to choose one over the other.

Excuse my ignorance, but what's a "conversion rank"?

Shorthand for "integer conversion rank," of course. :)

C sometimes needs to convert values from one type to another
before working with them. For example, you cannot compare an
`unsigned short' and a `signed int' as they stand; you must first
convert them to a common type and then compare the converted values.
But what type should be chosen? Plausible arguments could be made
for any of `unsigned short' or `signed int' or `unsigned int' or
even `unsigned long', depending on the relative "sizes" of these
types on the machine at hand.

"Integer conversion rank" formalizes this notion of "size."
In the old days there were only a few integer types and it was
easy to enumerate the possible combinations. Things got more
complicated when C99 not only introduced new integer types, but
made the set open-ended: An implementation might support types
like `int24_t' or `uint_least36_t', and we need to know where
these fit with respect to each other and to generic types like
`long'. For example, when you divide a `uint_least36_t' by a
`long', what conversions occur? Inquiring masochists want to know.

To this end, each integer type has an "integer conversion rank"
that establishes a pecking order. Roughly speaking, "narrow" types
have low ranks and "wide" types have higher ranks. It's all in
section 6.3.1.1 of the Standard, which takes quite a bit of prose
to express this "narrow versus wide" idea precisely -- but that's
really all it's doing: narrow versus wide, and how to handle ties.

Eventually, when C needs to perform "integer promotions" or
"usual arithmetic conversions," its choice of target type for
integers is driven by the integer conversion rank(s) of the original
type(s) involved.

OK, so it basically formalizes the conversions that the integer types go
through to end up with either something useful or something
implementation-defined, or both.

Reading the draft Barry pointed to, it doesn't seem to actually change any
of the rules as they were before - just formalize them, is that right? (and
comparing a negative signed short to an unsigned long still yields
implementation-defined results).

Thanks,

rlc
 
James Kuyper

Ronald Landheer-Cieslak said:
OK, so it basically formalizes the conversions that the integer types go
through to end up with either something useful or something
implementation-defined, or both.

Reading the draft Barry pointed to, it doesn't seem to actually change any
of the rules as they were before - just formalize them, is that right?

I believe that integer conversion rank was put into the very first
version of the C standard, nearly a quarter century ago. Before that
time, different compilers implemented different rules. So while it would
be accurate to say that the rules were formalized, it would be
inaccurate to say that there was no change: some compilers implemented
rules that differed from the formalized version of the rules. I'm not
sure whether any of them implemented exactly the rules that were
formalized, though I think it's likely that some did.

(and
comparing a negative signed short to an unsigned long still yields
implementation-defined results).

In such a comparison, the negative signed short value is first promoted
to an 'int', without change in value. Then that value is converted to
unsigned long. That conversion is well-defined: it is performed by
adding ULONG_MAX+1 to the negative value, with a result that is
necessarily representable as unsigned long. That result is then compared
with the other unsigned long value.

Since the value of ULONG_MAX is implementation-defined, the result could
be described as implementation-defined, but once the value for ULONG_MAX
has been defined by the implementation, the standard gives the
implementation no additional freedom when performing that comparison.
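
A sketch of that sequence (the values printed are the same on any
conforming implementation once ULONG_MAX is fixed):

#include <stdio.h>

int main(void) {
    short s = -1;
    unsigned long ul = 1;

    /* s promotes to int, then converts to unsigned long by (in effect)
       adding ULONG_MAX + 1, yielding ULONG_MAX: */
    printf("%lu\n", (unsigned long)s);  /* prints ULONG_MAX */
    printf("%d\n", s > ul);             /* prints 1: ULONG_MAX > 1 */
    return 0;
}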

Does that correspond with what you meant?
 
Eric Sosman

Ronald Landheer-Cieslak said:
Eric Sosman said:
[...]
That the contributors to this group may not be able to infer why the
designers chose what they did does not imply the absence of a rationale.
For example, while they are the same size, unsigned int and unsigned long
have different conversion ranks. This may make a difference in the
generated code and may have driven the compiler writers to choose one over the other.

Excuse my ignorance, but what's a "conversion rank"?

Shorthand for "integer conversion rank," of course. :)

C sometimes needs to convert values from one type to another
before working with them. For example, you cannot compare an
`unsigned short' and a `signed int' as they stand; you must first
convert them to a common type and then compare the converted values.
But what type should be chosen? Plausible arguments could be made
for any of `unsigned short' or `signed int' or `unsigned int' or
even `unsigned long', depending on the relative "sizes" of these
types on the machine at hand.

"Integer conversion rank" formalizes this notion of "size."
In the old days there were only a few integer types and it was
easy to enumerate the possible combinations. Things got more
complicated when C99 not only introduced new integer types, but
made the set open-ended: An implementation might support types
like `int24_t' or `uint_least36_t', and we need to know where
these fit with respect to each other and to generic types like
`long'. For example, when you divide a `uint_least36_t' by a
`long', what conversions occur? Inquiring masochists want to know.

To this end, each integer type has an "integer conversion rank"
that establishes a pecking order. Roughly speaking, "narrow" types
have low ranks and "wide" types have higher ranks. It's all in
section 6.3.1.1 of the Standard, which takes quite a bit of prose
to express this "narrow versus wide" idea precisely -- but that's
really all it's doing: narrow versus wide, and how to handle ties.

Eventually, when C needs to perform "integer promotions" or
"usual arithmetic conversions," its choice of target type for
integers is driven by the integer conversion rank(s) of the original
type(s) involved.

OK, so it basically formalizes the conversions that the integer types go
through to end up with either something useful or something
implementation-defined, or both.

Reading the draft Barry pointed to, it doesn't seem to actually change any
of the rules as they were before - just formalize them, is that right?

Pretty much, yes: It's formalized to make it work with
implementation-defined integer types the Standard doesn't know
about, or doesn't require, or doesn't fully specify.
(and
comparing a negative signed short to an unsigned long still yields
implementation-defined results).

Within limits, yes. Let's work through it:

- First, we consult 6.5.8 for the relational operators, and
learn that the "usual arithmetic conversions" apply to
both operands.

- Over to 6.3.1.8 for the UAC's, where we learn that the
"integer promotions" are performed on each operand,
independently.

- 6.3.1.1 describes the IP's. We find that `unsigned long'
is unaffected. It takes a little more research, but we
eventually find that `signed short' converts to `int'.

- 6.3.1.3 tells us that this conversion preserves the
original value, so we now have the `int' whose value
is the same as that of the original `signed short'.

- Back to 6.3.1.8 again to continue with the UAC's, now with
an `unsigned long' and an `int' and working through the
second level of "otherwise." There we find that we've got
one signed and one unsigned operand, *and* the unsigned
operand has the higher rank (consult 6.3.1.1 again). This
tells us we must convert the `int' again, this time to
`unsigned long'.

- Over to 6.3.1.3 again for the details of the conversion,
and if the `int' is negative we must use "the maximum
value that can be represented in the new type" to finish
converting. ULONG_MAX is an implementation-defined value,
so this is where implementation-definedness creeps in.

- ... and we're back to 6.5.8, with two `unsigned long' values,
which we know how to compare.
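
That chain can also be traced mechanically. A sketch using C11's
_Generic (a feature newer than the compilers discussed in this thread)
prints the type each step produces:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),        \
    int: "int",                            \
    unsigned int: "unsigned int",          \
    long: "long",                          \
    unsigned long: "unsigned long",        \
    default: "other")

int main(void) {
    signed short ss = -1;
    unsigned long ul = 1;
    /* Integer promotions: ss in an expression becomes int. */
    printf("%s\n", TYPE_NAME(ss + 0));   /* int */
    /* Usual arithmetic conversions: int versus unsigned long
       yields unsigned long. */
    printf("%s\n", TYPE_NAME(ss + ul));  /* unsigned long */
    return 0;
}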

Seems like quite a lot of running around for a "simple" matter,
but consider: Before the first ANSI Standard nailed things down,
different C implementations disagreed on how some comparisons should
be done! Both the "unsigned preserving" and "value preserving" camps
(see the Rationale) would have agreed on the particular example we've
just worked through, but would have produced different results for
some other comparisons. The Standard's complicated formalisms --
including "integer conversion rank" -- are part of an attempt to
eliminate or at least minimize such disagreements.
 
Ronald Landheer-Cieslak

James Kuyper said:
I believe that integer conversion rank was put into the very first
version of the C standard, nearly a quarter century ago. Before that
time, different compilers implemented different rules. So while it would
be accurate to say that the rules were formalized, it would be
inaccurate to say that there was no change: some compilers implemented
rules that differed from the formalized version of the rules. I'm not
sure whether any of them implemented exactly the rules that were
formalized, though I think it's likely that some did.

(and comparing a negative signed short to an unsigned long still yields
implementation-defined results).

In such a comparison, the negative signed short value is first promoted
to an 'int', without change in value. Then that value is converted to
unsigned long. That conversion is well-defined: it is performed by
adding ULONG_MAX+1 to the negative value, with a result that is
necessarily representable as unsigned long. That result is then compared
with the other unsigned long value.

Since the value of ULONG_MAX is implementation-defined, the result could
be described as implementation-defined, but once the value for ULONG_MAX
has been defined by the implementation, the standard gives the
implementation no additional freedom when performing that comparison.

Does that correspond with what you meant?
Yes. That and the fact that, due to the addition in 6.3.1.3p2, the result
differs between systems depending on how signed integers are implemented
(i.e. it works as expected only for two's complement signed integers).

Thanks,

rlc
 
Ronald Landheer-Cieslak

Eric Sosman said:
On 7/30/2012 9:53 AM, Ronald Landheer-Cieslak wrote:
[...]
That the contributors to this group may not be able to infer why the
designers chose what they did does not imply the absence of a rationale.
For example, while they are the same size, unsigned int and unsigned long
have different conversion ranks. This may make a difference in the
generated code and may have driven the compiler writers to choose one over the other.

Excuse my ignorance, but what's a "conversion rank"?

Shorthand for "integer conversion rank," of course. :)

C sometimes needs to convert values from one type to another
before working with them. For example, you cannot compare an
`unsigned short' and a `signed int' as they stand; you must first
convert them to a common type and then compare the converted values.
But what type should be chosen? Plausible arguments could be made
for any of `unsigned short' or `signed int' or `unsigned int' or
even `unsigned long', depending on the relative "sizes" of these
types on the machine at hand.

"Integer conversion rank" formalizes this notion of "size."
In the old days there were only a few integer types and it was
easy to enumerate the possible combinations. Things got more
complicated when C99 not only introduced new integer types, but
made the set open-ended: An implementation might support types
like `int24_t' or `uint_least36_t', and we need to know where
these fit with respect to each other and to generic types like
`long'. For example, when you divide a `uint_least36_t' by a
`long', what conversions occur? Inquiring masochists want to know.

To this end, each integer type has an "integer conversion rank"
that establishes a pecking order. Roughly speaking, "narrow" types
have low ranks and "wide" types have higher ranks. It's all in
section 6.3.1.1 of the Standard, which takes quite a bit of prose
to express this "narrow versus wide" idea precisely -- but that's
really all it's doing: narrow versus wide, and how to handle ties.

Eventually, when C needs to perform "integer promotions" or
"usual arithmetic conversions," its choice of target type for
integers is driven by the integer conversion rank(s) of the original
type(s) involved.

OK, so it basically formalizes the conversions that the integer types go
through to end up with either something useful or something
implementation-defined, or both.

Reading the draft Barry pointed to, it doesn't seem to actually change any
of the rules as they were before - just formalize them, is that right?

Pretty much, yes: It's formalized to make it work with
implementation-defined integer types the Standard doesn't know
about, or doesn't require, or doesn't fully specify.
(and
comparing a negative signed short to an unsigned long still yields
implementation-defined results).

Within limits, yes. Let's work through it:

- First, we consult 6.5.8 for the relational operators, and
learn that the "usual arithmetic conversions" apply to
both operands.

- Over to 6.3.1.8 for the UAC's, where we learn that the
"integer promotions" are performed on each operand,
independently.

- 6.3.1.1 describes the IP's. We find that `unsigned long'
is unaffected. It takes a little more research, but we
eventually find that `signed short' converts to `int'.

- 6.3.1.3 tells us that this conversion preserves the
original value, so we now have the `int' whose value
is the same as that of the original `signed short'.

- Back to 6.3.1.8 again to continue with the UAC's, now with
an `unsigned long' and an `int' and working through the
second level of "otherwise." There we find that we've got
one signed and one unsigned operand, *and* the unsigned
operand has the higher rank (consult 6.3.1.1 again). This
tells us we must convert the `int' again, this time to
`unsigned long'.

- Over to 6.3.1.3 again for the details of the conversion,
and if the `int' is negative we must use "the maximum
value that can be represented in the new type" to finish
converting. ULONG_MAX is an implementation-defined value,
so this is where implementation-definedness creeps in.

- ... and we're back to 6.5.8, with two `unsigned long' values,
which we know how to compare.
A very thorough walk-through of the conversions indeed, thanks.
Seems like quite a lot of running around for a "simple" matter,
but consider: Before the first ANSI Standard nailed things down,
different C implementations disagreed on how some comparisons should
be done! Both the "unsigned preserving" and "value preserving" camps
(see the Rationale) would have agreed on the particular example we've
just worked through, but would have produced different results for
some other comparisons. The Standard's complicated formalisms --
including "integer conversion rank" -- are part of an attempt to
eliminate or at least minimize such disagreements.
I didn't want to imply that I had any problem with the added complexity
(and don't think I had): I understand very well that there's a real need to
specify in detail how these sorts of conversions need to work.

However, I think it only works as expected if the signed integer type is a
two's complement type. 6.2.6.2p2 allows for three representations, two of
which won't work as expected when ULONG_MAX + 1 is "repeatedly added" as in
6.3.1.3p2.

I've never worked with hardware that had anything other than two's
complement integers, but that is what I meant with the
"implementation-defined" bit.

Thanks,

rlc
 
Eric Sosman

James Kuyper said:
[...]
Since the value of ULONG_MAX is implementation-defined, the result could
be described as implementation-defined, but once the value for ULONG_MAX
has been defined by the implementation, the standard gives the
implementation no additional freedom when performing that comparison.

Does that correspond with what you meant?
Ronald Landheer-Cieslak said:
Yes. That and the fact that, due to the addition in 6.3.1.3p2, the result
differs between systems depending on how signed integers are implemented
(i.e. it works as expected only for two's complement signed integers).

The conversion rules are independent of representation, and
deal only with values. If you expect something different from
different ways of representing negative integers, you expect
incorrectly.
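
A sketch of that point: the conversion below is specified purely in
terms of values, so it prints 1 on every conforming implementation,
whatever the representation of negative integers:

#include <stdio.h>
#include <limits.h>

int main(void) {
    long x = -1;
    /* Defined by value arithmetic: -1 + (ULONG_MAX + 1) == ULONG_MAX,
       regardless of whether long is two's complement, ones'
       complement, or sign-magnitude. */
    unsigned long u = (unsigned long)x;
    printf("%d\n", u == ULONG_MAX);  /* prints 1 */
    return 0;
}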
 
