Bit-fields and integral promotion


CBFalconer

Keith said:
.... snip ...

Interesting. I just tried it with a number of compilers, and they
all got it right (everything promotes to signed).

Now start playing with the defines in this variant on Jack's test:

#include <stdio.h>

struct nu {
    unsigned uc: 8;
    unsigned us: 16;
};

#define DELTA 32900 /* 5 or 32900 */
#define INIT -1     /* 0 or -1 */

int main(void)
{
    unsigned char uc = INIT;
    unsigned short us = INIT;
    struct nu nu = { INIT, INIT };

    if ((uc - DELTA) < 0) puts("unsigned char ==> signed");
    else puts("unsigned char ==> unsigned");
    if ((nu.uc - DELTA) < 0) puts("unsigned :8 ==> signed");
    else puts("unsigned :8 ==> unsigned");
    if ((us - DELTA) < 0) puts("unsigned short ==> signed");
    else puts("unsigned short ==> unsigned");
    if ((nu.us - DELTA) < 0) puts("unsigned :16 ==> signed");
    else puts("unsigned :16 ==> unsigned");
    return 0;
}
 

CBFalconer

I am wondering why in GCC both cases print "Foo". If I remove the
bit-fields from the structure and run it again, it prints "Bar". Can
anyone help me explain why the unsigned type is not being considered
in the presence of bit-fields?

That's what the argument is about.

Please don't top-post. Your answer belongs after (or intermixed
with) the material to which you reply, with non-germane areas
snipped out.
 

Kevin Bracey

In message <[email protected]>
Keith Thompson said:
Interesting. I just tried it with a number of compilers, and they all
got it right (everything promotes to signed).

Would the two of you care to name the compilers? I'd certainly be interested
to see who's getting it right and wrong.

For the record, Norcroft ARM C gives all signed when in ANSI mode, and all
unsigned when in pcc mode.
 

James Kuyper

CBFalconer said:
Jack Klein wrote:

... snip ...



For types other than bit-fields. However the bit-field language is
ambiguous,

DR 122 confirmed the meaning that most of us except you find obvious.
... and all I am saying is that the opportunity exists to
clean it up to a sensible meaning, and that that sensible meaning
should be unsigned preserving.

If such an opportunity did exist, the most sensible meaning would be the
one that's consistent with the way unsigned char and unsigned short are
handled. Your arguments for unsigned-preserving promotion of bit-fields
apply equally well (or, more accurately, equally poorly) to those types.
Unless you can convince the committee to reverse its decision on those
two types, I don't see any value in doing bit-fields differently. You
may find it surprising and inconvenient, but at least it's a surprising
and inconvenient thing that's done consistently, which will make it
easier for programmers to remember once they've gotten over their
initial surprise.
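
For concreteness, here is a minimal sketch of how those two ordinary narrow
types behave under the value-preserving rule, assuming an implementation
where int is wider than both (e.g. 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned char uc = 0;
    unsigned short us = 0;

    /* Both operands promote to (signed) int, so 0 - 1 really is -1 here. */
    printf("unsigned char : %s\n", (uc - 1 < 0) ? "==> signed" : "==> unsigned");
    printf("unsigned short: %s\n", (us - 1 < 0) ? "==> signed" : "==> unsigned");
    return 0;
}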
 

CBFalconer

Kevin said:
.... snip ...

Example?

In this modification of Jack Klein's example, try changing INIT from
0 to -1 and observe the behaviour.

#include <stdio.h>

struct nu {
    unsigned uc: 8;
    unsigned us: 16;
};

#define DELTA 5 /* 5 or 32900 */
#define INIT 0  /* 0 or -1 */

int main(void)
{
    unsigned char uc = INIT;
    unsigned short us = INIT;
    struct nu nu = { INIT, INIT };

    if ((uc - DELTA) < 0) puts("unsigned char ==> signed");
    else puts("unsigned char ==> unsigned");
    if ((nu.uc - DELTA) < 0) puts("unsigned :8 ==> signed");
    else puts("unsigned :8 ==> unsigned");
    if ((us - DELTA) < 0) puts("unsigned short ==> signed");
    else puts("unsigned short ==> unsigned");
    if ((nu.us - DELTA) < 0) puts("unsigned :16 ==> signed");
    else puts("unsigned :16 ==> unsigned");
    return 0;
}
 

CBFalconer

James said:
DR 122 confirmed the meaning that most of us except you find obvious.


If such an opportunity did exist, the most sensible meaning would be

Since DR 122 is dated 1993, obviously the opportunity existed when
creating C99. No such correction appeared. C99 remains
ambiguous. I see no reason to enshrine flawed thinking in
perpetuity.
 

Kevin Bracey

In message <[email protected]>
CBFalconer said:
In this modification of Jack Klein's example, try changing INIT from
0 to -1 and observe the behaviour.

#include <stdio.h>

struct nu {
    unsigned uc: 8;
    unsigned us: 16;
};

#define DELTA 5 /* 5 or 32900 */
#define INIT 0  /* 0 or -1 */

int main(void)
{
    unsigned char uc = INIT;
    unsigned short us = INIT;
    struct nu nu = { INIT, INIT };

    if ((uc - DELTA) < 0) puts("unsigned char ==> signed");
    else puts("unsigned char ==> unsigned");
    if ((nu.uc - DELTA) < 0) puts("unsigned :8 ==> signed");
    else puts("unsigned :8 ==> unsigned");
    if ((us - DELTA) < 0) puts("unsigned short ==> signed");
    else puts("unsigned short ==> unsigned");
    if ((nu.us - DELTA) < 0) puts("unsigned :16 ==> signed");
    else puts("unsigned :16 ==> unsigned");
    return 0;
}

Sorry, I must be being dense here. If INIT is -1, then the uc's and us's
initialise to 0xFF and 0xFFFF, respectively.

If DELTA is 5, then the subtractions result in (int) 0xFA and (int) 0xFFFA.
No overflow or wrapping, and the results are perfectly good ints that could
be reassigned back to uc or us.

If DELTA is 32900 then uc - DELTA = (int) -0x7F85 and us - DELTA = (int)
0x7F7B. Still no wrapping. If -0x7F85 is reassigned back to uc then the
assignment will reduce modulo 0x100 or 0x10000, as per normal unsigned
overflow rules.
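
A quick check of that arithmetic, assuming 32-bit int (an illustrative
sketch, not part of the original test program):

#include <stdio.h>

int main(void)
{
    unsigned char uc = 0xFF;     /* what INIT == -1 stores in the 8-bit unsigned objects */
    unsigned short us = 0xFFFF;  /* and in the 16-bit unsigned objects                   */

    printf("uc - 5     = %d\n", uc - 5);      /* 250    (0xFA)    */
    printf("us - 5     = %d\n", us - 5);      /* 65530  (0xFFFA)  */
    printf("uc - 32900 = %d\n", uc - 32900);  /* -32645 (-0x7F85) */
    printf("us - 32900 = %d\n", us - 32900);  /* 32635  (0x7F7B)  */

    uc = uc - 32900;  /* conversion back to unsigned char reduces modulo 0x100 */
    printf("uc reassigned = 0x%X\n", (unsigned)uc);  /* 0x7B */
    return 0;
}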

Obviously, once INIT and/or DELTA is changed, the program no longer tests what
its output messages suggest it does. If you had INIT = -1 and DELTA = 65600 then the
program would function correctly again, outputting meaningful text.

What's your point?
 

James Kuyper

CBFalconer wrote:
....
Since dr122 is dated 1993, obviously the opportunity existed in
creating C99. No such correction appeared.

Yes, they did not take that opportunity to change the decision that
they'd made. The original wording, whose meaning was confirmed by DR122,
remains in effect.
... C99 remains
ambiguous.

Yes, in many places; but not in this one.
... I see no reason to enshrine flawed thinking in
perpetuity.

Flawed or not, it was a deliberate decision of the committee, not an
oversight. The wording was less than ideal, but clear enough, and the
meaning that seems obvious to most of us was in fact the one intended,
as confirmed by DR 122. You can't get them to "fix" it until you
convince them it was wrong.

I seriously doubt that you have any arguments that weren't presented
when the decision was first made. I wasn't there, so I have no idea what
arguments were presented, but the ones you've made seem to be pretty
obvious. I doubt they were overlooked; I think they were merely judged
to be inadequate. Without a new argument, I don't see how you're likely
to get a different decision.
 

Kevin Bracey

In message <[email protected]>
CBFalconer said:
Since dr122 is dated 1993, obviously the opportunity existed in
creating C99. No such correction appeared. C99 remains
ambiguous. I see no reason to enshrine flawed thinking in
perpetuity.

Except you're proposing a change to the standard that will render a large
number of conforming C90/C99 compilers that follow the advice of DR122
non-conforming. And it'll potentially break code, too.

Besides, I don't agree that C99 is ambiguous. DR122 cleared up the ambiguity
for C90, and surely all C90 DR responses of the type "we think it's clear
enough - here's what it means" remain applicable to C99, if C99 still has
the same wording.

I still don't really understand why you think bitfields should be treated
differently to other narrow types. I think you'd need a pretty strong
argument to create such an inconsistency.

It's clear that you prefer sign-preserving promotion, but surely consistent
promotions among all types are better than having some sign-preserving and
some value-preserving?
 

Keith Thompson

Kevin Bracey said:
In message <[email protected]>


Would the two of you care to name the compilers? I'd certainly be interested
to see who's getting it right and wrong.

For the record, Norcroft ARM C gives all signed when in ANSI mode, and all
unsigned when in pcc mode.

I tried gcc on Cygwin, Solaris, and Linux (x86 and ia-64), Sun's cc,
IBM's xlc on AIX, and "cc" under OSF on an Alpha.
 

David Hopwood

CBFalconer said:
Jack Klein wrote:

... snip ...


For types other than bit-fields.

There really is no point in continuing to flog this dead horse. The
conclusion of this thread from a language user's point of view is that
compilers get it wrong, and therefore you should put in explicit casts
rather than relying on any particular promotion rule for bit fields.
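
By way of illustration, a small sketch of that defensive style, assuming
32-bit int and reusing struct nu from the test program earlier in the thread:

#include <stdio.h>

struct nu {
    unsigned uc: 8;
    unsigned us: 16;
};

int main(void)
{
    struct nu nu = { 0xFF, 0xFFFF };

    /* Force the arithmetic one way or the other instead of relying on
       whichever promotion rule this particular compiler applies. */
    printf("%d\n", (int)nu.uc - 32900);        /* signed arithmetic: -32645     */
    printf("%u\n", (unsigned)nu.uc - 32900u);  /* unsigned arithmetic: wraps to
                                                  a large positive value        */
    return 0;
}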
 

lawrence.jones

In comp.std.c Jack Klein said:
A signed int 1 bit field can indeed only contain the values 0 and -1.

Unless the system in question uses ones' complement or sign/magnitude,
in which case it can only contain 0 and -0.
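
A small sketch of that point, assuming a two's complement implementation:

#include <stdio.h>

struct s {
    signed int flag : 1;  /* can hold only 0 and -1 on two's complement */
};

int main(void)
{
    struct s x;

    x.flag = 0;
    printf("%d\n", x.flag);  /* 0 */
    x.flag = -1;
    printf("%d\n", x.flag);  /* -1: the only non-zero value the field can hold */
    return 0;
}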

-Larry Jones

I sure like summer vacation. -- Calvin
 

CBFalconer

Kevin said:
In message <[email protected]>


Sorry. Must be being dense here. If INIT is -1, then the uc's and us's
initialise to 0xFF and 0xFFFF, respectively.
.... snip ...

What's your point?

Maybe I'll get around to slaving it to <limits.h> and making things
more obvious. The point is that the treatment of the bit-field
varies with the values, and as a result UB may unexpectedly occur
or, at a minimum, you get unexpected calculation results.
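
One guess at what a <limits.h>-driven version might begin with (an
illustrative sketch, not code from the original post): report up front
whether int can represent every value of the narrow types, since that is
what decides which way they promote.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Value-preserving promotion goes to int only when int can represent
       every value of the original type; otherwise it goes to unsigned int. */
    printf("unsigned char  promotes to %s\n",
           (UCHAR_MAX <= INT_MAX) ? "int" : "unsigned int");
    printf("unsigned short promotes to %s\n",
           (USHRT_MAX <= INT_MAX) ? "int" : "unsigned int");
    return 0;
}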
 

kuyper

Kevin Bracey wrote:
....
Would the two of you care to name the compilers? I'd certainly be interested
to see who's getting it right and wrong.

My results: SGI's MIPSpro C 7.2.1 promotes the bitfields to an unsigned
type - I've asked our sysadmins to file a defect report. gcc version
3.2.3 20030502 promotes them to a signed type.
 

Kevin Bracey

In message <[email protected]>
CBFalconer said:
Maybe I'll get around to slaving it to <limits.h> and making things
more obvious. The point is that the treatment of the bit field
varies with the values, and as a result UB may unexpectedly occur,
or at a minimum unexpected calculation results.

The treatment of the bitfield varies with the values in <limits.h>, you mean?
Well, yes, just as the treatment of the unsigned shorts does. The standard's
choice of promotion rule was (and is) a matter of debate, but the point is
that it's consistent. Both types of unsigned 16-bit number work the same way.
 

Joe Wright

CBFalconer said:
Maybe I'll get around to slaving it to <limits.h> and making things
more obvious. The point is that the treatment of the bit field
varies with the values, and as a result UB may unexpectedly occur,
or at a minimum unexpected calculation results.

Getting the bit-field stuff wrong is probably harder than it looks. That
expressions of bit-fields, chars and shorts (signed or unsigned) are
promoted to (signed) int is just the way of things. Learn it. Love it.

The Anomaly is that if the unsigned type can hold values greater than
INT_MAX (so that int cannot represent all of its values), it is promoted
to unsigned int instead.

The good news is that this almost never happens except on systems where
sizeof char, short, and int is the same (1). I've heard of such things
(DSPs, usually) but have never actually 'seen' one. Presumably all of
these values would promote to unsigned int there.

In "My World(tm)", where char (8) is narrower than short (16), which is
narrower than int (32), I see little chance of the Anomaly. A bit-field
cannot be wider than int by definition. If it's narrower, we have no problem.
If it's the same width as int, I guess we have our Anomaly back due to an
idiot programmer.
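
For illustration, a sketch of that last case on a compiler that follows the
DR 122 reading, assuming 32-bit int and unsigned int (the 32-bit field width
is legal only because it does not exceed the width of unsigned int):

#include <stdio.h>

struct s {
    unsigned narrow : 8;   /* every value fits in int, so it promotes to int */
    unsigned wide   : 32;  /* same width as int here, so it stays unsigned   */
};

int main(void)
{
    struct s x = { 0, 0 };

    printf("narrow: %s\n", (x.narrow - 1 < 0) ? "==> signed" : "==> unsigned");
    printf("wide:   %s\n", (x.wide - 1 < 0) ? "==> signed" : "==> unsigned");
    return 0;
}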
 
