!!, what is it?


Michael Mair

Tim said:
On Mon, 12 Sep 2005 09:36:01 +0200,

As an aside:
I recall seeing some platform-specific headers which went for the
all-bits-one representation of "true" -- but the C implementations
gave 1 for !!TRUE as well...
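(Illustrative sketch, not taken from any particular header: with a
hypothetical all-bits-one TRUE, !! still normalises the value to 1.)

#include <stdio.h>

#define TRUE (-1)   /* hypothetical all-bits-one "true" */

int main(void)
{
    printf("TRUE   = %d\n", TRUE);     /* -1 */
    printf("!!TRUE = %d\n", !!TRUE);   /* 1: ! always yields 0 or 1 */
    return 0;
}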

int is_it_seven(int x)
{
    return x == 7;
}

if (is_it_seven(7) == TRUE)
    printf("7 is seven\n");
else
    printf("7 is not seven\n");

While I would never write the explicit test for TRUE [1], I would be
horrified at any header that defined TRUE such that this code didn't
behave as expected.

[1] Other people's coding standards excepted.

Sorry, without C99's _Bool and <stdbool.h>'s true, this argument is bogus.
Apart from the possible range of return values one could expect from
is_it_seven() and the possible mismatch with, say, isalnum() == TRUE
or strcmp() != TRUE, there is no benefit in that.
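(For illustration only: isalnum() merely promises some nonzero value and
strcmp() returns a sign, so comparing either against a TRUE macro tests
nothing useful, however TRUE happens to be defined.)

#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define TRUE 1

int main(void)
{
    /* isalnum() may return any nonzero value, not necessarily 1 */
    if (isalnum('a') == TRUE)
        printf("may or may not print\n");

    /* strcmp() returns <0, 0 or >0; comparing it against TRUE is meaningless */
    if (strcmp("abc", "abd") != TRUE)
        printf("prints, but says nothing about equality\n");
    return 0;
}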

I KNOW there is no benefit in that. That's why I wrote
"While I would never write the explicit test for TRUE"

But in review I would not pass any C source that defined FALSE as
anything other than 0 and TRUE as anything other than 1 (or some
equivalent expression).

In fact, assuming I spotted it, I wouldn't accept any code that had a
function commented /* returns TRUE or FALSE */ unless the function
returned only 1 or 0, regardless of whether the macros TRUE and FALSE
were actually defined.
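(A sketch of the kind of normalisation that satisfies such a comment;
has_flag and FLAG_MASK are made up for the example.)

#define FLAG_MASK 0x04u   /* made-up flag bit for the example */

/* returns TRUE or FALSE */
int has_flag(unsigned flags)
{
    return !!(flags & FLAG_MASK);   /* !! collapses any nonzero result to 1 */
}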

We certainly agree on that and I did not mean to imply that you did
not know that there is no benefit.

However, this does not change the fact that "true" in C prior to
C99 is anything that is not zero. To underline this, I mentioned
this particular definition of true.

I still wouldn't have accepted it. There would have been a better name
for the macro.

Definitely.
Knowing a better way does not always help. Often enough I come
across the mess left by "somebody whose successor left the company
some years ago", or similar, without the opportunity to fix it (ROI).
So sometimes you have to live with the way things have to be done,
wrap the ugliness as best you can, and do it better from then on.

No. (IMO) 6.7.2.1

9 A bit-field is interpreted as a signed or unsigned integer type ...

and then footnote 104), paraphrased: if the type specifier is int, it is
implementation-defined whether the bit-field is signed or unsigned

Which implies to me that signed int -> signed, unsigned int -> unsigned
and int goes to one or the other.
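(A quick illustration of that reading; only the last field's result is
implementation-defined, and on a typical implementation the final line
prints either -1 or 1 for it.)

#include <stdio.h>

struct bf {
    signed int   s1 : 1;   /* definitely signed: holds -1 or 0     */
    unsigned int u1 : 1;   /* definitely unsigned: holds 0 or 1    */
    int          p1 : 1;   /* signedness is implementation-defined */
};

int main(void)
{
    struct bf x;
    x.s1 = -1;
    x.u1 = 1;
    x.p1 = -1;   /* stays -1 if plain int bit-fields are signed here,
                    becomes 1 if they are unsigned */
    printf("s1 = %d  u1 = %d  p1 = %d\n", x.s1, x.u1, x.p1);
    return 0;
}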

Thank you for your opinion on that. This is my reading too.
Unfortunately, I have been wrong often enough concerning the
standard.


Cheers
Michael
 

Tim Rentsch

Tim Woodall said:
On Tue, 13 Sep 2005 09:12:42 +0200,

No. (IMO) 6.7.2.1

9 A bit-field is interpreted as a signed or unsigned integer type ...

and then footnote 104), paraphrased: if the type specifier is int, it is
implementation-defined whether the bit-field is signed or unsigned

Which implies to me that signed int -> signed, unsigned int -> unsigned
and int goes to one or the other.

It is possible for a signed int bitfield to behave as though
it holds an unsigned int bitfield value.

On a one-bit bitfield, for example, an implementation could
define 0 as "zero" and 1 as "trap representation". Storing
any non-zero value into the bitfield would result in a trap
representation, which can do anything because of undefined
behavior. In particular, storing a 1 could later result in
a 1 being produced.

A multi-bit bitfield could have all "negative values" be
trap representations. Sort of weird, but it's allowed. At
least, I haven't found any language in the Standard that
disallows it. The particular case of a non-zero being
stored in a 1-bit bitfield is explicitly allowed (in section
6.2.6.2 p2); it does at least have to be explicitly
identified by the implementation.
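(If you want to see what your own implementation actually does with the
non-representable value 1 in such a field, a small probe, nothing more,
might look like this.)

#include <stdio.h>

int main(void)
{
    struct { signed int f : 1; } b;   /* one sign bit, no value bits: range -1..0 */

    b.f = 1;   /* 1 is not representable; what gets stored is up to the
                  implementation (and, as discussed above, need not even
                  be a valid value) */
    printf("read back: %d\n", b.f);   /* commonly -1, sometimes 1 */
    return 0;
}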

Because of the wonders of undefined behavior, it's also
possible to mix and match. For example:

struct {
    signed int a:1;
    signed int b:1;
} bits;

bits.a = 1;
bits.b = -1;
printf("a: %d b: %d\n", bits.a, bits.b);

could very well print "a: 1 b: -1" as its output. Something
like this might happen if a compiler did some dataflow
analysis and determined that the 'a' field is used to hold
1's and 0's and the 'b' field is used to hold 0's and -1's.
Once some evaluation results in a trap representation being
stored, any behavior is possible -- it doesn't have to be
consistent from variable to variable.
 

Emlyn Corrin

Michael Mair said:
IIRC, this "TRUE" was intended for bitwise operations and conveniently
fulfilled !TRUE == FALSE.

As would any other non-zero value,
and it won't fulfill !FALSE == TRUE.
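For instance, with a hypothetical all-bits-one definition:

#include <stdio.h>

#define FALSE 0
#define TRUE  (-1)   /* hypothetical all-bits-one "true" */

int main(void)
{
    printf("!TRUE  == FALSE: %d\n", !TRUE  == FALSE);   /* 1: holds for any nonzero TRUE */
    printf("!FALSE == TRUE : %d\n", !FALSE == TRUE);    /* 0: !0 is 1, which is not -1   */
    return 0;
}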

Emlyn
 

Charlie Gordon

Tim Rentsch said:
It is possible for a signed int bitfield to behave as though
it holds an unsigned int bitfield value.

On a one-bit bitfield, for example, an implementation could
define 0 as "zero" and 1 as "trap representation". Storing
any non-zero value into the bitfield would result in a trap
representation, which can do anything because of undefined
behavior. In particular, storing a 1 could later result in
a 1 being produced.

This is quite unlikely, possibly even purely theoretical at this point.
Could you give an example of a compiler/CPU combination where this occurs?

A multi-bit bitfield could have all "negative values" be
trap representations. Sort of weird, but it's allowed. At
least, I haven't found any language in the Standard that
disallows it. The particular case of a non-zero being
stored in a 1-bit bitfield is explicitly allowed (in section
6.2.6.2 p2); it does at least have to be explicitly
identified by the implementation.

Which version of the standard are you referring to? Mine doesn't mention
bit-fields in 6.2.6.2 p2. It doesn't disallow it, but it doesn't go into
detail about the case of a signed value consisting of just its sign bit
(precision = 0). Could you please quote the paragraph?

Because of the wonders of undefined behavior, it's also
possible to mix and match. For example:

struct {
    signed int a:1;
    signed int b:1;
} bits;

bits.a = 1;
bits.b = -1;
printf("a: %d b: %d\n", bits.a, bits.b);

could very well print "a: 1 b: -1" as its output. Something
like this might happen if a compiler did some dataflow
analysis and determined that the 'a' field is used to hold
1's and 0's and the 'b' field is used to hold 0's and -1's.
Once some evaluation results in a trap representation being
stored, any behavior is possible -- it doesn't have to be
consistent from variable to variable.

Really? I would expect such a smart compiler to issue a warning about the
programmer's inconsistency.
Re-reading the Standard, I don't see any indication that supports your claim.
It is implementation-defined whether plain int bit-fields are interpreted as
signed or unsigned, but no language indicates any such restriction regarding
signed int bit-fields. Whether these can represent just 0, or 0 and -1, is
implementation-defined, not undefined behaviour.
 
