Peter Nilsson
Ark Khasin said:
Peter said:
Quoting pete:
#if !(1 & -1)
printf("ones complement\n");
#elif -1 & 2
printf("twos complement\n");
#else
printf("sign magnitude\n");
#endif
Pete asked if Gray code or other weird representations
could be used for negative integers, but it seems that
is not so, despite the loose wording of C90.
[cf <http://groups.google.com/group/comp.std.c/msg/5f332b9aa22b92ec>]
Is there any assurance that the representation of integer
constants in the preprocessor
- is in any way related to the representation of integer objects
- falls into one of the three models of representation of
integer objects
?
Notionally, yes.
C89 draft 3.8.1
The resulting tokens comprise the controlling constant
expression which is evaluated according to the rules of
$3.4 using arithmetic that has at least the ranges
specified in $2.2.4.2, except that int and unsigned int
act as if they have the same representation as,
respectively, long and unsigned long.
There's similar wording for C99 though intmax_t and uintmax_t
are used.
Unfortunately, many implementations are somewhat inconsistent.
Consider...
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("ULONG_MAX = %lu\n", ULONG_MAX);
#if -1 == -1ul
    puts("-1 == -1ul [pre]");
#endif
    if (-1 == -1ul)
        puts("-1 == -1ul");
#if 4294967295 == -1ul
    puts("4294967295 == -1ul [pre]");
#endif
    if (4294967295 == -1ul)
        puts("4294967295 == -1ul");
    return 0;
}
The output for me using delorie gcc 4.2.1 with -ansi
-pedantic is...
ULONG_MAX = 4294967295
-1 == -1ul [pre]
-1 == -1ul
4294967295 == -1ul
As you can see, there is a discrepancy between the way the
preprocessor and the compiler proper evaluate the same
expressions: the second preprocessor test fails even though
the corresponding runtime test succeeds, apparently because
gcc's preprocessor does its arithmetic in 64 bits while
unsigned long is only 32 bits here. And gcc is not the only
compiler to show such problems.
[Of course the above can be repaired so as to not use the
preprocessor. Just asking...]
Indeed, using the expressions in a normal 'if' would be
the go. There's no reason why, say, int couldn't use two's
complement while long uses ones' complement.