ranges of unsigned int/long


Nicholas Zhou

Hi,

I was writing a testing program to test the ranges of char, short, int
and long variables on my computer, both signed and unsigned.

Everything was fine except for unsigned int and unsigned long. I got
0 to -1 for both. The expected answers should be:

unsigned int: 0 to 65535
unsigned long: 0 to 4294967295

What might be wrong here? Please help.

Here is the program I wrote:

-------------------

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("Unsigned char: %d\t%d\n", 0, UCHAR_MAX);
    printf("Signed char: %d\t%d\n", SCHAR_MIN, SCHAR_MAX);
    printf("Unsigned short: %d\t%d\n", 0, USHRT_MAX);
    printf("(Signed) short: %d\t%d\n", SHRT_MIN, SHRT_MAX);
    printf("Unsigned int: %d\t%d\n", 0, UINT_MAX);
    printf("(Signed) int: %d\t%d\n", INT_MIN, INT_MAX);
    printf("Unsigned long: %d\t%d\n", 0, ULONG_MAX);
    printf("(Signed) long: %d\t%d\n", LONG_MIN, LONG_MAX);
    getchar();
    return 0;
}

-----------------
 

Clark S. Cox III

Nicholas said:
Hi,

I was writing a testing program to test the ranges of char, short, int
and long variables on my computer, both signed and unsigned.

Everything was fine except for unsigned int and unsigned long. I got
0 to -1 for both. The expected answers should be:

unsigned int: 0 to 65535
unsigned long: 0 to 4294967295

What might be wrong here? Please help.

You used the wrong conversion specifiers: for unsigned int, use %u; for
unsigned long, use %lu; for long, use %ld; and so on.
 

user923005

#include <limits.h>
#include <stdio.h>
int main(void)
{
    printf("Signed char        : %20d\t%20d\n", SCHAR_MIN, SCHAR_MAX);
    printf("Signed short       : %20d\t%20d\n", SHRT_MIN, SHRT_MAX);
    printf("Signed int         : %20d\t%20d\n", INT_MIN, INT_MAX);
    printf("Signed long        : %20ld\t%20ld\n", LONG_MIN, LONG_MAX);
    printf("Signed long long   : %20lld\t%20lld\n", LLONG_MIN, LLONG_MAX);
    printf("Unsigned char      : %20d\t%20u\n", 0, (unsigned)UCHAR_MAX);
    printf("Unsigned short     : %20d\t%20u\n", 0, (unsigned)USHRT_MAX);
    printf("Unsigned int       : %20d\t%20u\n", 0, (unsigned)UINT_MAX);
    printf("Unsigned long      : %20d\t%20lu\n", 0, (unsigned long)ULONG_MAX);
    printf("Unsigned long long : %20d\t%20llu\n", 0, (unsigned long long)ULLONG_MAX);
    return 0;
}
/* One possible output:
C:\tmp>foo
Signed char        :                 -128                  127
Signed short       :               -32768                32767
Signed int         :          -2147483648           2147483647
Signed long        :          -2147483648           2147483647
Signed long long   : -9223372036854775808  9223372036854775807
Unsigned char      :                    0                  255
Unsigned short     :                    0                65535
Unsigned int       :                    0           4294967295
Unsigned long      :                    0           4294967295
Unsigned long long :                    0 18446744073709551615
*/
 

David T. Ashley

Nicholas Zhou said:
Hi,

I was writing a testing program to test the ranges of char, short, int and
long variables on my computer, both signed and unsigned.

Everything was fine except for unsigned int and unsigned long. I got
0 to -1 for both. The expected answers should be:

unsigned int: 0 to 65535
unsigned long: 0 to 4294967295

What might be wrong here? Please help.

Assuming a standard 2's complement machine (just about all of them nowadays,
I think), the reason you got -1 is that the bit pattern of all 1's in an
integer data type corresponds to:

a) The largest possible positive value, if the data is interpreted as an
unsigned type.

b) -1, if the value is interpreted as a signed type.

As other posters pointed out, the problem was the format specifier. The
format specifier caused interpretation as a signed type.

An integer in memory is just a collection of 0's and 1's. It can be either
unsigned or signed. It is all in how you interpret it.

2's complement has historically been used in computers because the same
addition and subtraction instructions give correct results for both unsigned
and signed interpretations. However, a little extra digital logic and a
couple of extra flags are required in the processor for correct branches and
so on.

The smallest practical example is 3 bits. Here are the values when
interpreted as unsigned and signed.

Bit Pattern   Unsigned   Signed

    000           0         0
    001           1         1
    010           2         2
    011           3         3
    100           4        -4
    101           5        -3
    110           6        -2
    111           7        -1

Notice that if N is the number of bits, the all-ones bit pattern corresponds
to 2^N - 1 when interpreted as unsigned, or -1 when interpreted as signed.

This is all explained (I hope) here:

http://en.wikipedia.org/wiki/Twos_complement

One other thing you might notice is that the representations correspond to
the same values up until the sign bit is set. It is a not-uncommon problem
in 'C' to introduce a bug that only becomes apparent at large data values
because one somehow casts an unsigned to a signed. Everything works great.
Until 2^(N-1). Then all hell breaks loose.
 
