signed char and unsigned char difference

dam_fool_2003

For int data type the default range starts from signed to unsigned. If
we don't want negative value we can force an unsigned value. The same
goes for long also.
But I don't understand why we have signed char which is -256. Does it
mean that we can assign the same ASCII value to both signed and
unsigned? That means the ASCII value can be represented with a type of
signed char and also unsigned char?
For example
int main(void)
{
    signed char a = 'a';
    unsigned char b = 'b';
    printf("%i %c", a, b);
    return 0;
}

The above code does not warn about the assignment. I went through the
FAQ, Section 8, but I can't find the answer. Can anyone give me any
pointer regarding the above subject?
 
Jens.Toerring

For int data type the default range starts from signed to unsigned. If
we don't want negative value we can force an unsigned value. The same
goes for long also.

Sorry, but these sentences don't make sense to me. For signed ints
the range is INT_MIN to INT_MAX, for unsigned ints it's 0 to
UINT_MAX. INT_MIN must be no greater than -32767, INT_MAX at least
+32767, and UINT_MAX at least 65535. For long there are similar
minimum ranges, with "INT" replaced by "LONG" (i.e. LONG_MAX instead
of INT_MAX), the minimum requirements being LONG_MIN <= -(2^31 - 1),
LONG_MAX >= 2^31 - 1 and ULONG_MAX >= 2^32 - 1. Implementations are
allowed to support larger ranges. The actual values can be found in
<limits.h>.
But I don't understand why we have signed char which is -256.

The range for signed chars is SCHAR_MIN to SCHAR_MAX. Quite often
(on machines with 8 bits in a char and 2's complement) this is the
range between -128 and +127. The range for unsigned char is 0 to
UCHAR_MAX (quite often this is 0 to 255). The ranges of -127 to
127 for signed and 0 to 255 for unsigned chars are the minimum
requirements, so you can be sure you can store numbers from these
ranges wherever you have a standard-compliant C compiler. While
there probably are some machines where you could also store -256
in a signed char, you shouldn't rely on it; on many machines it
won't work.
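If you're curious what your own machine uses, a similar sketch (again
assuming a hosted implementation with <limits.h>) will tell you:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* On typical machines these macros all fit in an int, so %d is fine. */
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("SCHAR_MIN = %d\n", SCHAR_MIN);
    printf("SCHAR_MAX = %d\n", SCHAR_MAX);
    printf("UCHAR_MAX = %d\n", UCHAR_MAX);
    return 0;
}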
Does it mean that we can assign the same ASCII value to both signed
and unsigned? That means the ASCII value can be represented with a
type of signed char and also unsigned char?

Yes: since ASCII characters are all in the range 0 to 127, they can
always be stored in a signed as well as in an unsigned char.
For example
int main(void)
{
    signed char a = 'a';
    unsigned char b = 'b';

There's nothing the compiler should complain about as long as you're
using ASCII (it's different with EBCDIC, where 'a' is 129 and most of
the other letters are also above 127, so you'd be better off using
unsigned char).
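As a tiny illustration of the ASCII case (a sketch, assuming an ASCII
execution character set):

#include <stdio.h>

int main(void)
{
    signed char   s = 'a';  /* 97 in ASCII, comfortably below SCHAR_MAX */
    unsigned char u = 'a';

    /* Both values are promoted to int when passed to printf(). */
    printf("%d %d\n", s, u);  /* prints "97 97" on an ASCII system */
    return 0;
}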
Regards, Jens
 
Tim Prince

(e-mail address removed) wrote:

Yes: since ASCII characters are all in the range 0 to 127, they can
always be stored in a signed as well as in an unsigned char.


There's nothing the compiler should complain about as long as you're
using ASCII (it's different with EBCDIC, where 'a' is 129 and most of
the other letters are also above 127, so you'd be better off using
unsigned char).
It's been a while, but I once used a PRIMOS system where the default ASCII
representation had the high bit set.
 
Darrell Grainger

For int data type the default range starts from signed to unsigned. If
we don't want negative value we can force an unsigned value. The same
goes for long also.

First sentence doesn't make sense to me. The rest seems obviously true.
But I don't understand why we have signed char which is -256. Does it
mean that we can assign the same ASCII value to both signed and
unsigned? That means the ASCII value can be represented with a type of
signed char and also unsigned char?

The first sentence of this paragraph makes no sense to me. There are
systems where a char is 16 bits. On such systems you can have a signed
char with a value of -256. Maybe the confusion is the idea that char is
only ever used to hold characters. It can be used for that purpose, but
it can also be used as an integer data type with a very small range of
values. If you need to save space and you never need anything outside
the range of a char, then use a char.

As to your question, the ASCII character set is in the range 0 to 127. A
signed char is typically in the range -128 to 127. An unsigned char is
typically in the range 0 to 255. The ASCII character set will fit in both.
For example
int main(void)
{
    signed char a = 'a';
    unsigned char b = 'b';
    printf("%i %c", a, b);
    return 0;
}

If you give printf a %i it is expecting an int. You are passing it a
signed char. This will have undefined behaviour. Did you mean to use:

printf("%c %c\n", a, b);
 
Keith Thompson

There's nothing the compiler should complain about as long as you're
using ASCII (it's different with EBCDIC, where 'a' is 129 and most of
the other letters are also above 127, so you'd be better off using
unsigned char).

On any system that uses EBCDIC as the default encoding, plain char
will almost certainly be unsigned.

To oversimplify slightly:

Use plain char to hold characters (the implementation will have chosen
an appropriate representation). Use unsigned char to hold bytes. Use
signed char to hold very small numeric values. (I actually haven't
seen much use for explicitly signed char.)
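A minimal sketch of that division of labour (the variable names here
are just illustrative):

#include <stdio.h>

int main(void)
{
    char ch = 'x';              /* a character: signedness is the implementation's choice */
    unsigned char byte = 0xFF;  /* a raw byte: always in the range 0..UCHAR_MAX */
    signed char tiny = -5;      /* a very small signed number */

    printf("%c %u %d\n", ch, (unsigned)byte, tiny);
    return 0;
}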
 
Jack Klein

First sentence doesn't make sense to me. The rest seems obviously true.


The first sentence of this paragraph makes no sense to me. There are
systems where a char is 16 bits. On such systems you can have a signed
char with a value of -256. Maybe the confusion is the idea that char is
only ever used to hold characters. It can be used for that purpose, but
it can also be used as an integer data type with a very small range of
values. If you need to save space and you never need anything outside
the range of a char, then use a char.

As to your question, the ASCII character set is in the range 0 to 127. A
signed char is typically in the range -128 to 127. An unsigned char is
typically in the range 0 to 255. The ASCII character set will fit in both.


If you give printf a %i it is expecting an int. You are passing it a
signed char. This will have undefined behaviour. Did you mean to use:

No, he is not. One can't pass any type of char as an argument to a
variadic function beyond the specified ones. The char 'a' will be
promoted to int and the behavior is perfectly defined.
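That is, the call behaves as if the value had been converted to int
first; these two calls are equivalent (a sketch; signed char always
promotes to int, since int can represent every signed char value):

#include <stdio.h>

int main(void)
{
    signed char a = 'a';
    printf("%i\n", a);       /* a is promoted to int: well defined */
    printf("%i\n", (int)a);  /* the explicit conversion is equivalent */
    return 0;
}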

Technically, passing unsigned char 'b' to printf() with a "%c"
conversion specifier could be undefined because:

1. The implementation might have UCHAR_MAX > INT_MAX (in other words,
UCHAR_MAX == UINT_MAX) and so 'b' will be converted to unsigned,
rather than signed, int.

2. The standard suggests, but does not require, that the signed and
unsigned integer types be interchangeable as function argument and
return types.

So this just could be undefined on a platform where the character
types have the same number of bits as int (there are some, believe me)
and unsigned ints are passed to variadic functions differently than
signed ints are.

I would not hold my breath waiting for such an implementation to
appear. From a QOI (quality of implementation) point of view it would be horrible.
 
Giorgos Keramidas

For int data type the default range starts from signed to unsigned. If
we don't want negative value we can force an unsigned value. The same
goes for long also.
True.

But I don't understand why we have signed char which is -256.

We don't.

The smallest value that fits in 8 bits (which is the minimum size a
signed char can have, IIRC) is not -256 but -128. But your programs
shouldn't depend on that. Use SCHAR_MIN instead of an inline "magic"
value and you'll be fine ;-)
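For instance (a sketch, again assuming <limits.h>):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    signed char lowest = SCHAR_MIN;  /* portable: never hard-code -128 */
    printf("lowest signed char here: %d\n", lowest);
    return 0;
}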
Does it mean that we can assign the same ASCII value to both signed
and unsigned?

An unsigned char can hold values up to UCHAR_MAX. I'm not sure if
converting this value to signed char and back to unsigned will always
work as expected.
That means the ASCII value can be represented with a type of signed
char and also unsigned char?

No. SCHAR_MAX is usually 127 (if chars are 8 bits wide), which is
smaller than some of the values that an unsigned char can store.
For example

int main(void)
{
    signed char a = 'a';
    unsigned char b = 'b';
    printf("%i %c", a, b);
    return 0;
}

The above code does not warn about the assignment.

It depends on the warnings you have enabled. Here it doesn't even
compile cleanly, because printf() is called before a declaration is
visible:

foo.c:5: warning: implicit declaration of function `printf'
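Including the missing header fixes that; a corrected version of the
program:

#include <stdio.h>   /* declares printf() */

int main(void)
{
    signed char a = 'a';
    unsigned char b = 'b';
    printf("%i %c\n", a, b);
    return 0;
}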

Giorgos
 
Old Wolf

There's nothing the compiler should complain about as long as you're
using ASCII (it's different with EBCDIC, where 'a' is 129 and most of
the other letters are also above 127, so you'd be better off using
unsigned char).

The type 'char' has to be able to represent all members of the basic
character set, which includes 'a'. If the machine has 8-bit chars and
'a' == 129, then plain char must be unsigned.
 
Peter Nilsson

Jack Klein said:
(Darrell Grainger) wrote in comp.lang.c:

Unless plain char is signed, there's no requirement that the value of 'a'
fit within the range of signed char.
No, he is not. One can't pass any type of char as argument to a
variadic function beyond the specified ones. The char 'a' will be
promoted to int

This is ambiguous, since a literal 'a' is already of type int and no
promotion would be required. Of course, Jack is talking about the
promotion of the signed and unsigned chars a and b respectively when
they are used as arguments to printf.
 
