signed char range how...??


naunetr

hi all,

i'm reading the "tomsweb" tutorial here
http://cprog.tomsweb.net/cintro.html and it says that range of signed
char is -127...+127. but this prog prints -128...127. also my textbook
says -128...127 too. what is correct? thanks.

#include <stdio.h>
main()
{
    signed char SignedCharVar;
    SignedCharVar = 0;

    while(1)
    {
        printf("%d, ", SignedCharVar);
        SignedCharVar = SignedCharVar+1;
    }
}
 

Peter Nilsson

naunetr said:
hi all,

i'm reading the "tomsweb" tutorial here
http://cprog.tomsweb.net/cintro.html and it says that range
of signed char is -127...+127.

"C supports the following types: (note that the ranges
indicated are minimum ranges, they may (and will!) be
larger so you shouldn't rely on them having a certain
size)

"Integer types: (non-fractional numbers)

"signed char minimum range: -127..+127"
but this prog prints -128...127.

Your program is flawed. Just include <limits.h> and do

printf("%d..%d\n", SCHAR_MIN, SCHAR_MAX);
also my textbook says -128...127 too.

That is probably based on the assumption of a vanilla two's
complement 8-bit char machine.
what is correct?

The standard. Tom's discussion of limits.h is just missing
the preamble given earlier.
 

J. J. Farrell

naunetr said:
i'm reading the "tomsweb" tutorial here
http://cprog.tomsweb.net/cintro.html and it says that range of signed
char is -127...+127. but this prog prints -128...127. also my textbook
says -128...127 too. what is correct? thanks.

#include <stdio.h>
main()
{
    signed char SignedCharVar;
    SignedCharVar = 0;

    while(1)
    {
        printf("%d, ", SignedCharVar);
        SignedCharVar = SignedCharVar+1;
    }
}

A signed char must be able to hold values in the range -127 to 127. The
actual range varies from compiler to compiler, and can be larger than
this by any amount.
 

santosh

naunetr said:
hi all,

i'm reading the "tomsweb" tutorial here
http://cprog.tomsweb.net/cintro.html and it says that range of signed
char is -127...+127. but this prog prints -128...127. also my textbook
says -128...127 too. what is correct? thanks.

The Standard has to take into account non-twos-complement architectures
too. Your book probably assumes a twos-complement representation.
#include <stdio.h>
main()

Implicit int has been disallowed since C99. Give main an explicit
return type: int main(void).
{
signed char SignedCharVar;
SignedCharVar = 0;

while(1)
{
printf("%d, ", SignedCharVar);
SignedCharVar = SignedCharVar+1;

In Standard C the addition happens in int, and converting the
out-of-range result back to signed char yields an
implementation-defined result (or raises an implementation-defined
signal), so you cannot rely on this program's output. Just use the
macros defined in limits.h. That's what they are for.
 

Philip Potter

santosh said:
The Standard has to take into account non-twos-complement architectures
too. Your book probably assumes a twos-complement representation.

Even twos-complement is allowed to use all-bits-one as a trap
representation, and thus have range -127..127.

Phil
 

James Kuyper

Philip Potter wrote:
...
Even twos-complement is allowed to use all-bits-one as a trap
representation, and thus have range -127..127.

That is debatable, and has been debated on comp.std.c. Not that I
disagree; I was arguing your side in that debate. However, the other
side interpreted the wording of the standard as allowing the
representation that would otherwise represent negative zero to be a trap
representation, but permitting no other trap representations.

I was arguing that it is permitted for an implementation to declare a 32
bit int type to have INT_MIN = -32768 and INT_MAX=2147483647, by
treating all bit patterns that would otherwise represent values from
-2147483648 to -32769 to be trap representations. This would break a lot
of existing code, and I doubt that this was the committee's intent, but
I don't see it as violating any actual requirements in the standard.
 

Philip Potter

James said:
Philip Potter wrote:
...

That is debatable, and has been debated on comp.std.c. Not that I
disagree; I was arguing your side in that debate. However, the other
side interpreted the wording of the standard as allowing the
representation that would otherwise represent negative zero to be a trap
representation, but permitting no other trap representations.

I was arguing that it is permitted for an implementation to declare a 32
bit int type to have INT_MIN = -32768 and INT_MAX=2147483647, by
treating all bit patterns that would otherwise represent values from
-2147483648 to -32769 to be trap representations. This would break a lot
of existing code, and I doubt that this was the committee's intent, but
I don't see it as violating any actual requirements in the standard.

I can't see how there could be any debate, at least as far as n1256 is
concerned. From 6.2.6.2p2:

"Which of these [sign-magnitude, 1's-comp or 2's-comp] applies is
implementation-defined, as is whether the value with sign bit 1 and all
value bits zero (for the first two), or with sign bit and all value bits
1 (for ones’ complement), is a trap representation or a normal value. In
the case of sign and magnitude and ones’ complement, if this
representation is a normal value it is called a negative zero."

"If the sign bit is zero, it shall not affect the resulting value. If
the sign bit is one, the value shall be modified in one of the following
ways:

— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value -(2^N) (two’s complement);
— the sign bit has the value -(2^N - 1) (ones’ complement).

Which of these applies is implementation-defined, as is whether the
value with sign bit 1 and all value bits zero (for the first two), or
with sign bit and all value bits 1 (for ones’ complement), is a trap
representation or a normal value. In the case of sign and magnitude and
ones’ complement, if this representation is a normal value it is called
a negative zero."

It seems quite clear that binary 10000000 is a possible trap
representation for signed char under twos complement.

Phil
 

James Kuyper

Philip said:
I can't see how there could be any debate, at least as far as n1256 is
concerned. From 6.2.6.2p2:

" ....
Which of these applies is implementation-defined, as is whether the
value with sign bit 1 and all value bits zero (for the first two), or
with sign bit and all value bits 1 (for ones’ complement), is a trap
representation or a normal value. In the case of sign and magnitude and
ones’ complement, if this representation is a normal value it is called
a negative zero."

It seems quite clear that binary 10000000 is a possible trap
representation for signed char under twos complement.

You're right; I incorrectly saw your statement as an extension of the
issue I was talking about. I forgot that your particular case was
covered explicitly, which makes it unrelated to my issue.
 

Philip Potter

Philip said:
Even twos-complement is allowed to use all-bits-one as a trap
representation, and thus have range -127..127.

Sorry, I meant sign-bit-one, value-bits-zero.

Phil
 

Flash Gordon

Peter Nilsson wrote, On 19/11/07 02:46:
"C supports the following types: (note that the ranges
indicated are minimum ranges, they may (and will!) be
larger so you shouldn't rely on them having a certain
size)


The standard. Tom's discussion of limits.h is just missing
the preamble given earlier.

No, Tom's discussion is not missing the preamble; it's just that the OP
missed it. Specifically it says:
| C supports the following types: (note that the ranges indicated are
| minimum ranges, they may (and will!) be larger so you shouldn't rely
| on them having a certain size)
 
