Datatypes


anjna22

If we simply write "char" or "int", it is "signed char" or "signed
int".

But with some compilers we have to write "signed int" explicitly.
Why is that?
 

Richard Tobin

> If we simply write "char" or "int", it is "signed char" or "signed
> int".
>
> But with some compilers we have to write "signed int" explicitly.
> Why is that?

No, plain char can be signed or unsigned, but plain int is always
signed. You might have to write signed char if you want a small
signed variable, but you never have to write signed int.

-- Richard
 

Keith Halligan

> If we simply write "char" or "int", it is "signed char" or "signed
> int".

If you write "char", then it will generally have -128 <---> +127. A
signed char will have all negative values, up to -255. The similar is
true for int as well.
 

Richard Heathfield

Richard Tobin said:
> No, plain char can be signed or unsigned, but plain int is always
> signed. You might have to write signed char if you want a small
> signed variable, but you never have to write signed int.

...except in bitfields.
 

Richard Heathfield

Keith Halligan said:
> If you write "char", then it will generally have -128 <---> +127,

Generally, maybe, but not guaranteed.

On systems where plain char is signed by default (typical PCs), the
guarantee is -127 to +127. On systems where it is unsigned by default
(typical mainframes), the guarantee is 0 to 255.

> A signed char will have all negative values, up to -255.

On a system where CHAR_BIT is 9, maybe - but of course it must also have
a corresponding number of positive values and at least one zero value.
More generally, a signed char will be able to represent all values in
the range SCHAR_MIN to SCHAR_MAX.
 

Richard Heathfield

(e-mail address removed) said:
> Can anyone explain difference between char , signed dhar

signed char, presumably.

Okay: signed char is a signed integer type, unsigned char is an unsigned
integer type, and whether char is a signed integer type or an unsigned
integer type is entirely up to the implementation, but it must document
its choice.
 

Joachim Schmitz

> Can anyone explain difference between char , signed dhar

char is implementation-defined as either signed or unsigned; check your
compiler's documentation.
signed dhar is a typo... OK, just kidding: signed char is just that, at
least 8 bits wide and signed.
On some systems they are different; on others they are the same.

Bye, Jojo
 

Chris Dollin

Keith said:
> If you write "char", then it will generally have -128 <---> +127,

That depends on the implementation. Unmarked char can be represented
as [note: /not/ the same /type/ as] either signed char or unsigned
char, at the convenience of the implementation -- which probably
depends on what its underlying machine does on load-byte instructions.

> A signed char will have all negative values, up to -255.

Not on almost all existing machines, it won't, since that would leave
no room for the C character set, whose elements are positive whatever
the signedness of char.

> The similar is true for int as well.

That depends on what degree of similarity you choose.

--
Is it a bird? It is a plane? No, it's: http://hpl.hp.com/conferences/juc2007/
WARNING. Various parts of this product may be more than one billion years old.

Hewlett-Packard Limited registered office: Cain Road, Bracknell,
registered no: 690597 England Berks RG12 1HN
 

Mark L Pappin

Richard Heathfield said:
> Richard Tobin said:
> ...except in bitfields.

In my universe, bitfields without 'signed' or 'unsigned' are just as
signed as any other 'int'.

mlp
 

Martin Ambuhl

> If we simply write "char" or "int", it is "signed char" or "signed
> int".

Wrong. "int" is indeed a signed int, but unadorned "char" may be either
a signed char or an unsigned char.

> But with some compilers we have to write "signed int" explicitly.
> Why is that?

Because that compiler is broken.
 

Ben Pfaff

Mark L Pappin said:
> In my universe, bitfields without 'signed' or 'unsigned' are just as
> signed as any other 'int'.

You inhabit a different universe from the rest of us, then.
From C99 6.7.2:

    ...for bit-fields, it is implementation-defined whether the
    specifier int designates the same type as signed int or the
    same type as unsigned int.

Similar text is in C90.
 

Martin Ambuhl

Keith said:
> If you write "char", then it will generally have -128 <---> +127, A
> signed char will have all negative values, up to -255. The similar is
> true for int as well.

Wrong. If you use unadorned char, it may have a range of at least -127
to +127 if the implementation treats it as signed, _or_ a range of at
least 0 to 255 if the implementation treats it as unsigned.

If you write "char" and it is used for any purpose in which values
outside the range 0 to 127 are possible, then you are living dangerously.
 

Richard Heathfield

Mark L Pappin said:
> In my universe, bitfields without 'signed' or 'unsigned' are just as
> signed as any other 'int'.

The Standard disagrees with you, making it clear that this is
implementation-defined. Several cites are relevant here - here are two,
one from each Standard:

C89 3.5.2.1 "A bit-field may have type int , unsigned int , or signed
int. Whether the high-order bit position of a ``plain'' int bit-field
is treated as a sign bit is implementation-defined. A bit-field is
interpreted as an integral type consisting of the specified number of
bits."

C99 6.7.2(5) "[...] it is implementation-defined whether the specifier
int designates the same type as signed int or the same type as unsigned
int."
 

Keith Thompson

> If we simply write "char" or "int", it is "signed char" or "signed
> int".
>
> But with some compilers we have to write "signed int" explicitly.
> Why is that?

What exactly do you mean by "we have to"? What happens if you write
"int" rather than "signed int"?

If they behave differently (other than in a bit field declaration),
the compiler is broken, but I'd be astonished if any compiler were
actually broken in that particular way. Can you show examples with
(short) real code and actual error messages?
 

Peter Nilsson

Richard Heathfield said:
> Mark L Pappin said:
>
> The Standard disagrees with you,

Perhaps he meant _observable_ universe.

> making it clear that this is implementation-defined.
> Several cites are relevant here - here are two,
> one from each Standard:
>
> C89 3.5.2.1 "A bit-field may have type int , unsigned
> int , or signed int. Whether the high-order bit position
> of a ``plain'' int bit-field is treated as a sign bit is
> implementation-defined. A bit-field is interpreted as
> an integral type consisting of the specified number of
> bits."
>
> C99 6.7.2(5) "[...] it is implementation-defined whether
> the specifier int designates the same type as signed int
> or the same type as unsigned int."

The former just means that C89 is backwards compatible
with K&R C (that didn't have unsigned). The latter just
means that C99 is backwards compatible with C89.

Personally, I think it's ridiculous that this part is
preserved, but bitfields are still restricted to a handful
of integer types. I can't recall ever using a compiler
that implemented the former or restricted the
available integer types for bitfields.

Obviously I haven't turned 60 yet. ;-)
 

Mark L Pappin

Ben Pfaff said:
> You inhabit a different universe from the rest of us, then.
> From C99 6.7.2: ...
> Similar text is in C90.

My bad. Thanks (and to Richard too) for the references.

mlp
 
