Is there an integer data type of one byte in gcc?

Ptp

is there an integer data type of one byte in gcc?


I know Delphi has fundamental integer types including Byte, ShortInt...
The Byte type is an unsigned 8-bit integer, as below:

procedure TForm1.Button1Click(Sender: TObject);
var
onebyte: byte;
^^^^

 
Andreas Kahari

is there an integer data type of one byte in gcc?


I know Delphi has fundamental integer types including Byte, ShortInt...
The Byte type is an unsigned 8-bit integer, as below:

procedure TForm1.Button1Click(Sender: TObject);
var
onebyte: byte;
^^^^


I don't know about gcc, but standard C has the char type, which
is one byte by definition. One byte is CHAR_BIT bits (in
<limits.h>), usually 8.
 
Dan Pop

is there an integer data type of one byte in gcc?

Not one, but three, and not only in gcc but in any conforming C compiler:
char, signed char, unsigned char.

Dan
 
Joona I Palaste

Grumble said:
I think he meant 8 bits.

char is guaranteed to be at least 8 bits wide on every platform. If the
OP wants a type that is exactly 8 bits wide, then I would want to know
why.
Is uint8_t guaranteed to be 8 bits wide on every platform?

I'd figure so, on platforms that support it in the first place.

--
/-- Joona Palaste ([email protected]) ------------- Finland --------\
\-- http://www.helsinki.fi/~palaste --------------------- rules! --------/
"Outside of a dog, a book is a man's best friend. Inside a dog, it's too dark
to read anyway."
- Groucho Marx
 
pete

Andreas said:
I don't know about gcc, but standard C has the char type, which
is one byte by definition.

If you were going to use a one-byte integer
for mathematical purposes, rather than for string characters,
I would suggest choosing either "signed char" or "unsigned char"
instead of plain "char".
 
August Derleth

is there an integer data type of one byte in gcc?

If gcc is a conformant C compiler (and it isn't always), it has three
types guaranteed to be at least eight bits wide: char, unsigned char,
and signed char.

What's the difference between signed and unsigned? The signed type can
hold a sign, with some of its behavior left implementation-defined;
the unsigned type cannot hold a sign, and its behavior is more
stringently defined by the Standard in the various edge cases. One of
those edge cases is wrap-around: if you do

#include <limits.h>

unsigned char foo(void)
{
    unsigned char i = UCHAR_MAX;
    i++;
    return i;
}

the return value from foo() is defined (it is 0). If, however, you do

#include <limits.h>

signed char bar(void)
{
    signed char i = SCHAR_MAX;
    i++;
    return i;
}

the return value from bar() is implementation-defined: converting the
out-of-range result back to signed char either yields an
implementation-defined value or raises an implementation-defined
signal (that is, one that can send the machine into a tizzy).

So, if you want to treat your at-least-eight-bits-wide values as an
unsigned type with a strictly defined behavior in case of overflow,
use unsigned char. If you need to hold a sign, use signed char or
char.
 
CBFalconer

Dan said:
Not one, but three, and not only in gcc but in any conforming C
compiler: char, signed char, unsigned char.

At last a correct answer. However, of the three, using char in
arithmetical expressions is likely to lead to unexpected results.
 
Jirka Klaue

August Derleth wrote:
[...]
So, if you want to treat your at-least-eight-bits-wide values as an
unsigned type with a strictly defined behavior in case of overflow,
use unsigned char. If you need to hold a sign, use signed char or
char.

No, use signed char exclusively.

Jirka
 
Jack Klein

If gcc is a conformant C compiler (and it isn't always), it has three
types guaranteed to be at least eight bits wide: char, unsigned char,
and signed char.

[snip]

I know you didn't mean it this way, but it came out quite wrong.

A version of gcc or any other compiler conforming to just ANSI C89 has
at least 9 integer types guaranteed to be, as you put it, "at least
eight bits wide".

In addition to the three you mentioned, signed and unsigned short,
signed and unsigned int, and signed and unsigned long are guaranteed
to be at least eight bits wide. In fact they are guaranteed to be
more than eight bits wide; "more than" is a superset of "at least".

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq
 
Dan Pop

CBFalconer said:
At last a correct answer. However, of the three, using char in
arithmetical expressions

It's not as much using char in arithmetical expressions as it is using
char as an arithmetic type that can cover more than 0..127. Within this
range, plain char works like a charm.
is likely to lead to unexpected results.

If you're lucky. If you aren't, the unexpected results will show up
when someone else tries to use your program on a different platform. And
this is the usual case, because most people know the signedness of char
on their platform, so they don't get unexpected results.

Dan
 
John H. Guillory

char is guaranteed to be at least 8 bits wide on every platform. If the
OP wants a type that is exactly 8 bits wide, then I would want to know
why.


I'd figure so, on platforms that support it in the first place.
I'm not 100% sure that a char isn't *ALWAYS* 8 bits. For the most
part, writing C code that depends on the size of a given data type is
rather risky: it can change at a moment's notice with a compiler
upgrade. Everything is pretty much based on int being the natural
machine word size, which was 16 bits in the DOS days, 32 in the
Windows 95 days, 64 for VAX VMS and other minicomputers, etc... If
you want a language that can guarantee your data file will always be
20 megs per record, program in COBOL. Otherwise, if you want a
language that gives you the power to write small and fast databases,
use C.... In order to work the same on so many platforms, C must be
flexible enough to meet every computer's needs, not just Intel-based
CPUs.....
 
John H. Guillory

August Derleth wrote:
[...]

No, use signed char exclusively.
Actually, to be equivalent to Delphi's "BYTE" you'd use unsigned
char. Pascal/Delphi's shortint type is equivalent to signed
char.....
 
Keith Thompson

John H. Guillory said:
I'm not 100% sure that a char isn't *ALWAYS* 8-bits.

char isn't always 8 bits. It's guaranteed by the standard to be at
least 8 bits, but there are implementations (on DSPs, I think) on
which it's 16 or 32 bits. Some systems, mostly older ones, have, for
example, 9-bit bytes; I don't know whether those systems have
conforming C implementations.
For the most part, writing C code that depends on the size of a given
data type is rather risky. It can change at a moment's notice with a
compiler upgrade. Everything is pretty much based on int being the
natural machine word size, which was 16 bits in the DOS days, 32 in
the Windows 95 days, 64 for VAX VMS and other minicomputers,
etc...

<OT><QUIBBLE>The VAX is a 32-bit system.</QUIBBLE></OT>
 
Robert Wessel

John H. Guillory said:
I'm not 100% sure that a char isn't *ALWAYS* 8-bits.


A char in C is guaranteed to be at least eight bits. It may well be
more. And on more than a few DSPs, it is. C99 uint8_t is always
exactly eight bits wide. If the implementation does not provide such
a type, the typedef for uint8_t will *not* exist. So using uint8_t
will always give you an unsigned 8-bit integer, but will fail to
compile on any implementation not supporting such a type.
 
