How to define an 8-bit integer


DanielJohnson

I have seen a lot of legacy code use uint8_t or uint16_t to override the
compiler's default integer width.

I am using gcc on Linux, and sizeof(int) gives me 4. I want the
ability to define an 8-, 16- or 32-bit integer.

I tried using uint8_t, but gcc doesn't like it. Do I need to modify
any settings or declare typedefs?

Please guide me. Your answer is greatly appreciated.

Thanks,
 

James Kuyper

DanielJohnson said:
I have seen a lot of legacy code use uint8_t or uint16_t to override the
compiler's default integer width.

I am using gcc on Linux, and sizeof(int) gives me 4. I want the
ability to define an 8-, 16- or 32-bit integer.

I tried using uint8_t, but gcc doesn't like it. Do I need to modify
any settings or declare typedefs?

By default, gcc compiles for a non-conforming version of C that is
sort of like C90 with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in C99.
Use -std=c99 to turn on support for C99. Add -pedantic to come a little
closer to full conformance to C99.

If for any reason you can't use C99, use the following:

typedef unsigned char uint8;

If a compiler supports any unsigned 8-bit integer type, unsigned char
will be such a type. If the compiler has no 8-bit integer type,
'unsigned char' is going to be the best approximation possible for that
compiler.
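
To round that out, here is a sketch of how such a fallback might be
extended to 16- and 32-bit types; the names uint8/uint16/uint32 are
local conventions, not standard ones, and the <limits.h> checks test
value ranges rather than object sizes:

#include <limits.h>

#if UCHAR_MAX != 0xff
#error "unsigned char is not 8 bits on this platform"
#endif
typedef unsigned char uint8;

#if USHRT_MAX != 0xffff
#error "unsigned short is not 16 bits on this platform"
#endif
typedef unsigned short uint16;

#if UINT_MAX != 0xffffffff
#error "unsigned int is not 32 bits on this platform"
#endif
typedef unsigned int uint32;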
 

Antoninus Twink

DanielJohnson said:
I am using gcc on Linux, and sizeof(int) gives me 4. I want the
ability to define an 8-, 16- or 32-bit integer.

I tried using uint8_t, but gcc doesn't like it. Do I need to modify
any settings or declare typedefs?

All these arcane portability issues have been thought of, solved, and
painfully debugged by the creators of things like the GNU autotools, so
why reinvent the wheel?

Look at the autoconf macros AC_TYPE_INT8_T, AC_TYPE_INT16_T,
AC_TYPE_INT32_T and AC_TYPE_INT64_T.
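
Assuming your configure.ac invokes those macros (and the usual header
checks that define HAVE_STDINT_H), the consuming C code might look like
this sketch; config.h supplies int8_t and friends when the system
headers don't:

#ifdef HAVE_CONFIG_H
#include <config.h>     /* may define int8_t etc. if the system doesn't */
#endif
#ifdef HAVE_STDINT_H
#include <stdint.h>
#endif
#include <stdio.h>

int main(void)
{
    int8_t small = -5;
    int32_t wide = 100000;
    printf("%d %ld\n", (int)small, (long)wide);
    return 0;
}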
 

Keith Thompson

James Kuyper said:
By default, gcc compiles for a non-conforming version of C that is
sort of like C90 with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in
C99. Use -std=c99 to turn on support for C99. Add -pedantic to come a
little closer to full conformance to C99.

I think the problem is that he doesn't have "#include <stdint.h>".
Add that to the top of the file, and uint8_t becomes visible --
assuming the implementation provides <stdint.h>.

On my system, this works whether you use gcc's partial C99 mode or not
-- which is valid behavior, since it's a standard header in C99 and a
permitted extension in C90.

Incidentally, using uint8_t or uint16_t doesn't override anything.
The predefined types are still there, and their sizes don't change for
a given implementation. uint8_t and uint16_t, if they exist, are
nothing more than typedefs for existing predefined types (typically
unsigned char and unsigned short, respectively).

An implementation note: the <stdint.h> header isn't provided by gcc,
it's provided by the library. On my system, the library is glibc,
which does provide <stdint.h>. On another system, a different library
might not provide this header. I suspect that <stdint.h> will be
available on *most* modern implementations, but it's not guaranteed
unless the implementation claims conformance to C99.
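
A minimal program illustrating the point; the file name and flags are
only examples:

/* compile with, e.g.: gcc -std=c99 -pedantic -Wall demo.c */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t a = 200;         /* typedef for unsigned char on most systems */
    uint16_t b = 60000;      /* typically unsigned short */
    uint32_t c = 4000000000u;

    printf("sizes: %zu %zu %zu\n", sizeof a, sizeof b, sizeof c);
    printf("values: %u %u %lu\n", (unsigned)a, (unsigned)b, (unsigned long)c);
    return 0;
}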
 

Martin Ambuhl

DanielJohnson said:
I have seen a lot of legacy code use uint8_t or uint16_t to override the
compiler's default integer width.

#include <stdint.h>
 

Bartc

Malcolm McLean said:

If he has lots of them (as in an array), then it might be useful to
require only a quarter or an eighth of the memory, for example (see the
sketch below).

If he has to talk to some software/hardware that uses specific integer
widths, then again it would be handy.
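
A toy comparison of that space saving, assuming a platform with 4-byte
int and an available uint8_t:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int as_int[1000];      /* typically 4000 bytes */
    uint8_t as_u8[1000];   /* 1000 bytes: a quarter of the space */

    printf("int array: %zu bytes, uint8_t array: %zu bytes\n",
           sizeof as_int, sizeof as_u8);
    return 0;
}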
 

Phil Carmody

Malcolm McLean said:

Why do you need to know his reasons? Is it that you don't
believe him? Do you treat all posters with equal mistrust,
and do you expect others to treat you with the same mistrust?

He wants the ability to do the above. If he #includes stdint.h,
he'll have his wants most easily satisfied, no matter what
his reasons were. If he grabs a decent book on C, then he'll
probably have his wants satisfied far more quickly in the
future.

Phil
 

Ian Collins

Malcolm said:
Because the number of people who think they need integers of a certain
width is much greater than the number who actually do.
As Bartc pointed out, there can be good reasons for wanting a guaranteed
8-bit type, but they are rare.

Not in my world (that of a driver writer) or that of most embedded
programmers. Considering a large proportion of C programmers are
embedded programmers, the need for fixed-width types is much greater
than you think.
 

CBFalconer

DanielJohnson said:
I have seen a lot of legacy code use uint8_t or uint16_t to override
the compiler's default integer width.

I am using gcc on Linux, and sizeof(int) gives me 4. I want the
ability to define an 8-, 16- or 32-bit integer.

I tried using uint8_t, but gcc doesn't like it. Do I need to
modify any settings or declare typedefs?

Please guide me. Your answer is greatly appreciated.

uint8_t etc. are not guaranteed available. The guaranteed integer
types are char, short, int, long. C99 adds long long. These can
all be signed or unsigned.

Code using uint8_t is inherently non-portable. Bytes can be larger
than 8 bits. See <limits.h> for the sizes available on your
system, expressed as MAX and MIN values for each type.
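
A small sketch that prints those <limits.h> ranges, so you can see
which standard type matches the width you need on your system:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);
    printf("USHRT_MAX = %u\n", (unsigned)USHRT_MAX);
    printf("UINT_MAX  = %u\n", UINT_MAX);
    printf("ULONG_MAX = %lu\n", ULONG_MAX);
    return 0;
}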
 

Wolfgang Draxinger

DanielJohnson said:
I tried using uint8_t, but gcc doesn't like it. Do I need to
modify any settings or declare typedefs?

Using GCC, adding

#include <stdint.h>

should do the trick. As the other posters have already said, it
may not be available, so add some checks to your source and
provide alternative ways to define those types.

In the end it boils down to some typedefs from primitive
types, chosen to match the target architecture and compiler
exactly. That's how stdint.h works.
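
For illustration, the relevant lines of a <stdint.h> for a typical
32-bit Linux target boil down to something like the following; the real
glibc header is more involved, and the choices differ per platform:

typedef signed char int8_t;
typedef short int int16_t;
typedef int int32_t;

typedef unsigned char uint8_t;
typedef unsigned short int uint16_t;
typedef unsigned int uint32_t;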

Wolfgang Draxinger
 

Keith Thompson

Tor Rustad said:
I haven't started using C99 features yet, but I see that everyone here
talks about <stdint.h> for some weird reason. :)

For hosted systems, <inttypes.h> is the header file you are looking
for. In addition to including <stdint.h>, <inttypes.h> adds macros and
useful conversion functions.
[...]

And if you don't happen to need those macros and conversion functions,
even on a hosted system, why not use <stdint.h>?
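
For example, <inttypes.h> provides format macros such as PRIu16 and
conversion functions such as strtoimax, neither of which <stdint.h>
has (C99):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint16_t port = 8080;
    intmax_t big = strtoimax("-12345", NULL, 10);

    printf("port = %" PRIu16 "\n", port);  /* format matches uint16_t portably */
    printf("big  = %" PRIdMAX "\n", big);
    return 0;
}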
 

Keith Thompson

Tor Rustad said:
Keith said:
Tor Rustad said:
I haven't started using C99 features yet, but I see that everyone here
talks about <stdint.h> for some weird reason. :)

For hosted systems, <inttypes.h> is the header file you are looking
for. In addition to including <stdint.h>, <inttypes.h> adds macros and
useful conversion functions.
[...]
And if you don't happen to need those macros and conversion functions,
even on a hosted system, why not use <stdint.h>?

IIRC, <stdint.h> was primarily intended for free-standing environments,
so for hosted platforms, the recommendation here should IMO be to use
<inttypes.h>.

So? That may have been the intent, but why should a programmer be
bound by that, or even influenced?

One standard header contains a few declarations. Another standard
header contains those same declarations plus some other stuff. If I
don't need the other stuff, what is the disadvantage of using the
first header?

If you don't need those macros/functions at one point in time... you can
speed up the compilation by a tiny fraction. In practice, this
speed-up shouldn't matter, even when targeting embedded Linux.

Sure, there's nothing wrong with using <inttypes.h> if you want to.
I'm just saying that there's nothing wrong with using <stdint.h> if
you want to.
 

Charlton Wilbur

PC> Why do you need to know his reasons?

Because any experienced programmer who's helped others has run into XY
problems: the person asking for help needs to do X, and mistakenly
thinks that Y is the way to accomplish it. So the querent asks
about Y, and the newsgroup spends a lot of time going around in
circles. Y is really not the right solution to X, but because the
querent is asking about Y and not X, everyone's time is wasted.

Asking "Why do you want to do Y?" allows the respondents to say, "Aha!
That's not the best way to accomplish X -- you'll have a much easier
time of it if you try Z." If Y is the best way to do X -- it happens
occasionally -- then help with Y can proceed apace.

It's not a matter of distrust. Knowing *why* someone wants to do
something allows respondents to offer alternative solutions that may
well be better.

Charlton
 

Richard Tobin

Antoninus Twink said:
All these arcane portability issues have been thought of, solved, and
painfully debugged by the creators of things like the GNU autotools, so
why reinvent the wheel?

I've just switched a project to autoconf/automake, and the result is
not pretty. The support for non-gcc compilers is weak (how do I say I
want the highest level of warnings for whatever compiler turns out to
be available?). The previously short cc commands are now each several
lines long, making it hard to see warning messages before everything
has scrolled off the screen. I no longer get errors for undefined
functions until I run the program. Support for generated files
(except those produced by known programs like yacc) is poor, and
doesn't work properly with VPATH on some platforms. And each time I
try a new platform, I find a bunch of new things that I have to write
autoconf macros for.

On balance, it's probably worthwhile, and I don't mean to criticise
the authors, but prepare yourself for a lot of tedious messing around.

-- Richard
 

Chris M. Thomasson

James Kuyper said:
By default, gcc compiles for a non-conforming version of C that is
sort of like C90 with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in C99.
Use -std=c99 to turn on support for C99. Add -pedantic to come a little
closer to full conformance to C99.

If for any reason you can't use C99, use the following:

typedef unsigned char uint8;

[...]

I would add the following if you expect 32-bit systems...


typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];


shi% happens.
 

Chris M. Thomasson

Chris M. Thomasson said:
James Kuyper said:
By default, gcc compiles for a non-conforming version of C that is
sort of like C90 with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in C99.
Use -std=c99 to turn on support for C99. Add -pedantic to come a little
closer to full conformance to C99.

If for any reason you can't use C99, use the following:

typedef unsigned char uint8;

[...]

I would add the following if you expect 32-bit systems...


typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];

well, perhaps:

typedef char tester[
(sizeof(char) * 4 == 32 / CHAR_BIT) ? 1 : -1
];

MAN! I am a fuc%ing retard!!!!!!!!!!!!!!!!!
 

Chris Dollin

Richard said:
Chris M. Thomasson said:

I would add the following if you expect 32-bit systems...


typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];

How does adding a syntax error help on 32-bit systems?

I think other-Chris mistyped (see their later posting).
And what's wrong with a simple assertion that CHAR_BIT is 8?

Assertions don't operate at compile time, but the imploding
array trick does.

--
'Don't be afraid: /Electra City/
there will be minimal destruction.' - Panic Room

 

Phil Carmody

Chris M. Thomasson said:
I would add the following if you expect 32-bit systems...

typedef char tester[
(sizeof(char) * 4 == 32 / CHAR_BIT) ? 1 : -1
];

sizeof(char) is somewhat redundant - it's defined to be 1.
So that's a check that CHAR_BIT is 7 or 8. (Of course,
7 is impossible.) So for that purpose, if I absolutely had
to have such a trap, I'd just keep it simple (no division,
no ?:)

typedef char tester[CHAR_BIT==8];

Ditto the 32-bit ints condition:

typedef char tester[CHAR_BIT*sizeof(int)==32];

However, it does look like some of the C++ Kool-Aid has
cross-pollinated, and I can't say I particularly like
such bombs.

If you want to limit yourself to only using 32-bit ints,
why not code using exact-width integer types? If such a type
can't be found, you'll find out at compile time, without
need for any obfuscation.

Phil
 

James Kuyper

Chris M. Thomasson wrote:
[...]
I would add the following if you expect 32-bit systems...


typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];

I understand what you're trying to do there, but wouldn't an #if/#endif
pair bracketing an #error directive do what you're trying to do in a much
clearer way? In a conforming mode, no compiler can omit the diagnostic
for an array length of 0, but it's perfectly free to accept the program
after issuing the diagnostic - I've used compilers with this "feature".
However, in conforming mode no compiler can accept a translation unit
containing a #error directive that survives conditional compilation.
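
A sketch of that #error alternative. Since sizeof isn't available to
the preprocessor, the int-width test has to go through UINT_MAX, which
checks value bits rather than object size:

#include <limits.h>

#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif

#if UINT_MAX != 0xffffffff
#error "this code assumes a 32-bit unsigned int"
#endif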
 

Nick Keighley

Because sometimes people aren't describing their actual problem.
They are describing the problem they are having with their chosen
solution instead.

are you always this obnoxious?

Ian Collins said:
Not in my world (that of a driver writer) or that of most embedded
programmers. Considering a large proportion of C programmers are
embedded programmers, the need for fixed-width types is much greater
than you think.

I've dabbled with embedded systems, and I also think "the number of
people who think they need integers of a certain width is much greater
than the number who actually do".

I suspect the same applies to device drivers as well.
 
