Standard integer types vs <stdint.h> types


Richard Heathfield

Paul Hsieh said:
It also might be unnecessarily slow. You are letting the compiler
vendor make decisions for you.

int32_t i = 38700;

is also guaranteed to work, and is totally non-controversial about
what it means or is doing.

Provided, of course, that you are one of those lucky people who has a
conforming C99 compiler. I am not one such.

So 1) you are dismissing the possibility of using floating point.

Check the thread title.

2) Either your bignum library has support for file offsets or you have
vowed not to support the platform-specific extensions for dealing with
file offsets on your machine.

I haven't *vowed* any such thing, but in practice very little of my code
relies on platform-specific extensions. Some, obviously.
 

Ian Collins

Richard said:
Paul Hsieh said:


Provided, of course, that you are one of those lucky people who has a
conforming C99 compiler. I am not one such.

Or working in a POSIX-compliant environment where these types are
required.
 

jacob navia

Richard said:
Paul Hsieh said:

Provided, of course, that you are one of those lucky people who has a
conforming C99 compiler. I am not one such.

You are using a gcc version from before the year 2000. You refuse to
upgrade, and then you say:
> Provided, of course, that you are one of those lucky people who has a
> conforming C99 compiler. I am not one such.


Poor Heathfield. I will start crying now.
 

Richard Heathfield

jacob navia said:
You are using a gcc version from before the year 2000. You refuse to
upgrade,

The version I have is C90-conforming. If gcc/glibc now conformed to C99,
that would be a good reason to upgrade. But it doesn't. So why bother?
 

Paul Hsieh

Paul Hsieh said:


If it matters (which it doesn't always), it is wise to select the ring
precisely, by applying appropriate mathematical operations.

You have to select the type. (Specifically you cannot select Z(2**64)
from just the operations you use -- you have to *declare* a 64 bit
type.)
Unsigned integers wrap around to 0 at U<TYPE>_MAX + 1. As long as I know
that, either it's good enough (in which case that's fine), or it isn't, in
which case I have to force the behaviour I want (see above).

This is just too narrow an approach for me. Even something as simple
as an operation counter for a server used for well-ordering and
fencing operations needs to be dealt with by *detecting* the wrap
around, not avoiding it.
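
For illustration, a sketch of what that detection might look like - an
invented example, not Paul's code - using serial-number comparison in
the style of RFC 1982 on a uint32_t counter, assuming compared values
are always less than 2^31 apart:

#include <stdint.h>

/* Wrap-tolerant ordering for a 32-bit operation counter: a precedes
   b if the wrapping distance from a to b is non-zero and less than
   half the counter space. Works across the 0xFFFFFFFF -> 0 wrap. */
static int seq_before(uint32_t a, uint32_t b)
{
    return a != b && (uint32_t)(b - a) < UINT32_C(0x80000000);
}
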
For signed integers, overflow invokes undefined behaviour anyway, so the
matter doesn't arise.




Done that. Didn't need fixed size ints. Next question.

Well, then I certainly don't need your big integer library. I have no
interest in any big integer library that isn't precisely aware of the
details of the base integer type it's written on top of. Anyone who
has written an *OPTIMIZED* big integer library should know the serious
performance impact this has.

Hint: column multiplication is measurably faster than row
multiplication, *but* will overflow if the number of rows is too high
-- hence you build a multiplier that works up to a fixed number of
rows, which is related to your base integer type, then apply Karatsuba
or other methods for the arbitrary sizes. If you don't at least do
that, then I am not interested in your library (since I *have* dealt
with this in mine.)
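
For reference, a minimal sketch of the column-wise multiply being
described - not Paul's code - assuming 16-bit digits so that each
partial product fits in 32 bits, which lets a 64-bit column
accumulator absorb about 2^32 rows before it could overflow, a bound
that comes straight from the declared widths:

#include <stddef.h>
#include <stdint.h>

/* Column-wise ("comba") multiply: r = a * b, where a and b are
   little-endian digit arrays of length n (n >= 1) and r holds 2*n
   digits. The accumulator's width bounds how many partial products
   one column may sum - the "fixed number of rows" tied to the base
   integer type. */
static void comba_mul(uint16_t *r, const uint16_t *a,
                      const uint16_t *b, size_t n)
{
    uint64_t acc = 0;                 /* column sum plus carry-in  */
    size_t col, i;
    for (col = 0; col < 2 * n - 1; col++) {
        size_t lo = col < n ? 0 : col - n + 1;
        size_t hi = col < n ? col : n - 1;
        for (i = lo; i <= hi; i++)
            acc += (uint32_t)a[i] * b[col - i];
        r[col] = (uint16_t)acc;       /* low 16 bits become a digit */
        acc >>= 16;                   /* the rest carries over      */
    }
    r[2 * n - 1] = (uint16_t)acc;     /* final carry digit          */
}
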
I'm trying to think of a use for less-than-36-bit primes, and failing.

Factoring maybe?
 

Richard Heathfield

Paul Hsieh said:
You have to select the type. (Specifically you cannot select Z(2**64)
from just the operations you use -- you have to *declare* a 64 bit
type.)

If you have C99, you can do that, using long long int. (And if like me you
don't, you can't guarantee that a 64-bit type is available.)

Well, then I certainly don't need your big integer library.

No, of course you don't. But I'm fine with it, thanks.
I have no
interest in any big integer library that isn't precisely aware of the
details of the base integer type it's written on top of.

I use unsigned char, so I know I have at least 8 bits to play with in each
element of the array - which is plenty, thanks.
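
For illustration, such a representation might look like the sketch
below (invented names, not Richard's actual library); the portability
comes from leaning only on the guarantee that CHAR_BIT >= 8:

#include <stddef.h>

/* Arbitrary-precision integer as little-endian base-256 digits held
   in unsigned char - guaranteed at least 8 bits per element on any
   conforming implementation. */
typedef struct {
    unsigned char *digits;  /* one base-256 digit per element */
    size_t ndigits;         /* digits currently in use        */
    int sign;               /* -1, 0 or +1                    */
} bignum;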

Anyone who
has written an *OPTIMIZED* big integer library should know the serious
performance impact this has.

Last time I measured its performance was about four years ago, when it
managed to do in just-about-zero-time a task that took five hours on
someone else's bignum. (It's okay - we straightened out why his was
taking so much time.) I didn't write it for performance, mind you - I
wrote it for fun, trying to make the code as clear as possible - but it
has always been fast *enough* for any purpose I've found for it.
Factoring maybe?

Maybe. Not terribly convinced that the four bits of game are worth the
portability candle.
 

Paul Hsieh

Paul Hsieh said:




Provided, of course, that you are one of those lucky people who has a
conforming C99 compiler. I am not one such.

No, you just need int32_t, which is a lesser and more easily
obtainable requirement.
Check the thread title.

Floating point is a superset of integers. (Check your C library
documentation for the modf() function if you don't believe me.) In
actual practice, I've found doubles as integers to be quite useful -
for example, knowing the exact number of seconds elapsed since 12:00AM
1 Jan 0 AD while using the same type to know the exact number of
milliseconds since an application event.
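
His claim is easy to check: an IEEE 754 double represents every
integer up to 2^53 exactly, so a sketch like this (the timestamp
value is invented) splits one double-typed count into both units:

#include <math.h>
#include <stdio.h>

/* A double holds integer counts exactly up to 2^53, so one type can
   carry both seconds and milliseconds without losing precision. */
int main(void)
{
    double ms = 1234567890123.0;          /* well under 2^53 */
    double s;
    double frac = modf(ms / 1000.0, &s);  /* split off whole seconds */
    printf("%.0f s and %.0f ms\n", s, frac * 1000.0);
    return 0;
}
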
I haven't *vowed* any such thing, but in practice very little of my code
relies on platform-specific extensions. Some, obviously.

Well, I use big integers just for things like cryptography and other
really esoteric things. I tend to use large files a little more
often. When the need arises I would also like to have and use a 64
bit hash function (it's not just a matter of scaling to the prevailing
bitness of machines; 32 bit hashes are just too small for some
applications because of the birthday paradox.)
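
The birthday-paradox arithmetic is easy to verify; a sketch, using
the standard sqrt(2 ln 2) =~ 1.1774 approximation for the 50%
collision point:

#include <math.h>
#include <stdio.h>

/* For a b-bit hash, about 1.1774 * sqrt(2^b) random inputs give even
   odds of a collision: roughly 77,000 for 32 bits versus roughly
   5.1e9 for 64 bits. */
int main(void)
{
    int b;
    for (b = 32; b <= 64; b += 32)
        printf("%d-bit hash: ~%.0f inputs for 50%% collision odds\n",
               b, 1.1774 * sqrt(pow(2.0, b)));
    return 0;
}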
 

jacob navia

Richard said:
I use unsigned char, so I know I have at least 8 bits to play with in each
element of the array - which is plenty, thanks.

Yes, VERY efficient.
You wrote it for fun, and look, it is really comic.

Like your bit string functions in "C Unleashed"... a few macros
and "finished". The important thing is that it is portable.
Whether it is efficient, you do not care.

It is a design philosophy. I do not like it, and I think
that kind of software is maybe portable, but it is not worth
spending any effort on it.

But here it is a matter of philosophy.

Last time I measured its performance was about four years ago, when it
managed to do in just-about-zero-time a task that took five hours on
someone else's bignum.

Who cares? There is ALWAYS something WORSE :)

(It's okay - we straightened out why his was taking so much time.) I
didn't write it for performance, mind you - I wrote it for fun, trying
to make the code as clear as possible - but it has always been fast
*enough* for any purpose I've found for it.

Sure.


Maybe. Not terribly convinced that the four bits of game are worth the
portability candle.

This obsession with "portability", as if the end user would care if
his program that takes forever to run is easy to port to some
washing machine or whatever.

Making a bignum library using chars is just ridiculous.
 

Paul Hsieh

Paul Hsieh said:


If you have C99, you can do that, using long long int. (And if like me you
don't, you can't guarantee that a 64-bit type is available.)

The way you determine whether a 64 bit integer overflows on a 72 bit
machine is different from how you do it on a 64 bit machine (as in,
the code you write is different). If you want to just do it one way,
it is helpful if you have int64_t, not long long int.
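
A sketch of the "one way" version, assuming uint64_t is available;
the check is identical whether the native word is 64 or 72 bits,
because uint64_t arithmetic is modulo 2^64 by definition:

#include <stdint.h>

/* Unsigned 64-bit add with wrap detection - portable to any host
   that provides uint64_t, regardless of its native word size. */
static int add_wraps_u64(uint64_t a, uint64_t b, uint64_t *sum)
{
    *sum = a + b;      /* well-defined: wraps modulo 2^64 */
    return *sum < a;   /* wrapped iff the result shrank   */
}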

And besides, for all intents and purposes, you *DON'T* have C99.
No, of course you don't. But I'm fine with it, thanks.


I use unsigned char, so I know I have at least 8 bits to play with in each
element of the array - which is plenty, thanks.

Oh, LOL! OK, so you have functional support that's useful for
integers sized maybe a few hundred bits. My needs are a little
different (I want to replace GMP in practical applications because
it's not thread safe, and I want to be able to scale via lockless
primitives.)
[...] Anyone who
has written an *OPTIMIZED* big integer library should know the serious
performance impact this has.

Last time I measured its performance was about four years ago, when it
managed to do in just-about-zero-time a task that took five hours on
someone else's bignum. (It's okay - we straightened out why his was
taking so much time.)

You wrote code that's not worse than the worst thing out there,
therefore it's good enough? Well, I also agree that insertion sort is
better than bubble sort, but ...
[...] I didn't write it for performance, mind you - I wrote it
for fun, trying to make the code as clear as possible - but it has always
been fast *enough* for any purpose I've found for it.
Factoring maybe?

Maybe. Not terribly convinced that the four bits of game are worth the
portability candle.

You are missing my point. It's not the 4 bits *above* 32 that are the
problem, it's the 4 bits *BELOW* 40 that are the problem. If you feed
a 36 bit algorithm a 40 bit value, then it might not function
correctly.

The point is that not having a precise idea of your integer types
means you can fail in *every* direction. Having precise integer types
just makes so many issues go away.
 

Ian Collins

Richard said:
Paul Hsieh said:


If you have C99, you can do that, using long long int. (And if like me you
don't, you can't guarantee that a 64-bit type is available.)

So you ignore native 64 bit types if they are available? You don't
have to have C99 to get 64 bit types.
 

CBFalconer

Ian said:
CBFalconer wrote:
.... snip ...


Isn't that what <stdint.h> does for you in a standardised form?

No, because the contents of stdint.h are optional, and you have to
pick one that is both suitable and present. If the machine doesn't
have a suitable type, they aren't there. However, the types are all
aliases for something in the char, short, int, long, long long (and
unsigned) group. Use of limits.h allows you to select the optimum
for your situation at compile time, and set an equivalent.
Something like:

#include <limits.h>

#if INT_MAX >= 2147483647  /* int has at least 32 value bits */
# define THETYPE int
#else
# define THETYPE long
#endif

and now the rest of the code only sees the standard types (and
THETYPE). You also have the side benefit of not needing C99,
because C90 will work just fine.
 

CBFalconer

Paul said:
.... snip ...


It also might be unnecessarily slow. You are letting the compiler
vendor make decisions for you.

int32_t i = 38700;

is also guaranteed to work, and is totally non-controversial about
what it means or is doing.

No, it isn't. It will fail miserably on a C90 system. Instead, all
you need is a quick macro somewhere that will define MYTYPE as int
or long, depending on values in limits.h. This is system
independent, avoids oversized beasts, etc. It doesn't need sizes
that are multiples of 8, either. How are you going to handle a
machine with a 9 bit byte, an 18 bit short, a 36 bit int, and a 72
bit long? Or other?
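
For illustration, one such macro - the 65535 threshold is invented
for the example - showing why the 9/18/36/72 machine needs no special
case: the test is on value range, not stored width:

#include <limits.h>

/* Narrowest standard type that can count to 65535. An 18-bit short
   qualifies directly; with a 16-bit short and 32-bit int this picks
   int; on a 16-bit-int machine it falls through to long. */
#if SHRT_MAX >= 65535
# define MYTYPE short
#elif INT_MAX >= 65535
# define MYTYPE int
#else
# define MYTYPE long
#endif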
 

CBFalconer

Ian said:
Richard Heathfield wrote:
.... snip ...


Or working in a POSIX-compliant environment where these types are
required.

How does that remove the need for a C99 compiler? Besides which,
this is c.l.c, and POSIX is not specified in any C standard that I
have seen.
 

CBFalconer

Richard said:
Paul Hsieh said:
.... snip ...


I'm trying to think of a use for less-than-36-bit primes, and
failing.

How about a program to output all 35 bit primes? :) Now it can
avoid calculating all those pesky less-than-35-bit primes. :)
 

Richard Heathfield

Paul Hsieh said:
Floating point is a superset of integers.

If that is true, then all integers are floating point numbers. You might
consider it worthwhile to indulge in such sophistry, but you just lost my
attention.

Well I use big integers just for things like cryptography and other
really esoteric things.

How nice. I'm not quite sure what's so esoteric about cryptography,
and I have found my bignum library to be perfectly adequate for
algorithms such as Diffie-Hellman and RSA, which need pretty vast
numbers if they're to be any use.

Anyway, like I said, I'm out. If you want to discuss floating point in an
integer context, that's up to you, but I have more productive ways to
spend my time (like, for example, drinking coffee).
 

Richard Heathfield

jacob navia said:
Yes, VERY efficient.

No, not really. Nevertheless, of the dozen or so bignum libraries I've seen
around the place (including GMP and Miracl), it's probably about halfway
down the list in speed terms. Not super-fast, no, but at least it's not
mind-gratingly slow.
You wrote it for fun,

Right.

and look, it is really comic.

Do you often make such snap judgements about code you've never seen?
Like your bit string functions in "C Unleashed"... a few macros
and "finished".

The purpose of a book like "C Unleashed" is to develop the reader's
understanding and knowledge, *not* to present a shrinkwrap product. The
book was already vast enough as it was. If we'd put much more in, we'd
have had complaints from the hospitals about the increased incidences of
back strain.
The important thing is that it is portable.

Readable, correct, portable, efficient. More or less in that order. I know
you're going to jump on me for putting readability before correctness, but
I maintain that it is easier to make a readable program correct than it is
to make a correct program readable. What's more, it doesn't matter how
fast the code runs on my machine if it doesn't run on my user's machine -
and, when writing a book on C, one has to bear in mind that the reader
could be using *any* platform. So yes, portability is very important to
me. Speed comes in a poor fourth, but at least it is in the top four. My
objective w.r.t. speed is that it should be "fast enough" - i.e. have no
blatantly obvious and significant inefficiencies. I generally try to
achieve this by using good algorithms.
Whether it is efficient, you do not care.

I care very much about efficiency. It's just that there are other things I
care about more.
It is a design philosophy. I do not like it, and I think
that kind of software is maybe portable, but it is not worth
spending any effort on it.

You have the luxury of knowing which platform your users are using. I very
often don't.
Making a bignum library using chars is just ridiculous.

Really? Gosh.
 

Richard Heathfield

Ian Collins said:
So you ignore native 64 bit types if they are available?

If I'm writing portable code, yes, I do. It makes life so much simpler.
You don't have to have C99 to get 64 bit types.

Right. C places no upper limits on integer widths, and one Cray system came
within a hair's breadth of having 64-bit chars.

Let me ask you something - do you ignore 1024 bit types if they are
available? How about ignoring 131072 bit types if they are available? What
about 1048576 bit types?
 

Ian Collins

Richard said:
Ian Collins said:


If I'm writing portable code, yes, I do. It makes life so much simpler.

I guess that depends on whether performance is one of your design
criteria.

Let me ask you something - do you ignore 1024 bit types if they are
available? How about ignoring 131072 bit types if they are available?
What about 1048576 bit types?

Well, the platforms I work on are either POSIX-compliant, or I use a
POSIX wrapper over the native API, which provides stdint.h.

My approach (shared by a number of open-source projects, libgd for
example) is to test for 64 bit types and use them at configuration or
build time if they are there.
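
For illustration, such a build-time test can be written with limits.h
alone; the names u64 and HAVE_U64 are invented, and the double shift
keeps the constants within C90 preprocessor arithmetic:

#include <limits.h>

/* ((ULONG_MAX >> 31) >> 31) >= 3 holds exactly when unsigned long
   has at least 64 value bits; splitting the shift avoids writing a
   constant too wide for a C90 preprocessor. */
#if ((ULONG_MAX >> 31) >> 31) >= 3
typedef unsigned long u64;
# define HAVE_U64 1
#else
# define HAVE_U64 0   /* caller must fall back to emulation */
#endif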

Once systems with larger sizes become available and I have cause to use
them, I'll extend the tests to cover them. So the answer is no.
 

Jack Klein

Flash Gordon said:


Here at least, I can agree with you.


I would rather not have seen these types at all, since they seem to me to
enshrine poor practice. But I recognise that there are some spheres of
programming in which they could prove useful.

Really? Over the years I have seen a near-infinite (it seems) number
of code samples with typedefs for U16, USHORT, and the like, not to
mention the very poorly thought-out ones that Microsoft came up with,
which are tied to a fixed bit size but don't include it in the name.
So now, a machine word on a Win32 machine is a DWORD.
I think he has a point. At the very least, it becomes *uglier* to read. C
as it stands, if well-written, is at least a relatively elegant language,
not just technically and syntactically but also visually. All these
stretched-out underscore-infested type names will be a real nuisance when
scanning quickly through unfamiliar code.

The problem is that many predominantly desktop/hosted-environment
programmers do not seem to understand how important being able to
specify exact integer types is in system programming and embedded
systems, which are the mainstays of C to this day.

They are not the prettiest integer typedefs I have ever seen, but they
have one overwhelming advantage -- they are in the C standard. So in
the future I won't have to look at someone's code and wonder what
types WORD and SWORD are, because I know what int32_t and uint32_t are
on sight.

ARM sells far more 32-bit cores every year than Intel does, and every
one of them is executing C. Even when their higher level applications
are in C++ or Java, both of them rest on a C framework.

Note I am not accusing anyone in particular of being a "predominantly
desktop/hosted-environment programmer". It is a general observation,
not aimed at anyone in particular.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
 

Richard Heathfield

Jack Klein said:

Note I am not accusing anyone in particular of being a "predominantly
desktop/hosted-environment programmer".

Accuse? My dear chap, it's an honour! :) I will quickly confess to being
precisely what you describe. Whilst I *have* written code specifically for
the embedded world, it forms only a tiny fraction of my experience, which
is mostly in what I still think of as the mainframe/mini/micro world.
 
