is "typedef int int;" illegal????


David R Tribble

Jordan said:
The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".

Exactly. Yes, 'char' is an integer type of C, but it's not an 'int'
type (because 'int' is not allowed as part of its type name).

Doesn't matter anyway; my point is still true that all C compilers
to date (at least those I'm aware of) support two standard integer
types of identical size. Three out of four or four out of five, either
way, there appears to always be a redundant type.

-drt
 

Eric Sosman

David R Tribble wrote on 03/27/06 15:23:
Exactly. Yes, 'char' is an integer type of C, but it's not an 'int'
type (because 'int' is not allowed as part of its type name).

Doesn't matter anyway; my point is still true that all C compilers
to date (at least those I'm aware of) support two standard integer
types of identical size. Three out of four or four out of five, either
way, there appears to always be a redundant type.

Isn't the Alpha under OSF/1 (already mentioned) a
counterexample? It's "four out of four" (or "three out
of three" if you count un-char-itably). If you want to
look from the other angle, it has no "redundant" type.
 

kuyper

Jordan said:
The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".

Perhaps; but why bother talking about 'int' types in the first place?
Why not discuss "integer" types instead?
 

Eric Sosman

Jordan Abel wrote on 03/27/06 15:07:
A 128-bit word size might make sense, though, for a specialized system
that is intended mainly to work with high-precision floating
point. [...]

Not a mere theoretical possibility: DEC VAX supported
four floating-point formats, one of which (H-format) used
128 bits. The small-VAX models I used implemented H-format
with trap-and-emulate, but it was part of the instruction
architecture nonetheless and in that sense a "native" form.
 

Keith Thompson

Eric Sosman said:
Isn't the Alpha under OSF/1 (already mentioned) a
counterexample? It's "four out of four" (or "three out
of three" if you count un-char-itably). If you want to
look from the other angle, it has no "redundant" type.

Alpha OSF/1 has the following:

char        8 bits
short      16 bits
int        32 bits
long       64 bits
long long  64 bits

It has no redundant type only if you ignore C99.

In any case, redundant types aren't necessarily a bad thing. The
standard guarantees a minimum range for each type, and requires a
reasonably large set of types to be mapped onto the native types of
the underlying system. Having some types overlap is better than
leaving gaps.
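
For anyone who hasn't internalized those minimums, here is a rough
sketch of the guarantees as translation-time checks against <limits.h>;
the limits in the #if lines are the minimum magnitudes the standard
requires, and any conforming implementation meets them (usually with
room to spare):

/* Minimal sketch: the minimum ranges C guarantees for the standard
   signed types.  An implementation may exceed them, which is exactly
   how two types end up the same width. */
#include <limits.h>

#if SCHAR_MAX < 127 || SHRT_MAX < 32767 || INT_MAX < 32767
#error "below the minimum magnitudes the C standard requires"
#endif

#if LONG_MAX < 2147483647L || LLONG_MAX < 9223372036854775807LL
#error "below the minimum magnitudes the C standard requires"
#endif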
 

Jordan Abel

Jordan Abel wrote on 03/27/06 15:07:
A 128-bit word size might make sense, though, for a specialized system
that is intended mainly to work with high-precision floating
point. [...]

Not a mere theoretical possibility: DEC VAX supported
four floating-point formats, one of which (H-format) used
128 bits. The small-VAX models I used implemented H-format
with trap-and-emulate, but it was part of the instruction
architecture nonetheless and in that sense a "native" form.

I'm talking about a hypothetical machine that used 128 bits for
everything, as some allegedly now use 32 bits for everything.
 

Jordan Abel

The language makes no such distinction.

We have short ints, long ints, and no char ints. That's a language
distinction if there ever was one. "int type" isn't really a term
defined by the language anyway, and arguably one plausible definition is
"types declared using the keyword 'int'".
 

Keith Thompson

Jordan Abel said:
We have short ints, long ints, and no char ints. That's a language
distinction if there ever was one. "int type" isn't really a term
defined by the language anyway, and arguably one plausible definition is
"types declared using the keyword 'int'".

We also have "short", "unsigned short", "unsigned", "long", "unsigned
long", etc.

If I wanted to define the term "int type", I suppose "any type that
*can* be declared using the keyword 'int'" might be a plausible
definition. However, the standard doesn't define such a term (any
more than it groups long, unsigned long, long long, unsigned long
long, and long double as "long types").

I see absolutely no point either in defining such a term or in
continuing this discussion.
 

RSoIsCaIrLiIoA

It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. For example,
consider the following typical choices of type sizes for various
CPU word sizes:

word | char | short | int | long | long long
-----+------+------+------+------+----------

Data structures and their sizes have a heavy effect on portability
(because when the data all have the same size, the behaviour of
& ^ | etc. on them should be exactly the same everywhere), so all
the portability problems disappear.

So using char, int, short, long, etc. is an error if someone cares
about the portability of a program.
They should have been int8, int16, int32, etc. (with char as int8)
from day one, plus uns8, uns16, uns32, etc.
The problem that remains is that different CPUs have different
'main' word sizes, and that affects efficiency.
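
For what it's worth, C99's <stdint.h> provides more or less this
already, just under different spellings (the exact-width forms are
optional, present where the implementation has matching types):

#include <stdint.h>

/* Sketch: C99 names for the proposed int8/uns8 family. */
int8_t    a;   /* the proposed "int8"  */
int16_t   b;   /* "int16" */
int32_t   c;   /* "int32" */
uint8_t   d;   /* "uns8"  */
uint16_t  e;   /* "uns16" */
uint32_t  f;   /* "uns32" */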
 

Douglas A. Gwyn

Stephen said:
That "long long" even exists is a travesty.

Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".
What are we going to do when 128-bit ints become common in another couple
decades?

Use int_least128_t if you need a standard name for a signed int
with width at least 128 bits. If you don't know what that is,
here's an opportunity to learn.
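
Since a conforming implementation only provides int_least128_t when it
actually has an integer type that wide, portable code has to test for
it first; a minimal sketch:

#include <stdint.h>

/* Sketch: prefer a 128-bit-capable type when the implementation
   offers one, otherwise fall back to the widest type C99 requires.
   INT_LEAST128_MAX is defined exactly when int_least128_t exists. */
#ifdef INT_LEAST128_MAX
typedef int_least128_t wide_int;
#else
typedef int_least64_t  wide_int;
#endif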
 

Douglas A. Gwyn

Keith said:
Mathematically, they're called "Gaussian integers".

And like most specialized types there isn't strong reason
to build them into the language (as opposed to letting the
programmer use a library for them). Probably floating-
complex should have been in that category, were it not for
established Fortran practice.
 

Douglas A. Gwyn

jacob said:
lcc-win32 supports 128 bit integers. The type is named:
int128

We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non-conforming to the
standard. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.
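
To illustrate the point about reserved identifiers, here is a purely
hypothetical sketch of how a compiler could expose a 128-bit extended
type without claiming a name the application is entitled to; the
spelling "__int128" merely stands in for whatever reserved identifier
the implementation actually uses, and none of this describes what
lcc-win32 really does:

/* Hypothetical <stdint.h> fragment for an implementation with a
   128-bit extended integer type.  The underlying spelling is reserved
   (leading double underscore); applications see only standard names. */
typedef __int128           int_least128_t;
typedef __int128           int_fast128_t;
typedef unsigned __int128  uint_least128_t;
typedef unsigned __int128  uint_fast128_t;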
 

kuyper

Douglas said:
Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".

It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.
 

David R Tribble

Keith said:
We also have "short", "unsigned short", "unsigned", "long", "unsigned
long", etc.

If I wanted to define the term "int type", I suppose "any type that
*can* be declared using the keyword 'int'" might be a plausible
definition. However, the standard doesn't define such a term (any
more than it groups long, unsigned long, long long, unsigned long
long, and long double as "long types").

I see absolutely no point either in defining such a term or in
continuing this discussion.

Sorry for the confusion.

But like I said, it doesn't change my point, that all C compilers I've
ever seen have a redundant integer type size.

By itself, this is not necessarily a bad thing, but it does make
writing portable code a headache sometimes. I'm still waiting for
a standard macro that tells me about endianness (but that's
a topic for another thread).

-drt
 

David R Tribble

It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.

Type names like 'long long' have the advantage of being decoupled
from the exact word size of the underlying CPU. That's why you
can write reasonably portable code for machines that don't have
nice multiple-of-8 word sizes.

Some programmers may prefer using 'int_least64_t' over 'long long'.
But I don't.
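
For anyone weighing the two spellings, a small sketch of both side by
side; <inttypes.h> supplies the matching printf format macro for the
size-named type:

#include <stdio.h>
#include <inttypes.h>

/* Sketch: both declarations promise at least 64 bits; on most
   implementations they are in fact the very same type. */
int main(void)
{
    long long     a = 1234567890123LL;
    int_least64_t b = 1234567890123LL;

    printf("%lld\n", a);
    printf("%" PRIdLEAST64 "\n", b);
    return 0;
}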

-drt
 

Keith Thompson

It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.

None of the predefined integer types (char, short, int, long, long
long) have names that specify their actual sizes, allowing the sizes
to vary across platforms. Only minimum sizes are specified. This
encourages code that doesn't assume specific sizes (though there's
still plenty of code that assumes "all the world's a VAX", or these
days, "all the world's an x86"). Introducing a new fundamental type
with a size-specific name would break that pattern, and could break
systems that don't have power-of-two sizes (vanishingly rare these
days, but the standard still allows for them).
 

Wojtek Lerch

David R Tribble said:
I'm still waiting for
a standard macro that tells me about endianness (but that's
a topic for another thread).

One macro, or one per integer type? C doesn't disallow systems where some
types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?

And what about padding bits -- how useful is it to know the endianness of a
type if you don't know where its padding bits are?
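
As a rough illustration of why the classification gets murky, here is
the usual runtime probe; it can tell you which end of an unsigned int
holds the low-order byte, but it cannot distinguish the "other"
orderings or account for padding bits, which is exactly the problem
raised above:

#include <stdio.h>
#include <string.h>

/* Sketch: inspect the object representation of an unsigned int.
   This only answers "where does the least significant byte sit?";
   mixed-endian layouts and padding bits need a fuller description. */
int main(void)
{
    unsigned int x = 1;
    unsigned char bytes[sizeof x];

    memcpy(bytes, &x, sizeof x);

    if (bytes[0] == 1)
        puts("low-order byte first (little-endian)");
    else if (bytes[sizeof x - 1] == 1)
        puts("low-order byte last (big-endian)");
    else
        puts("something else entirely");
    return 0;
}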
 

Douglas A. Gwyn

... It's specifically the choice of "long long" for the
type name that made it so objectionable.

Why is that objectionable? It avoided using up another
identifier for a new keyword, did not embed some assumed
size in its name (unlike several extensions), and
matched the choice of some of the existing extensions.
 

David R Tribble

Wojtek said:
One macro, or one per integer type? C doesn't disallow systems where some
types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?

And what about padding bits -- how useful is it to know the endianness of a
type if you don't know where its padding bits are?

Something along the lines of:
http://david.tribble.com/text/c9xmach.txt

This was written in 1995, before 'long long' existed, so I'd have
to add a few more macros, including:

#define _ORD_LONG_HL n

My suggestion is just one of hundreds of ways to describe
endianness, bit sizes, alignment, padding, etc., that have been
invented over time, none of which ever made it into ISO C.
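
A hypothetical sketch of how a macro of that sort might be consumed;
only the name _ORD_LONG_HL comes from the suggestion above, and its
meaning here (nonzero when 'long' is stored high-order byte first) is
a guess made up for illustration:

/* Hypothetical use of a byte-order macro for 'long'. */
#define _ORD_LONG_HL 0      /* example: little-endian 'long' */

#if _ORD_LONG_HL
  /* 'long' already matches network byte order; no swap needed */
#else
  /* 'long' must be byte-swapped before going on the wire */
#endif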

-drt
 

kuyper

Keith said:
None of the predefined integer types (char, short, int, long, long
long) have names that specify their actual sizes, allowing the sizes
to vary across platforms. Only minimum sizes are specified.

In other words, the built-in types were roughly equivalent to
int_leastN_t or int_fastN_t. I definitely approve of types that are
allowed to have different sizes on different platforms. I think that
they are, by far, the most appropriate types to use in most contexts.

However, while using English adjectives as keywords to specify the
minimum size seemed reasonable when the number of different sizes was
small, it has become steadily less reasonable as the number of
different sizes has increased. The new size-named types provide a more
scalable solution to identifying the minimum size. Were backward
compatibility not an issue, I'd recommend abolishing the original type
names in favor of size-named types. I wouldn't recommend the current
naming scheme for the new types, however - intN_t should have been used
for the fast types, with int_exactN_t being reserved for the
exact-sized types.

This encourages code that doesn't assume specific sizes [...]

The same benefit accrues to the non-exact-sized size-named types.
days, "all the world's an x86". Introducing a new fundamental type
with a size-specific name would break that pattern, and could break
systems that don't have power-of-two sizes (vanishingly rare these
days, but the standard still allows for them).

You're assuming that the size-specific name would identify an
exact-sized type rather than a minimum-sized type. I would not approve
of that solution any more than you would, for precisely the reasons you
give.
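
For reference, a sketch of the three families C99 actually settled on,
as a contrast with the renaming suggested above (intN_t for the fast
types and int_exactN_t for the exact-width ones is the hypothetical
scheme, not what <stdint.h> does):

#include <stdint.h>

/* Sketch: the C99 naming as it stands. */
int32_t       e;   /* exactly 32 bits, no padding, two's complement;
                      optional, present only if such a type exists   */
int_least32_t l;   /* smallest type with at least 32 bits; required  */
int_fast32_t  f;   /* a "fast" type with at least 32 bits; required  */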
 
