Syntax for union parameter

David Brown

"Rick C. Hodgin" <[email protected]> wrote in message

I thought it was odd too. Obviously just bolting on a set of typedefs
was simpler than adding them properly to the implementation.

You both seem to misunderstand that standards-defined headers, such as
<stdint.h>, /are/ part of the language. They are not "hacks" or
"add-ons" - the C standards say exactly what should and should not be in
<stdint.h>.

Why do you think it matters if int32_t is defined in a header that comes
as part of the toolchain, or if it is built in like "int"? The language
defines the type "int32_t", and says exactly what it should be. You use
it in the same way regardless of how it happens to be implemented.

It is common in C, and in a great many other languages, for features to
be implemented as part of the standard library rather than inside the
compiler itself. This greatly simplifies the implementation, while also
making it easier to make code compatible across different versions of
the language. (When using pre-C99 C, you can write your own int32_t
definitions - if it were built in like "int", you could not do so - or
at least, not as easily.)
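
For instance, a minimal sketch of such a fallback (the mappings below are an
assumption for one particular 32-bit target, and would have to be checked
against that compiler's documentation):

#if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 199901L
/* Pre-C99 fallback - the underlying types are assumptions for a typical
   32-bit (ILP32) compiler and must be verified for each target. */
typedef signed char    int8_t;
typedef unsigned char  uint8_t;
typedef short          int16_t;
typedef unsigned short uint16_t;
typedef int            int32_t;
typedef unsigned int   uint32_t;
#else
#include <stdint.h>
#endif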

And of course, you are both making wildly unsupported assumptions that
the types "int32_t", etc., are defined as typedefs of native types.

First, native types are /not/ variable-sized on any given platform. The
sizes are implementation-dependent - that means that the sizes can vary
between implementations, but within a given implementation, the sizes are
fixed. So the people writing <stdint.h> for any given toolchain know
/exactly/ what size "int", "short", "long", etc., are - and if those
sizes ever change, they change <stdint.h> to match.

Secondly, there is absolutely no requirement that these types be
implemented as typedefs (although obviously that is quite a common
method). The compiler is free to implement them as built-in types
(though it can't treat them as keywords, as it does for "int", "long"
and "short"), or using compiler-specific extensions. As an example, in
avr-gcc they are defined as typedefs but use a compiler-specific
extension rather than plain "int", "short", etc.:

(excerpt from <stdint.h> from avr-gcc-4.5.1)

typedef int int8_t __attribute__((__mode__(__QI__)));
typedef unsigned int uint8_t __attribute__((__mode__(__QI__)));
typedef int int16_t __attribute__ ((__mode__ (__HI__)));
typedef unsigned int uint16_t __attribute__ ((__mode__ (__HI__)));
typedef int int32_t __attribute__ ((__mode__ (__SI__)));
typedef unsigned int uint32_t __attribute__ ((__mode__ (__SI__)));
#if !__USING_MINT8
typedef int int64_t __attribute__((__mode__(__DI__)));
typedef unsigned int uint64_t __attribute__((__mode__(__DI__)));
#endif

The reason this is done is that this particular toolchain supports a
non-standard command-line flag (-mint8) to make "int" 8-bit. It is not often
used, but for some code it is vital to make the object code as small and
fast as possible (the AVR is 8-bit). So the header file uses this
compiler extension which gives exact type sizes.
 
David Brown

It would defy the logic of standards to define a "C" standard which
defines behavior, and then have a subsidiary "C-9bitter" standard that
breaks some of these requirements. That would mean that "C-9bitter" IS NOT C.

C took the direct opposite approach: "C", the standard, establishes a
minimal set of guarantees that gives you enough for most programming, and
allows auxiliary standards, like POSIX, to define what the C standard
left undefined or implementation-defined. (I don't believe POSIX
violates any of the C requirements.) Thus in C, CHAR_BIT is >= 8, while
in POSIX it is == 8, and so on.

It is also possible to add an "informal standard", something like "my
program assumes int is 32 bits and 2's complement", and everything that
follows from that (this sort of informal standard should be well
documented). You are thus declaring that your program should not be used
on a compiler whose implementation-defined characteristics do not match
your "standard" (and if you define your standard properly, and code
correctly, you should be able to use ANY compiler that meets that standard).

Note that the documentation here can often be written as:

#include <limits.h>

_Static_assert(CHAR_BIT == 8,
               "This code is written with the assumption that char is 8 bits");
_Static_assert(sizeof(int) == 4,
               "This code is written with the assumption that int is 32 bits");

This sort of thing makes the assumptions clear, and makes the code fail
to compile (with a clear error message) if the assumptions do not hold.

(If your compiler does not support C11 _Static_assert() yet, then it's
not hard to make your own with a couple of macros. But the native static
assertion is the best choice if it is supported.)
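
A minimal sketch of such a macro fallback (the macro names here are just
illustrative, nothing standard):

#include <limits.h>

#define JOIN2(a, b) a##b
#define JOIN(a, b) JOIN2(a, b)
/* The array size goes negative - and the compiler rejects the typedef -
   whenever the condition is false. */
#define MY_STATIC_ASSERT(cond) \
    typedef char JOIN(my_static_assert_, __LINE__)[(cond) ? 1 : -1]

MY_STATIC_ASSERT(CHAR_BIT == 8);
MY_STATIC_ASSERT(sizeof(int) == 4);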
 
Keith Thompson

Rick C. Hodgin said:
C doesn't have fixed types. Not even C99. It has an add-on
typedef hack which allows a person to use "fixed-size types" within its
variable-sized arsenal.

There is very little practical difference.
But as someone else pointed out, there are some
platforms which may not be able to implement those types because they are
not supported on the hardware, or in the compiler. In such a case, because
C does not require that 64-bit forms be supported, for example, now I'm
writing manual functions to overcome what C is lacking.

C requires long long to be at least 64 bits, and I have yet to see a
C compiler where it isn't exactly 64 bits.
 
Keith Thompson

Rick C. Hodgin said:
I think typedefs are brilliant. I include them in RDC. They have uses
galore and I use them directly and indirectly in my code.

I think it's beyond lunacy to typedef variable-sized native types to a
form that then provides explicit-sized types through typedefs when they
should be expressly provided for by the language, and not through the
typedef add-on hack.

You've said that you care about the CPU instructions generated for your
code.

Write a C program that uses, for example, uint32_t. Write an equivalent
program that uses unsigned int on a platform where that type is 32 bits
wide. Compare the generated code.
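
For instance, a pair of functions along these lines (the names are mine, and
the comparison assumes a platform where unsigned int is 32 bits), built with
something like "gcc -O2 -S":

#include <stdint.h>

uint32_t sum_fixed(uint32_t a, uint32_t b) { return a + b; }
unsigned int sum_native(unsigned int a, unsigned int b) { return a + b; }

/* On such a platform the two functions should produce identical
   instructions, because uint32_t is simply that platform's 32-bit
   unsigned type under another name. */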

You obsess about things that don't matter.
 
Ian Collins

Keith said:
C requires long long to be at least 64 bits, and I have yet to see a
C compiler where it isn't exactly 64 bits.

Even if a platform were to have a long long bigger than 64 bits, there's
int64_t... So there really isn't anything lacking from C in this
context (unless the platform had something obscure like CHAR_BIT=9).
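
For instance, a quick sketch (using <inttypes.h> so the printf format matches
whatever int64_t happens to be on that platform):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    int64_t big = INT64_C(1) << 40;  /* exactly 64 bits, however wide long long is */
    printf("%" PRId64 "\n", big);
    return 0;
}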
 
Ian Collins

Robert said:
The C implementation for the Unisys Clearpath (2200 series) systems,
which has 9-bit chars and ones' complement arithmetic, has 72-bit long
longs.

It would be interesting to see Rick's manual functions for 64-bit ints
for that machine...
 
BartC

David Brown said:
On 08/02/14 23:19, BartC wrote:

You both seem to misunderstand that standards-defined headers, such as
<stdint.h>, /are/ part of the language. They are not "hacks" or "add-ons"

Why are they optional then? If I leave out stdint.h, I get: "error: unknown
type name 'int32_t'". Doesn't look like it's an essential, fundamental part
of the language!

Why do you think it matters if int32_t is defined in a header that comes
as part of the toolchain, or if it is built in like "int"? The language
defines the type "int32_t", and says exactly what it should be. You use
it in the same way regardless of how it happens to be implemented.

I might use 'int32_t' for example, to guarantee a certain bitwidth, when
using 'int' would be just too vague. What's odd is then finding out that
int32_t is defined in terms of int anyway!

It is common in C, and in a great many other languages, for features to
be implemented as part of the standard library rather than inside the
compiler itself.

It's less common for essential primitive types to be defined in a standard
library.

This greatly simplifies the implementation,

Not really. It's just another entry in a symbol table which happens to mean
the same as the equivalent int. However having them in the compiler *would*
simplify (1) the distribution by not needing stdint.h, and (2) a million
user programs which don't need to explicitly include stdint.h.

(When using pre-C99 C, you can write your own int32_t definitions - if it
were built in like "int", you could not do so - or at least, not as
easily.)

Except they wouldn't; they would use s32 or u32 (or i32 and u32 in my case).
'int32_t' et al are just too much visual clutter in a language which already
has plenty.
 
Rick C. Hodgin

Why are they optional then? If I leave out stdint.h, I get: "error: unknown
type name 'int32_t'". Doesn't look like it's an essential, fundamental part
of the language!

Exactly. You understand, Bart.

There is an external requirement in using stdint.h, one that is not part
of the compiler's built-in abilities. It is an add-on hack which was, as
you stated, the easier way to provide apparent bit-size compatibility
without really doing anything. And whereas it may be defined as part
of the official spec ... any developer could've arrived upon that solution
on their own (as I did, and as you indicate below you did with i32 and u32).

I might use 'int32_t' for example, to guarantee a certain bitwidth, when
using 'int' would be just too vague. What's odd is then finding out that
int32_t is defined in terms of int anyway!

Exactly! C is silly in all of its allowances (int will be at least 16-bits,
but can be anything really, just check with the specific implementation and
learn about your compiler author, his whims, his preferences, and the
underlying computer architecture, and then you will know what you're
dealing with ... oh, and if you want an explicit size ... then you're out
of luck unless your compiler is C99 compliant and/or includes the stdint.h
file, or unless you've manually determined that a particular type is a
particular size on a particular platform in a particular version for a
particular build).

"Oh, but it allows for faster executable code!"

I say in reply (with a very specific countenance, well known inflection,
and easily identifiable intonation): "Seriously?"

It's less common for essential primitive types to be defined in a standard
library.

Ding ding ding! That is the winning answer. :)

Not really. It's just another entry in a symbol table which happens to mean
the same as the equivalent int. However having them in the compiler *would*
simplify (1) the distribution by not needing stdint.h, and (2) a million
user programs which don't need to explicitly include stdint.h.

Precisely!


Except they wouldn't; they would use s32 or u32 (or i32 and u32 in my case).
'int32_t' et al are just too much visual clutter in a language which already
has plenty.

It's amazing to me, BartC. It's like there's this total division in what
people here place value on. They either see it the C way (where including
an external file (or several, to obtain other similar features), one which
uses a wholly clunky naming convention, is a good thing), or they recognize
that such information should be part of the innate compiler without any
external requirement.

To be honest, I do not understand this disparity at all, though it resonates
in all great circles to our antipodal point of the universe.

Best regards,
Rick C. Hodgin
 
Tonton Th

answers, of course, but I would like to know what you think the coherent
approach to all these kinds of things would have been.

Using another language, like Perl or INTERCAL.
 
James Kuyper

Why are they optional then? If I leave out stdint.h, I get: "error: unknown
type name 'int32_t'". Doesn't look like it's an essential, fundamental part
of the language!

To enable legacy code, which might contain conflicting definitions of
the same identifiers, to be compilable without any special handling.
Such code will generally NOT contain #include <stdint.h>, so no
modifications are needed for it to compile properly.
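
For instance, a made-up fragment of old code like this keeps compiling
unchanged, precisely because it never includes <stdint.h>:

/* hypothetical pre-C99 code that rolled its own name, back when long
   was 32 bits everywhere this code ran */
typedef long int32_t;   /* no clash, since <stdint.h> is never included */

int32_t counter;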

This isn't the only way they could have achieved that result; a new
standard #pragma would have worked, too. However, <stdint.h> can be
added to an existing implementation relatively easily - it's just normal
C code containing typedefs - the only tricky part is choosing the right
type for each typedef. Adding recognition of a new #pragma, and enabling
special handling of those identifiers depending upon whether or not that
#pragma had been turned on, is a bit more complicated.
 
Ben Bacarisse

And whereas it may be defined as part
of the official spec ... any developer could've arrived upon that solution
on their own (as I did,

I thought your solution was a duplicated set of typedefs that need to be
checked manually? That's not what stdint.h is.

Ding ding ding! That is the winning answer. :)

Seriously? You know enough about what is common in a great many other
languages? I can't say you don't, but I've come to a different
conclusion from my limited experience.

Of course it's a winning answer in one respect -- it can't be wrong,
because any type so defined can be declared to be not primitive.

<snip>
 
BartC

James Kuyper said:
To enable legacy code, which might contain conflicting definitions of
the same identifiers, to be compilable without any special handling.
Such code will generally NOT contain #include <stdint.h>, so no
modifications are needed for it to compile properly.

There must be better ways of dealing with legacy code that don't require (1)
supporting everything that ever existed in the language and (2) new
additions that keep getting uglier and uglier (and longer) to avoid name
conflicts (e.g. '_Bool', 'uint64_t').

(Since most modules need stdio.h, a language version code could be applied
to that.)

(There could also be better ways of getting minimum and maximum type limits.
INT_MAX works, but suppose I have:

typedef int32_t T;

How do I get the maximum value of T in a tidy manner? What I'm saying is
that a trivial addition to the syntax could allow you to say: T'MAX or
short'MIN or long int'BITS, some attribute that can be applied to any type,
without needing include files full of sprawling lists of #defines. However
doing things elegantly doesn't seem to be the C style...)
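
About the closest current C gets, as far as I can tell, is a hand-written
C11 _Generic table that has to spell out every type, which rather proves
the point. A sketch:

#include <limits.h>
#include <stdint.h>

#define MAX_OF(x) _Generic((x), \
    signed char: SCHAR_MAX, short: SHRT_MAX, int: INT_MAX, \
    long: LONG_MAX, long long: LLONG_MAX, \
    unsigned char: UCHAR_MAX, unsigned short: USHRT_MAX, \
    unsigned int: UINT_MAX, unsigned long: ULONG_MAX, \
    unsigned long long: ULLONG_MAX)

typedef int32_t T;
/* MAX_OF((T)0) now yields the right limit for whatever T really is -
   but only because every case was listed out by hand above. */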
 
Jorgen Grahn

The sheer number of things that are or appear to be undefined.

From my fairly narrow perspective, what's left undefined is more or
less what couldn't be defined without:
- a speed penalty on (once) popular architectures
- preventing C from running on architectures which seemed
to be popular and important
- forcing incompatible changes in things that are bigger than
the C language. E.g., it is (or used to be) hard to convince
Unix vendors to put more intelligence in the linker.

Frankly, I don't understand why people are so hung up on the phrase
"undefined" nowadays. It wasn't like that until recently. Some kind
of fashion? Or is there an influx of Java refugees?

I don't think I ever saw it as anything but a necessity -- and I came
to C after having studied more pure and academical languages. I mean,
it's a fact: if you want to implement something like C pointer
arithmetic on (say) a 7MHz Motorola 68000 and you want it to be fast,
you cannot define what happens at out of bounds access. And so on.
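
A minimal illustration of the trade-off (the array here is made up):

#include <stdio.h>

int main(void)
{
    int a[10] = {0};
    int *end = a + 10;   /* fine: pointing one past the end is allowed */
    /* *end = 1;            out of bounds - undefined behaviour, because
                            requiring a bounds check here would cost
                            every correct program something */
    printf("%d\n", a[9]);
    return 0;
}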

/Jorgen
 
Rick C. Hodgin

I thought your solution was a duplicated set of typedefs

Duplicated in what way? With regards to stdint.h? Visual Studio 2008
doesn't come with stdint.h. And, from what I've read, even Visual
Studio 2010, which does come with stdint.h, uses u_int32_t, rather than
uint32_t, for unsigned, so a duplicate typedef is required for code
there as well. And, I'll admit that Microsoft's C99 support is lacking,
so no surprise there.

that need to be
checked manually? That's not what stdint.h is.

Someone had to check the ones in stdint.h. And I would estimate also
that any self-respecting developer would check those. In fact, when
I run configure scripts on Linux source files to build some version of
an application, I almost always see "checking to see integer size" and
other similar messages during the build script.

People know these things vary like hairdos ... so they must always
test for them.

Seriously? You know enough about what is common in a great many other
languages? I can't say you don't, but I've come to a different
conclusion from my limited experience.

I've never used another language where fundamental data types are of
variable size. From assembly through Java, they are a known size.
Only in the land of C, the home of the faster integer for those
crucial "for (i=0; i<10; i++)" loops, do we find them varying in
size.

Of course it's a winning answer in one respect -- it can't be wrong,
because any type so defined can be declared to be not primitive.

Yes, the epitome of the bolt-on [strike]work[/strike]hack-around.

Best regards,
Rick C. Hodgin
 
Ben Bacarisse

BartC said:
(There could also be better ways of getting minimum and maximum type limits.
INT_MAX works, but suppose I have:

typedef int32_t T;

How do I get the maximum value of T in a tidy manner? What I'm saying is
that a trivial addition to the syntax could allow you to say: T'MAX or
short'MIN or long int'BITS, some attribute that can be applied to any
type, without needing include files full of sprawling lists of
#defines.

Yes, that would be handy. I think the C way would be to introduce new
operators like sizeof that can be applied to type names or expressions,
but the syntax is not the issue.

However doing things elegantly doesn't seem to be the C
style...)

The interesting question (to me) is what motivates some changes getting
into the language and not others. I think the committee is conservative
by nature, based, maybe, on a general sense of "it's been OK so far".
Maybe there is a knock-on effect from C99 being ignored by Microsoft.

Perhaps the existence of C++ has taken the pressure off from endlessly
adding new features to C. Fortran, which is even older, has had no fork
of "Fortran++", so everything has gone into the core standard. I think
C benefits from not being C++ despite the fact that I use C++ every time
I actually want to get something done.
 
Robbie Brown

From my fairly narrow perspective, what's left undefined is more or
less what couldn't be defined without:
- a speed penalty on (once) popular architectures
- preventing C from running on architectures which seemed
to be popular and important
- forcing incompatible changes in things that are bigger than
the C language. E.g., it is (or used to be) hard to convince
Unix vendors to put more intelligence in the linker.

Frankly, I don't understand why people are so hung up on the phrase
"undefined" nowadays. It wasn't like that until recently. Some kind
of fashion? Or is there an influx of Java refugees?

I've just completed my first project in C: very simple, very basic first
year computer science stuff - a 'Bounded Array' component, an
encapsulated Linked List, and a Stack and FIFO Queue that 'extended' the
List by using only selected functions that 'make sense' in terms of
those latter two structures. Everything is done with pointers for
maximum flexibility, no assumption is made about the 'type' of data
stored, and there are no hard-coded 'magic numbers'.

I spent 10% of the time figuring out the logic and writing code, and 90%
of the time fretting about whether there was some arcane rule that I had
violated, which meant that although the thing worked as far as I could
tell, there was no way of actually knowing, because unless I read and
inwardly digested the entire language spec something might be 'undefined' ...

This is my particular problem at the moment: something can appear to
work no matter how hard I test it, yet still be 'undefined' in terms of
the language spec ... it just makes me feel uneasy.

Oh yes, and apart from never wearing brown shoes, I don't 'do fashion'.

I don't think I ever saw it as anything but a necessity -- and I came
to C after having studied more pure and academical languages. I mean,
it's a fact: if you want to implement something like C pointer
arithmetic on (say) a 7MHz Motorola 68000 and you want it to be fast,
you cannot define what happens at out of bounds access. And so on.

And how long do you think this can go on for? Will 'the committee' still
be determined to support a Babbage difference engine in 100 years' time?
How can something progress and improve when it has to support everything
that ever existed?

Anyway, I look on this as an interesting academic exercise now. If I
want to be productive I'll use Java; if I want to really exercise the
old gray matter I'll keep on with C. I've just spent the day deconstructing
the (main) stack (frame) from 0x7fffffffefff all the way down to
0x7fffff7af000 ... I would NEVER have bothered before.

Not sure my wife is so enthusiastic though.
 
Rick C. Hodgin

Do you really not get this?

It's backwards, Dr Nick. It should be that int is fixed, and fint is the
fast form which exists on a given platform. If I need speed, I use fint. If I
need a certain size, I use int. And if I want to do the research to find
out how big fint is so I can use it in lieu of int, then I can do that as
well, and then always use fint if it's big enough.
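
For what it's worth, C99's <stdint.h> does spell out roughly this split; it
just hands the short name to the vague type and the long names to the useful
ones, which is my whole complaint:

#include <stdint.h>

int32_t      exact = 0;   /* exactly 32 bits - what I'd want plain "int" to be */
int_fast32_t quick = 0;   /* fastest type of at least 32 bits - my "fint"      */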

C is backwards. It is backwards in this area of unknown sizes with only
minimums guaranteed, and it is backwards in the area of undefined behavior,
where it should have defined behavior and overrides.

That's my position. It's not just lip service. It's where I'm proceeding
from with my position about deficiencies in C, and in my efforts to build
RDC without those deficiencies.

Best regards,
Rick C. Hodgin
 
Ben Bacarisse

Rick C. Hodgin said:
Duplicated in what way?

Written out more than once. In the program I tried, it had them in the
source code of the test program. They are also used elsewhere, so they
must be written out in at least one more place.

Someone had to check the ones in stdint.h.

Yes. Someone who knows all that needs to be known to get the definitions
right. And they do that once for all the thousands of developers who
use the definitions.

If you think this is the same as just writing out your own wherever they
are needed, well, I must bite my tongue.

And I would estimate also
that any self-respecting developer would check those.

Your estimation would be wrong, then.

I've never had another language where fundamental data types are of
variable size.

You've never heard of Fortran, Pascal, C++, Ada, Haskell or Python? Or
did you just not know enough about them to know what the various
standards say about the fundamental types?

From assembly through Java, they are a known size.
Only in the land of C, the home of the faster integer for those
crucial "for (i=0; i<10; i++)" loops, do we find them varying in
size.

No, not only in the land of C.

<snip>
 
Ben Bacarisse

Robert Wessel said:
Off the top of my head, Fortran, Pascal, Modula-2, Lisp, Python and
Ada are examples of languages where the sizes of primitive types are
defined by the implementation, subject to various restrictions.

Snap! (nearly.) I left Lisp off the list because the status of fixnum
has changed a bit over the years and I was not sure where things stood
right now, but I am sure you are right that it is not 100% prescribed.
 
