Syntax for union parameter

Discussion in 'C Programming' started by Rick C. Hodgin, Jan 29, 2014.

  1. Am I to understand that C itself does not even have the defined types as
    first class citizens? That's what the _t means, isn't it? That it's not
    a native type, but it is some extension provided for by an include file.
    [Virtual pause]. I just went and looked at the file found on this page
    as an add-on for the missing include file in Visual Studio ... it is
    nothing but a bunch of typedefs, and a lot of other things I doubt I'd
    ever use.

    http://code.google.com/p/msinttypes/

    Well ... I did the same thing without knowing about int32_t and so on
    because Visual Studio 2008 and earlier do not have that file.
    I have brought forward my definition style from older C compilers (before
    1999), and I have always had those typedef forms working. I've had to
    tweak them in various versions to get them to work, but the resulting code
    I had written using s32 and u32 never had to be changed.
    RDC will always support f32, f64, and I've given long consideration to f80
    on the x86. However, I also plan to incorporate the DD and QD (double-double
    and quad-double) libraries, as well as MPFR's floating point library, so for
    larger precision formats I will probably just use those, incorporating the
    native types f128 and f256.

    Double-double, Quad-double, and MPFR:

    http://crd-legacy.lbl.gov/~dhbailey/mpdist/
    http://www.mpfr.org/

    Eventually I will write a replacement for MPFR using my own algorithms.
    Or I could use s32 and u32 in RDC, or my own typedef'd names in C. In
    Visual Studio 2008 and earlier I can't do that without defining them
    manually (because Microsoft did not provide the header until Visual
    Studio 2010), which is what I've done. VS2008 and earlier have neither
    the include file nor the types which equate to those clunky _t names.

    Basically, I did what the C authors did ... which I think was a horrid
    solution as well, by the way, because it requires something I should never
    need to do in a language in the age of 16-bit, 32-bit, and 64-bit computers
    running the Windows operating system (which is what Visual Studio's C
    compilers target). Those variables should all be native types ... but C
    doesn't provide them; they were not included until C99, and even then only
    as typedefs (the same as what I did).

    I say ... pathetic that C does not natively define these types as first
    class citizens (without the _t).

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014

  2. Rick C. Hodgin

    James Kuyper Guest

    Why? I don't need to know any of that information to write my code. When
    I need exactly 32 bits, I don't use int, I use int32_t. I only use int
    when I have to because of an interface requirement (such as fseek()), or
    because I need the equivalent of int_fast16_t. Similarly, short is
    roughly equivalent to int_least16_t, and long is roughly equivalent to
    int_fast32_t. When the requirements for those types meet my needs, I'll
    use the C90 type name, because it requires less typing; in all other
    cases, I'll use the <stdint.h> typedefs (if I can).
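
    For illustration only (a minimal sketch, not code from any project of
    mine - the variable names are made up), the distinction looks like this:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t       crc     = 0;   /* exactly 32 bits, no more, no less */
        int_least16_t counter = 0;   /* smallest type with >= 16 bits     */
        int_fast32_t  index   = 0;   /* "fast" type with >= 32 bits       */

        printf("%d %d %d\n", (int)crc, (int)counter, (int)index);
        return 0;
    }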

    When my code needs to know the size in bytes, it uses sizeof(int); when
    it needs to know the size in bits, I multiply that by CHAR_BIT - but
    that's what my program knows - there's no need for me to know it. I
    happen to know that int is 32 bits on some of the machines where my code
    runs, and 64 bits on some of the others, but I don't make use of that
    information in any way while writing my code. It's correctly written to
    have the same behavior in either case (it would also have the same
    behavior if int were 48 bits, or even 39 bits).
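
    Again purely as a sketch (nothing here is specific to any particular
    machine), the program can work those sizes out for itself:

    #include <limits.h>   /* CHAR_BIT */
    #include <stdio.h>

    int main(void)
    {
        printf("int is %zu bytes, %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }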

    I know almost nothing about the underlying architecture of the machines
    my program runs on. That's not quite true - twenty years ago, I knew a
    lot about a remote ancestor of those machines, because I had to write
    some assembly code targeting that ancestor. I wouldn't be surprised to
    find that much of what I knew at that time has remained valid - but
    neither would I be surprised to learn that there have been major changes
    - but I don't need to know what those changes are in order to write my
    programs, and in fact I don't know what those changes have been. Yet I'm
    happily writing programs without that knowledge, and they work quite
    well despite that ignorance. That's because the compiler writers know
    the details about how the machine works, saving me from having to know
    them. That's good, because the coding standards I work under prohibit me
    from writing code that depends upon such details.
     
    James Kuyper, Feb 8, 2014

  3. I used that phrase because it conveys what we all use today. It's
    the only form I'm aware of, actually. When I make changes in an IDE for
    certain options, for example, they still relate exactly to what's issued
    on the command line as a real switch.

    In my specification I will probably still refer to it as a command line
    switch, since there is at some point a command that's invoked (to compile),
    and there are switches which are provided to turn things on or off.
    I know.
    They allow syntax to be compiled which operates differently depending on
    the compiler author's whims. I believe there was an example earlier like
    this, where the results of the operation were dependent upon their order:

    int a=5, b;
    b = foo1(&a) + foo2(&a);

    Will foo1() or foo2() be called first?
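
    To make that snippet self-contained (foo1 and foo2 are of course made-up
    names), here is a sketch in which the two permitted answers differ, so
    the point is visible:

    #include <stdio.h>

    /* Each call both reads and modifies *p, so the sum depends on
       which call the compiler chooses to perform first.            */
    static int foo1(int *p) { *p += 1; return *p; }
    static int foo2(int *p) { *p *= 2; return *p; }

    int main(void)
    {
        int a = 5, b;
        b = foo1(&a) + foo2(&a);   /* order of the two calls is unspecified  */
        printf("b = %d\n", b);     /* 18 if foo1 runs first, 21 if foo2 does */
        return 0;
    }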
    Yeah. We're in different camps. I offer what I offer. People will either
    see value in it or not. I'm content either way because I am writing what
    I am writing for God, using the skills He gave me, so that I can give back
    to those people on this Earth who will use these tools.
    The entire world is going to follow the anti-Christ when he comes (all
    except those who are being saved). That doesn't make the vast majority
    of people who follow evil right ... it just makes them followers.

    Following a correct path is a real thing in and of itself. And having
    explicit data types as first class citizens, and being able to rely upon
    an order of computed operation regardless of platform or compiler version,
    is a correct path.

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
  4. Rick C. Hodgin

    James Kuyper Guest

    On 02/08/2014 12:18 PM, Rick C. Hodgin wrote:
    ....
    Then you're using the wrong language. C doesn't allow you to specify how
    your instructions are carried out; it only allows you to specify the
    desired result. It's deliberately designed to let implementations
    targeting hardware where one way of doing things is too difficult use a
    different way instead. All that matters is whether the result meets the
    requirements of the standard, not how that result was achieved.
     
    James Kuyper, Feb 8, 2014
  5. Visual Studio 2008 and earlier do not include that file. However, I
    was able to find a version online that someone wrote to overcome that
    shortcoming in the Microsoft tool chain. I discovered it does exactly
    what I do in my manual version. And mine has the added advantage of
    not requiring another #include file to parse at compile time, making
    turnaround time faster, and the generated program database smaller.
    Minor features to be sure, but real ones nonetheless.
    C doesn't have fixed-size types natively. Not even C99. It has an add-on
    typedef hack which allows a person to use "fixed-size types" within its
    variable-sized arsenal. But as someone else pointed out, there are some
    platforms which may not be able to implement those types because they are
    not supported on the hardware, or in the compiler. In such a case, because
    C does not require that 64-bit forms be supported, for example, I end up
    writing manual functions to overcome what C is lacking.
    I use the C++ compiler for my C code because it has syntax relaxations.
    I do not use C++ features, apart from those relaxations, and perhaps a
    handful of features that are provided in C++ (such as anonymous unions).
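
    (For anyone unfamiliar with that feature: an anonymous union - a
    long-standing Microsoft extension that later became standard in C11 -
    looks something like this; the member names are only illustrative:)

    #include <stdio.h>

    struct value
    {
        int type;          /* says which member is currently active */
        union              /* anonymous: members are used directly  */
        {
            int   i;
            float f;
        };
    };

    int main(void)
    {
        struct value v;
        v.type = 0;
        v.i    = 42;       /* no intermediate union name needed */
        printf("%d\n", v.i);
        return 0;
    }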
    I began working on my project in the mid-1990s. I have worked on it in
    varying degrees since then, but I have done a major push since 2009.

    stdint.h does not come with Visual Studio 2008, which is the version I
    personally use. And WinDef.h does not have facilities to access types
    by size, only by name.

    -----
    As for the rest of your post, you are very insulting to me and I do not
    desire to communicate with you further.

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
  6. No, the _t is merely a suffix that the designers of <stdint.h> chose to
    use. (C++ has at least one built-in type whose name is a keyword ending
    in "_t".)
    It's not a native type, but neither is it an "extension"; it's a
    standard feature of ISO C that happens to be provided by a header rather
    than by the compiler.

    Actually, let me rephrase that. int32_t is an *alias* for some native
    (compiler-implemented) type.
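
    On a typical implementation where int is 32 bits, the relevant lines of
    <stdint.h> might be nothing more exotic than (illustrative only - the
    exact spelling varies between implementations):

    typedef int          int32_t;
    typedef unsigned int uint32_t;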
    That was certainly a reasonable approach at the time (though since
    <stdint.h> was added to the standard in 1999, and had been discussed
    before that, a little research might have saved you some work). If
    you're still using Visual Studio 2008, then I suppose you'll need to use
    something that provides features similar to those provided by
    <stdint.h>. If not, you can probably just use <stdint.h> exclusively
    and not worry about it. (Or you can continue using your old solution;
    revised versions of the C standard place a strong emphasis on backward
    compatibility.)
    And if you use <stdint.h>, you'll never have to do that tweaking again.
    Your C implementation will already have done it for you.

    [...]
    It doesn't matter whether int32_t is implemented directly by the
    compiler or indirectly as a typedef in a standard header. I'd ask why
    you have this phobia about typedefs, but I'm afraid you'd tell me.
     
    Keith Thompson, Feb 8, 2014
  7. Hence RDC, James.

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
  8. It doesn't exist in Visual Studio 2008 or earlier. I have to manually
    download a commensurate file and install it (and test it, because you
    never know with C for sure until you test it, right?), which is basically
    what I did with my own typedefs. I just replaced the C99 clunky naming
    convention typedefs with my own short, elegant forms.
    I prefer to use a certain type, and to know what it is I'm using without
    question. If some bit of code fails for some reason, then I can see that
    in my code it is correct, but that the function I'm using needs a different
    size.
    I think the need for sizeof(int) is representative of the things that
    are wrong with C.
    You will know with RDC, though I doubt you'll ever use it. I'm reminded
    of a quote from the movie "Switching Channels" about a man who worked on
    the newspaper, addressing the program manager of a television news show:

    "I only keep a TV around to entertain the cat."

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
  9. I think typedefs are brilliant. I include them in RDC. They have uses
    galore and I use them directly and indirectly in my code.

    I think it's beyond lunacy, though, to provide explicit-sized types only
    by typedef'ing the variable-sized native types, when they should be
    expressly provided for by the language and not through a typedef add-on
    hack.

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
  10. Rick C. Hodgin

    David Brown Guest

    Yes, I get it - if you have overrides like this, you do not have a standard.

    Sometimes I wonder whether or not English is your mother tongue - you
    certainly seem to have different definitions for some words than the
    rest of the world.
    If the programmer writes code that depends on an assumption not
    expressed in the C standards, then he is not writing valid C code. This
    is no different from any other language.

    For many of the implementation-defined points in the C standards, the
    particular choices are fixed by the target platform, which often lets
    you rely on some of these assumptions. And often there are ways to
    check your assumptions or write code without needing them - compilers
    have headers such as <limits.h> so that your code /will/ work with each
    compiler.
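
    For example (just a sketch), an assumption such as "int is at least 32
    bits" can be checked at compile time rather than silently relied upon:

    #include <limits.h>

    #if INT_MAX < 2147483647
    #error "This code assumes int is at least 32 bits"
    #endif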

    Just because you don't know how to do such programming, does not mean C
    does not support it.
    Are you really trying to say it is "lunacy" to rely on Linux features
    when writing Linux-specific code, or Windows features when writing
    Windows-specific code?
    The huge majority of numbers in most programs are small.

    And if you are writing a program that needs to count 100,000 lines of a
    file, then you can be confident it will be running on a system with at
    least 32-bit ints, and can therefore happily continue using "int".
    Marvellous idea - let's make some fixed standards, then tell all the
    compiler implementers to break those standards in different ways. We
    can even standardise command-line switches used to choose /how/ the
    standards are to be broken. Perhaps we can have another command line
    switch that lets you break the standards for the command line switches.

    Again, you have no concept of how the processor market works, or what
    processors are used.

    If you want to restrict your own language to a particular type of
    processor, that's fine - but you don't get to condemn C for being more
    flexible.
    Again, you have no concept of what you are talking about. Yes, there
    are ARM devices for less than $1 - about $0.50 is the cheapest available
    in very large quantities. They don't have video, networking, etc. -
    for that, you are up to around $15 minimum, including the required
    external chips.

    Putting an 8-bit cpu on a chip, on the other hand, can be done for
    perhaps $0.10. The same price will give you a 4-bit cpu with ROM in a
    bare die, ready to be put into a cheap plastic toy or greeting card.

    Yes, 32-bit cpus (but not 64-bit cpus) are available at incredibly low
    prices. No, they do not come close to competing with 8-bit devices for
    very high volume shipments.

    I will certainly agree that most /developers/ are working with 32-bit
    devices - and most embedded developers that worked with 8-bit or 16-bit
    devices are moving towards 32-bit devices. I use a lot more 32-bit
    chips than I did even 5 years ago - but we certainly have not stopped
    making new systems based on 8-bit and 16-bit devices.

    If I were designing a new language, I would pick 32-bit as the minimum
    size - but I am very glad that C continues to support a wide range of
    devices.
    Exactly - it is nonsense.

    Will your specifications say anything about the timing of particular
    constructs? I expect not - it is not fully specified. In some types of
    work it would be very useful to be able to have guarantees about the
    timing of the code (and there are some languages and toolsets that give
    you such guarantees). For real-time systems, operating within the given
    time constraints is a requirement for the code to work correctly as
    desired - it is part of the behaviour of the code. Will your
    specifications say anything about the sizes of the generated code? I
    expect not - it is not fully specified. Again, there are times when
    code that is too big is non-working code. Will your specifications give
    exact requirements for the results of all floating point operations? I
    expect not - otherwise implementations would require software floating
    point on every processor except the one you happened to test on.


    People who write code like "a = a[i++]" should be fired, regardless
    of what a language standard might say.
    You throw around insults like "insane" and "lunacy", without making any
    attempt to understand the logic behind the C language design decisions
    (despite having them explained to you).
    It's odd that you use the example of computing the sum of two numbers as
    justification for your confused ideas about rigid specifications, when
    it was previously given to you as an example of why the C specifications
    leave things like order of operation up to the implementation - it is
    precisely so that when you write "x + y", you don't care whether "x" or
    "y" is evaluated first, as long as you get their sum. But with your
    "rigid specs" for RDC, you are saying that you /do/ care how it is done,
    by insisting that the compiler first figures out "x", /then/ figures out
    "y", and /then/ adds them.

    Note, of course, that all your attempts to specify things like ordering
    will be in vain - regardless of the ordering generated by the compiler,
    modern processors will re-arrange the order the instructions are carried
    out.
    I am not convinced. I suspect you are a Turing test that has blown a
    fuse :)
    Why? You have absolutely no justification for insisting on 32-bit ints.
    If I am writing code that needs a 32-bit value, I will use int32_t.
    (Some people would prefer to use "long int", which is equally good, but
    I like to be explicit.) There are no advantages in saying that "int"
    needs to be 32-bit, but there are lots of disadvantages (in code size
    and speed).
    This means that the same code has different meanings depending on the
    compiler flags, and the target system. This completely spoils the idea
    of your "rigid specifications" and that "RDC code will run in exactly
    the same way on all systems". It is, to borrow your own phrase, lunacy.
     
    David Brown, Feb 8, 2014
  11. Rick C. Hodgin

    Ian Collins Guest

    In much the same way as C doesn't have I/O except through an add-on
    library hack? Come on, get real.
    That's a lot of work. Implementing a 64-bit int on something like an
    8051 isn't too big a job, but adding an 8-bit char on a DSP which can
    only address 16- or 32-bit units is more of a challenge (especially if
    you include pointer arithmetic) and will run like a two-legged dog.

    Now if you were using C++, you could create your own types without
    having to write your own compiler. It looks to me that you could create
    your desired language fairly easily as a C++ library.
     
    Ian Collins, Feb 8, 2014
  12. Rick C. Hodgin

    James Kuyper Guest

    On 02/07/2014 05:19 PM, Rick C. Hodgin wrote:
    ....
    Since so many other people have made the same decision deliberately,
    because they believed it was a good idea, I do consider it unlikely.
    I'm posting to the newsgroup, not to you.

    I'm not quite sure why I've continued to do so, but you'll be pleased to
    know that I've decided to stop.
    I may mention your idiocies from time to time - you make an excellent
    example of how not to think - but I've decided to spare myself the pain
    of reading your future messages.
     
    James Kuyper, Feb 8, 2014
  13. I think you don't know that C has integer types of known widths (if the
    machine has them at all).

    But you have just lost track of what is being discussed: "The cases
    where one needs to know are very rare" refers to what you claimed every
    programmer needs to know, which is the bit-size of int. Fixed file
    formats are not a counter-example to this.
    No, you have shown that you don't know the language and have tried, not
    very successfully, to re-invent the wheel. I would not normally put it
    so harshly, but you are so convinced that you are right about everything
    that it seems the appropriate tone to take.
    And elsewhere in your code base, it seems.
    If you were using modern C that would be true as well. (I see Keith
    Thompson has written a calmer reply that gives you the details.)

    Just an aside: that's a terrible reason to put efficiency to one side.
    Computers are always pushed to the limit of what they can do, no matter
    how fast they are. My tablet struggles with many websites. It would
    not be any faster had all the programmers taken more care all the way
    down, because the saving would simply mean that people would write more
    complex websites - but that in itself is a gain.

    Yes, but that is as yet an uncracked nut, despite decades of
    research. Something tells me, though, that you will have a solution to
    it.

    Yes, but why is it the best tool? It seems unlikely on the face of it.

    <snip>
     
    Ben Bacarisse, Feb 8, 2014
  14. Rick C. Hodgin

    David Brown Guest

    These are not "minor features" - they are irrelevant "features". It
    makes no conceivable difference whether "int32_t" is defined by the
    compiler using a standard header, or defined by the compiler as part of
    the compiler's executable file.

    Some parts of C are defined as part of the core compiler language, while
    other parts are defined as part of the standard library and headers. It
    is all C - and int32_t and friends have been part of the C standards
    since C99.
    These are not "syntax relaxations" - they are differences you get from
    using a different programming language. Some of the language features
    in C99 were copied from C++ (at least roughly), such as "inline", mixing
    declarations and statements, // comments, etc. Sane C programmers use
    modern C toolchains rather than MSVC++, and thus get these features from
    C99. You are programming in C++, but skipping half the language.

    The sizes of the types in WinDef.h are, AFAIK, fixed - even though the
    names don't include the sizes.

    In my early C programming days, I too used my own set of typedefs for
    fixed size types - as did everyone else who needed fixed sizes. I still
    have code that uses them. But I moved over to the <stdint.h> types as
    C99 became common.

    It is fine to keep doing that for long-lived projects, to save changing
    the rest of the code (though personally I find "s32" style names ugly).
    But I would recommend you make your typedefs in terms of <stdint.h>
    types, as those are part of the modern C language - it makes it easier
    to maintain consistency if you change compiler or platform (that's why
    the <stdint.h> types exist).
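
    Concretely, something along these lines (using your own short names as
    the example):

    #include <stdint.h>

    /* Keep the short project-specific names, but define them in terms of
       the standard fixed-width types, so they stay correct if the compiler
       or platform changes.                                                 */
    typedef int32_t  s32;
    typedef uint32_t u32;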
     
    David Brown, Feb 8, 2014
  15. Rick C. Hodgin

    James Kuyper Guest

    On 02/08/2014 04:49 PM, Ian Collins wrote:
    ....
    You mean, like Dominic? <>. :)
     
    James Kuyper, Feb 8, 2014
  16. But, as you say, you can rely on the specification.
     
    Ben Bacarisse, Feb 8, 2014
  17. Rick C. Hodgin

    BartC Guest

    (I suppose we're all different. In my own syntaxes, I don't have anything
    like typedef for creating aliases at all.

    I have a way of creating actual new types, but aliases for existing types
    are specifically excluded (because they would be an actual new type derived
    from the original, not an alias, causing all sorts of problems).

    When I do sometimes need a simple alias for a type, I just use a macro.
    That isn't a solution for C, because some complex types have to be
    wrapped around any name they are used to define; you can't just put a
    type name adjacent to a variable name.

    I don't know if your language copies C's convoluted type-specs; that is a
    genuine shortcoming of the language which I suggest you try and avoid.)
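
    (To illustrate the C problem I mean - the names here are just examples:
    a function-pointer type has to wrap around the name being declared, so a
    plain textual macro cannot stand in for a typedef:)

    /* With a typedef, the alias behaves like any other type name: */
    typedef void (*handler_t)(int);
    handler_t on_signal;               /* fine */

    /* A macro only pastes text, and this text cannot simply sit next
       to a variable name:                                            */
    #define HANDLER void (*)(int)
    /* HANDLER on_click;        -- does not compile                   */
    void (*on_click)(int);             /* you have to write it out    */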
    I thought it was odd too. Obviously just bolting on a set of typedefs was
    simpler than adding them properly to the implementation.
     
    BartC, Feb 8, 2014
  18. I used to think the way I/O was handled in C was a hack (back in the 1990s),
    but I no longer think that way.
    RDC defines a particular sequence of processing operations and introduces
    new features for flow control, multi-threading, and exception handling. It
    also has some requirements relating to its foundational association with
    the graphical IDE, needs relating to edit-and-continue abilities and the
    debugger, a new ABI, and a target that RDC will later be compiled within
    itself.

    Most features of C++ will be dropped in RDC.

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
  19. Rick C. Hodgin

    Ian Collins Guest

    Such as being able to define a particular sequence of processing
    operations, multi-threading and exception handling?
     
    Ian Collins, Feb 8, 2014
  20. RDC explicitly defines the order of operation for data processing, logic
    tests, function calls, and passed parameters; it allows multiple return
    parameters, the insertion of generic code at any point via something I
    call a cask, and much more. There will never be any ambiguity about what
    is processed and in what order.

    As for the rest, no semicolons required, but they can still be used:

    -----
    Multi-threading:
    in (x) {
        // Do some code in thread x
    } and in (y) {
        // Do some code in thread y
    } and in (z) {
        // Do some code in thread z
    }
    tjoin x, y, z   // Wait until x, y, and z threads complete
                    // before continuing...
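
    (For comparison only, here is a rough sketch of the same fan-out-and-join
    shape in standard C11 <threads.h>; the worker names are made up and this
    is not how RDC is implemented:)

    #include <stdio.h>
    #include <threads.h>

    static int worker_x(void *arg) { (void)arg; puts("in thread x"); return 0; }
    static int worker_y(void *arg) { (void)arg; puts("in thread y"); return 0; }
    static int worker_z(void *arg) { (void)arg; puts("in thread z"); return 0; }

    int main(void)
    {
        thrd_t x, y, z;
        thrd_create(&x, worker_x, NULL);
        thrd_create(&y, worker_y, NULL);
        thrd_create(&z, worker_z, NULL);

        /* the equivalent of: tjoin x, y, z */
        thrd_join(x, NULL);
        thrd_join(y, NULL);
        thrd_join(z, NULL);
        return 0;
    }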

    -----
    Exception handling can be done using a try..catch-like construction,
    or you can insert casks which allow exceptions to be thrown to specific
    destinations depending on where you are in source code. The generic
    (|mera|) cask allows for insertion at any point, to indicate that if an
    exception is thrown at that part of the command, then it will go to the
    indicated destination.

    The flow { } block allows for many more features, and there are a lot of
    other new constructs as well.

    flow {
        // By default, any error will trap to error (like try..catch)
        do_something1() (|mera|flowto error1||)   // Will trap to error1 here
        do_something2() (|mera|flowto error2||)   // Will trap to error2 here

        subflow error1 {
        }

        subflow error2 {
        }

        error (SError* err) {
        }
    }

    Best regards,
    Rick C. Hodgin
     
    Rick C. Hodgin, Feb 8, 2014
