Syntax for union parameter

Discussion in 'C Programming' started by Rick C. Hodgin, Jan 29, 2014.

  1. David Brown Guest

    Yes, I meant that he doesn't /intentionally/ use C++. At best, he uses
    a C++ compiler as "a better C" (for some definition of "better" that
    probably only applies when using MSVS).
    David Brown, Feb 10, 2014

  2. David Brown Guest

    Yes, but I don't! I was about ten when I played with p-code Pascal.

    As another example of the fuzzy boundaries, the format string for
    "printf" is arguably an interpreted mini-language.
    David Brown, Feb 10, 2014

  3. David Brown Guest

    That's what it boils down to - in general, well-written C and a good
    compiler will outperform hand-written assembly, especially on modern
    processors. But there are exceptions - areas where assembly can give
    higher speeds, and where these higher speeds are worth the extra effort.
    A bytecode dispatcher is definitely one such case. (Actually,
    hand-written assembly is probably a poor choice here - something that
    generates specialised assembly is likely to give better results for less
    effort.)

    You have to be very careful, however - even if the same assembly works
    as expected on different x86 cpus, the tuning for maximum speed can be
    very different. It would not be a surprise to see that method "B" above
    was faster than "A" on some cpus. A compiler can tune to different cpus
    using different flags, and can even make multiple different variations
    that are chosen at run-time for optimal speed on the cpu in use at the
    time. This can all be done in assembly too, of course, but the
    development, testing, and maintenance costs go up considerably.
    David Brown, Feb 10, 2014
  4. David Brown Guest

    Agreed. Of course, people also "optimise" their C code (changing array
    operations into pointers "for speed", and so on) - they too are evil.

    I find it useful to examine the generated assembly from time to time.
    Much of my programming is for small processors - sometimes it only takes
    a few changes to make significant differences. But I find that I need
    to do far less "target-optimised" C programming than I did a decade ago
    - partly due to better small microcontrollers, and partly due to better
    compiler tools. I still have some systems where it can make a
    significant difference using something like a "while" loop rather than a
    "for" loop, but thankfully these moments are rare.
    David Brown, Feb 10, 2014
  5. Yes. It has highlighted a bug in the Whole Tomato Visual Assist X tool:

    If you search for "#include" in that forum, you'll find the reference.
    When I renamed the .cpp file I #include to .h, the bug in VAX went away.

    Best regards,
    Rick C. Hodgin
    Rick C. Hodgin, Feb 10, 2014
  6. The #define _TEST_ME line is commented out.
    It does in Visual Studio. I'll tweak what's missing for GCC and push it
    back up at some point. Thank you for reporting the bug.
    I'm a hard man to know.
    Perhaps. If they were to look at the remainder of my projects, or search
    for the text (including the quote) sha1.cpp", they would find the
    references and see how it's used.
    Yes. It's not C. The extension on sha1.cpp is .cpp.
    Then I wouldn't worry about trying to do so. It would be a waste of time.

    Best regards,
    Rick C. Hodgin
    Rick C. Hodgin, Feb 10, 2014
  7. I don't know, I've never tried it. My concerns were more along the lines
    of multiple installed cross-compilers all based on GCC, for example. I
    have one for x86, one for ARM, one for some other CPU, all running on my
    x86 box. In those cases, the #include files are probably very similar
    (because they are all GCC), but there are likely subtle differences, such
    as where int32_t maps to.

    Picking up the wrong include file path in that case may not be such a
    difficult thing to do ... and it would take a bit of tracking down to sort
    out the cause.

    It's never happened to me though (that I remember). But I can see it being
    a possibility.
    If used, header files are necessary for compilation, and compilation is
    necessary for execution, so in that way ... yes.

    Best regards,
    Rick C. Hodgin
    Rick C. Hodgin, Feb 10, 2014
  8. ARM also provides an optional module which executes Java byte codes directly
    in hardware.

    That doesn't change the fact that they are interpreted on all other machines,
    and were always interpreted prior to those specialized chips coming into
    existence.

    Java's bytecodes were created to run in the Java virtual machine on any
    hardware using the Java program which provides the standardized environment.
    It is very similar to what I'm doing with my virtual machine.

    Best regards,
    Rick C. Hodgin
    Rick C. Hodgin, Feb 10, 2014
  9. Yes. And that's all I was saying.
    I believe GCC also supports the -I command line switch to provide the path
    to include files, and -L for lib.
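    A hypothetical invocation showing both switches (all the paths and names
    here are invented for illustration):

```shell
# Search ./include for headers (-I) and ./lib for libraries (-L);
# -lfoo then links against a libfoo found on that library path.
gcc -I./include -L./lib -o myprog main.c -lfoo
```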

    Best regards,
    Rick C. Hodgin
    Rick C. Hodgin, Feb 10, 2014
  10. James Kuyper Guest

    On 02/10/2014 07:00 AM, Robert Wessel wrote:
    Even simpler:

    #if INT_MAX + INT_MIN
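    In context, that directive distinguishes two's complement from the other
    representations: INT_MAX + INT_MIN is -1 (nonzero, so true) only when
    INT_MIN == -INT_MAX - 1. A sketch of how it might be used:

```c
#include <limits.h>

/* INT_MAX + INT_MIN is -1 (true) on two's complement, where
   INT_MIN == -INT_MAX - 1, and 0 (false) on ones' complement
   and sign-magnitude, where INT_MIN == -INT_MAX. */
#if INT_MAX + INT_MIN
#define TWOS_COMPLEMENT 1
#else
#define TWOS_COMPLEMENT 0
#endif
```

    On virtually all current hardware the first branch is taken.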
    James Kuyper, Feb 10, 2014
  11. Kaz Kylheku Guest

    It appears you have a position about a written work (the C standard, in its
    various editions).

    Have you ever seen at least its cover page?

    C has support for integer types of exact sizes. It didn't in 1990; they
    were added in the 1999 revision of the C standard.
    There is a header <stdint.h> which declares various typedefs for them,
    and you can test for their presence with macros.

    If your program requires a 32 bit unsigned integer type, its name is
    uint32_t. If you're concerned that it might not be available, you can test
    for it. (Believe it or not, there exist such computers that don't have a 32
    bit integral type natively: historic systems like 36-bit IBM mainframes,
    and some DEC PDP models.)

    Programs can be written such that the exact size doesn't matter.
    For instance a library which implements bit vectors can use any unsigned
    integer type for its "cell" size, and adjust its access methods accordingly.
    I use a multi-precision integer library whose basic "radix" can be any one
    of various unsigned integer sizes, chosen at compile time. It will build
    with 16 bit "digits", or 32 bit "digits", or 64 bit "digits".
    In principle, it could work with 36 bit "digits".

    C is defined in such a way that it can be efficiently implemented in
    situations that don't look like a SPARC or x86 box.

    Thompson and Ritchie initially worked on a PDP-7, a machine with 18-bit
    words. To specify sizes like 16 and 8 would ironically have made C poorly
    targetable to its birthplace.

    C is not Java; C targets real hardware. If the natural word size on some
    machine is 17 bits, then that can readily be "int", and cleverly written
    programs can take advantage of that while also working on other machines.

    Just because you can't deal with it doesn't mean it's insane or wrong;
    maybe you're just inadequate as a programmer.
    Kaz Kylheku, Feb 10, 2014
  12. David Brown Guest

    I have at least 30 different gcc cross-compilers on my oldest PC
    currently in use, plus 15 or so non-gcc cross-compilers (this is
    counting different versions of tools for the same target - I only have
    about 10 different targets). I have never seen - or even heard of - the
    sorts of problems you are worrying about here. It is only even a
    /possibility/ if you deliberately and intentionally go out of your way
    to cause yourself problems.

    What /is/ a possibility, however, is that you get your IDE pointing to
    the wrong include paths - you often need to do that somewhat manually
    for cross compilers. But that won't affect compilation, and anyway ARM
    and x86 have the same sizes for their integers (assuming you really mean
    x86, and not amd64).
    "Necessary for execution" means that the files are needed at run-time,
    which is not the case for headers (Dr. Nick's question was rhetorical).
    David Brown, Feb 10, 2014
  13. David Brown Guest

    Good explanation - I was a little surprised to see you write "at least".
    That would certainly work, as would James' version, but a nice, obvious
    pre-defined symbol would be clearer. A lot of implementation-defined
    behaviour could be covered this way, with a final "__SANE_ARCHITECTURE"
    symbol being defined for processors with two's complement, 8-bit chars,
    clear endian ordering, etc.

    Incidentally, don't the standards allow two's complement signed integers
    ranging from -INT_MAX to +INT_MAX, with -(INT_MAX + 1) being undefined
    behaviour? I have a hard time imagining such an architecture in practice.
    David Brown, Feb 10, 2014
  14. James Kuyper Guest

    Yes, and my test doesn't handle that issue any better than his does. A
    test that deals with that issue, the possibility of padding bits, and
    the complete lack of restrictions on bit-ordering, gets somewhat
    complicated, and can't be done in the pre-processor. A way of checking
    directly would help.
    James Kuyper, Feb 10, 2014
  15. [...]

    Implementing arithmetic on foreign-endian integers strikes me as
    a waste of time. If I need to read and write big-endian integers
    on a little-endian machine (a common requirement, since network
    protocols typically use big-endian), I can just convert big-
    to little-endian on input and little- to big-endian on output.

    And POSIX provides htonl, htons, ntohl, and ntohs for exactly that
    purpose ("h" for host, "n" for network).

    There are no such functions that will convert between big-endian
    and little-endian on a big-endian system, but such conversions are
    rarely needed (and it's easy enough to roll your own if necessary).
    Keith Thompson, Feb 10, 2014
  16. Fixed. Pushed. Compiles in GCC for x86 on Windows, and VS 2008.

    Please report any additional bugs if you'd like.

    Best regards,
    Rick C. Hodgin
    Rick C. Hodgin, Feb 10, 2014
  17. Yes, the most negative value can be a trap representation. It lets an
    implementation use that value as a distinguished representation; it
    might implicitly initialize all int objects to that value, making
    detection of uninitialized variables easier.

    Making it a trap representation doesn't mean that references to it must
    "trap". You could take an existing implementation, change the
    definition of INT_MIN, document that the representation that would
    otherwise have been INT_MIN is a trap representation, and still have a
    conforming implementation. The fact that operations on that value would
    still "work" is within the bounds of undefined behavior.
    Keith Thompson, Feb 10, 2014
  18. James Kuyper Guest

    True, but what some people are worried about is the result of applying
    bit-wise operators to negative values, and neither of those tests covers
    that issue properly. I'd recommend strongly against applying those
    operators to signed values, but not everyone follows that recommendation.
    James Kuyper, Feb 10, 2014
  19. (snip)
    The DEC 36 bit machines are twos complement, the IBM machines
    sign magnitude. Would be nice to have a C compiler for the 7090
    so we could try out sign magnitude arithmetic in 36 bits.

    -- glen
    glen herrmannsfeldt, Feb 10, 2014
  20. What's the problem with testing -1 & 3? You get 3, 2 or 1 depending on
    whether ints are represented using 2's complement, 1's complement or
    sign-magnitude.
    Ben Bacarisse, Feb 11, 2014
