How do I convert an int to binary form?

Discussion in 'C Programming' started by cedarson@gmail.com, Feb 10, 2006.

  1. Guest

    I am having trouble writing a simple code that will convert an int (
    487 for example ) to binary form just for the sake of printing the
    binary. Can someone please help? Thanks!
    cedarson@gmail.com, Feb 10, 2006
    #1

  2. osmium Guest

    <> wrote:

    >I am having trouble writing a simple code that will convert an int (
    > 487 for example ) to binary form just for the sake of printing the
    > binary. Can someone please help? Thanks!


    Use the left shift operator and the & operator. Skip the leading zeros and
    when you first encounter a 1 in the leftmost position, print a char '1'.
    Keep shifting and printing '1' or '0' depending on the int.
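
    A minimal sketch of that hint (an illustration, not osmium's code), assuming an
    unsigned value and taking the width from CHAR_BIT in <limits.h>:

    #include <limits.h>
    #include <stdio.h>

    /* Left-shift the value and test the top bit with &; stay quiet until
       the first 1 bit is seen, then print '1' or '0' for every bit. */
    static void print_binary(unsigned int n)
    {
        const int width = (int)(sizeof n * CHAR_BIT);
        const unsigned int top = 1u << (width - 1);
        int started = 0;
        int i;

        for (i = 0; i < width; i++) {
            if (n & top)
                started = 1;
            if (started)
                putchar((n & top) ? '1' : '0');
            n <<= 1;
        }
        if (!started)            /* the value was zero */
            putchar('0');
    }

    int main(void)
    {
        print_binary(487);       /* prints 111100111 */
        putchar('\n');
        return 0;
    }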
    osmium, Feb 10, 2006
    #2

  3. Keith Thompson writes:
    > I am having trouble writing a simple code that will convert an int (
    > 487 for example ) to binary form just for the sake of printing the
    > binary. Can someone please help? Thanks!


    See question 20.10 in the comp.lang.c FAQ, <http://www.c-faq.com/>.
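
    For reference, one common approach in that spirit (a sketch, not the FAQ's
    exact code) is to build the digits backwards into a buffer:

    #include <stdio.h>

    /* Write the base-2 digits of n into buf, least significant digit first,
       working backwards from the end; return a pointer to the first digit. */
    static const char *to_base2(unsigned int n, char *buf, size_t size)
    {
        char *p = buf + size - 1;
        *p = '\0';
        do {
            *--p = (char)('0' + n % 2);
            n /= 2;
        } while (n != 0);
        return p;
    }

    int main(void)
    {
        char buf[64];
        printf("%s\n", to_base2(487, buf, sizeof buf));   /* 111100111 */
        return 0;
    }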

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Feb 10, 2006
    #3
  4. CBFalconer Guest

    wrote:
    >
    > I am having trouble writing a simple code that will convert an int (
    > 487 for example ) to binary form just for the sake of printing the
    > binary. Can someone please help? Thanks!


    Just this once, provided you read my sig and the URLs there
    referenced.

    #include <stdio.h>

    /* ---------------------- */

    static void putbin(unsigned int i, FILE *f) {
        if (i / 2) putbin(i / 2, f);
        putc((i & 1) + '0', f);
    } /* putbin */

    /* ---------------------- */

    int main(void) {
        putbin( 0, stdout); putc('\n', stdout);
        putbin( 1, stdout); putc('\n', stdout);
        putbin(-1, stdout); putc('\n', stdout);
        putbin( 2, stdout); putc('\n', stdout);
        putbin(23, stdout); putc('\n', stdout);
        putbin(27, stdout); putc('\n', stdout);
        return 0;
    } /* main */

    --
    "If you want to post a followup via groups.google.com, don't use
    the broken "Reply" link at the bottom of the article. Click on
    "show options" at the top of the article, then click on the
    "Reply" at the bottom of the article headers." - Keith Thompson
    More details at: <http://cfaj.freeshell.org/google/>
    Also see <http://www.safalra.com/special/googlegroupsreply/>
    CBFalconer, Feb 10, 2006
    #4
  5. Mike Wahler Guest

    <> wrote in message
    news:...
    >I am having trouble writing a simple code that will convert an int (
    > 487 for example ) to binary form just for the sake of printing the
    > binary. Can someone please help? Thanks!


    Hints:

    12345 % 10 == 5
    12345 / 10 == 1234

    Decimal numbers have base 10
    Binary numbers have base 2

    An array can be traversed either forward
    or backward.
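
    A small sketch of those hints (an illustration, not Mike's code): peel off
    binary digits with % 2 and / 2, store them in an array, then walk the
    array backwards:

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 487;
        int digits[64];              /* plenty for any unsigned int */
        int count = 0;

        do {
            digits[count++] = (int)(n % 2);   /* least significant digit first */
            n /= 2;
        } while (n != 0);

        while (count-- > 0)
            putchar('0' + digits[count]);     /* prints 111100111 */
        putchar('\n');
        return 0;
    }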

    -Mike
    Mike Wahler, Feb 10, 2006
    #5
  6. Joe Wright Guest

    wrote:
    > I am having trouble writing a simple code that will convert an int (
    > 487 for example ) to binary form just for the sake of printing the
    > binary. Can someone please help? Thanks!
    >

    We're not supposed to do it for you but I'm feeling generous..

    #include <stdio.h>

    #define CHARBITS 8
    #define SHORTBITS 16
    #define LONGBITS 32
    #define LLONGBITS 64

    typedef unsigned char uchar;
    typedef unsigned short ushort;
    typedef unsigned long ulong;
    typedef unsigned long long ullong;

    void bits(uchar b, int n) {
        for (--n; n >= 0; --n)
            putchar((b & 1 << n) ? '1' : '0');
        putchar(' ');
    }

    void byte(uchar b) {
        bits(b, CHARBITS);
    }

    void word(ushort w) {
        int i;
        for (i = SHORTBITS - CHARBITS; i >= 0; i -= CHARBITS)
            byte(w >> i);
        putchar('\n');
    }
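
    A possible driver for the functions above (added here for illustration, not
    part of Joe's post), assuming those definitions are in the same file:

    int main(void)
    {
        word(487);    /* prints 00000001 11100111 */
        return 0;
    }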

    ...with my compliments.

    --
    Joe Wright
    "Everything should be made as simple as possible, but not simpler."
    --- Albert Einstein ---
    Joe Wright, Feb 10, 2006
    #6
  7. pete Guest

    Joe Wright wrote:
    >
    > wrote:
    > > I am having trouble writing a simple code that will convert an int (
    > > 487 for example ) to binary form just for the sake of printing the
    > > binary. Can someone please help? Thanks!
    > >

    > We're not supposed to do it for you but I'm feeling generous..
    >
    > #define CHARBITS 8


    Why not use CHAR_BIT instead?

    > typedef unsigned char uchar;


    I don't like those kinds of typedefs.

    > for (--n; n >= 0; --n)


    My preferred way of writing that is:

    while (n-- > 0)

    --
    pete
    pete, Feb 10, 2006
    #7
  8. Joe Wright Guest

    pete wrote:
    > Joe Wright wrote:
    >
    >> wrote:
    >>
    >>>I am having trouble writing a simple code that will convert an int (
    >>>487 for example ) to binary form just for the sake of printing the
    >>>binary. Can someone please help? Thanks!
    >>>

    >>
    >>We're not supposed to do it for you but I'm feeling generous..
    >>
    >>#define CHARBITS 8

    >
    >
    > Why not use CHAR_BIT instead?
    >

    Why not indeed.
    >
    >>typedef unsigned char uchar;

    >
    >
    > I don't like those kind of typedefs.
    >

    I didn't know that. Sorry. I like it.
    >
    >> for (--n; n >= 0; --n)

    >
    >
    > My preferred way of writing that is:
    >
    > while (n-- > 0)
    >

    Right you are too. I'll try to do it that way from now on.

    --
    Joe Wright
    "Everything should be made as simple as possible, but not simpler."
    --- Albert Einstein ---
    Joe Wright, Feb 10, 2006
    #8
  9. Joe Wright <> writes:
    > wrote:
    >> I am having trouble writing a simple code that will convert an int (
    >> 487 for example ) to binary form just for the sake of printing the
    >> binary. Can someone please help? Thanks!
    >>

    > We're not supposed to do it for you but I'm feeling generous..
    >
    > #define CHARBITS 8
    > #define SHORTBITS 16
    > #define LONGBITS 32
    > #define LLONGBITS 64
    >
    > typedef unsigned char uchar;
    > typedef unsigned short ushort;
    > typedef unsigned long ulong;
    > typedef unsigned long long ullong;


    If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
    represent the minimum guaranteed number of bits, they're reasonable,
    but you really need to document your intent. If they're intended to
    represent the actual sizes of the types, the values you use are wrong
    on some systems. You should use CHAR_BIT from <limits.h>; for the
    others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
    don't mind your code breaking on a system that uses padding bits.

    Typedefs like "uchar" for unsigned char are useless. Just use
    "unsigned char" directly, so your readers don't have to guess what
    "uchar" means. Likewise for the others. If you think there's some
    value in saving keystrokes, use an editor that supports editor macros.

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Feb 10, 2006
    #9
  10. Joe Wright Guest

    Keith Thompson wrote:
    > Joe Wright <> writes:
    >
    >> wrote:
    >>
    >>>I am having trouble writing a simple code that will convert an int (
    >>>487 for example ) to binary form just for the sake of printing the
    >>>binary. Can someone please help? Thanks!
    >>>

    >>
    >>We're not supposed to do it for you but I'm feeling generous..
    >>
    >>#define CHARBITS 8
    >>#define SHORTBITS 16
    >>#define LONGBITS 32
    >>#define LLONGBITS 64
    >>
    >>typedef unsigned char uchar;
    >>typedef unsigned short ushort;
    >>typedef unsigned long ulong;
    >>typedef unsigned long long ullong;

    >
    >
    > If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
    > represent the minimum guaranteed number of bits, they're reasonable,
    > but you really need to document your intent. If they're intended to
    > represent the actual sizes of the types, the values you use are wrong
    > on some systems. You should use CHAR_BIT from <limits.h>; for the
    > others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
    > don't mind your code breaking on a system that uses padding bits.
    >

    Can you give a real example of "CHAR_BIT * sizeof (short)" (16 on my Sun
    and x86 boxes) breaking? Do any modern CPU architectures have padding
    bits in short, int, or long objects? Which?

    > Typedefs like "uchar" for unsigned char are useless. Just use
    > "unsigned char" directly, so your readers don't have to guess what
    > "uchar" means. Likewise for the others. If you think there's some
    > value in saving keystrokes, use an editor that supports editor macros.
    >

    Taste? Everyone I know, seeing "uchar" in type context will read
    "unsigned char". You?

    --
    Joe Wright
    "Everything should be made as simple as possible, but not simpler."
    --- Albert Einstein ---
    Joe Wright, Feb 11, 2006
    #10
  11. pete Guest

    Joe Wright wrote:

    > Taste? Everyone I know, seeing "uchar" in type context will read
    > "unsigned char".


    If you're debugging or otherwise modifying
    or maintaining code that has "uchar" in it,
    then it is something that needs to be looked up.

    > You?


    ITYM "U?".

    --
    pete
    pete, Feb 11, 2006
    #11
  12. Joe Wright Guest

    pete wrote:
    > Joe Wright wrote:
    >
    >
    >>Taste? Everyone I know, seeing "uchar" in type context will read
    >>"unsigned char".

    >
    >
    > If you're debugging or otherwise modifying
    > or maintaining code that has "uchar" in it,
    > then it is something that needs to be looked up.
    >

    So? Look it up.
    >
    >>You?

    >
    >
    > ITYM "U?".
    >

    Why would you think I meant "U?"?

    I don't post often. When I do it is usually in response to a "How can I
    do this?" question. I usually respond with a program example of how to
    do it. To the extent that I understand the question, the programs I post
    are correct. That you and at least one other respond with "I wouldn't do
    it that way" seems odd to me. Why would pete tell the World that Joe
    Wright uses "typedef unsigned char uchar;" and shouldn't because pete
    doesn't like it?

    Every now and then Chuck F. and I get into a code contest, not so much
    about errors but how to do it better. I enjoy that. Chuck is good.

    If you want a coding competition, I'm your man. You post something for
    me to improve or I'll post something. We'll tease it until it's perfect
    for all clc to see. That will be fun. We'll try to keep it instructional
    so that our audience might learn something.

    Do you like it? Do you want me to start, or will you?

    --
    Joe Wright
    "Everything should be made as simple as possible, but not simpler."
    --- Albert Einstein ---
    Joe Wright, Feb 11, 2006
    #12
  13. Joe Wright <> writes:
    > Keith Thompson wrote:
    >> Joe Wright <> writes:
    >>> wrote:
    >>>>I am having trouble writing a simple code that will convert an int (
    >>>>487 for example ) to binary form just for the sake of printing the
    >>>>binary. Can someone please help? Thanks!
    >>>
    >>>We're not supposed to do it for you but I'm feeling generous..
    >>>
    >>>#define CHARBITS 8
    >>>#define SHORTBITS 16
    >>>#define LONGBITS 32
    >>>#define LLONGBITS 64
    >>>
    >>>typedef unsigned char uchar;
    >>>typedef unsigned short ushort;
    >>>typedef unsigned long ulong;
    >>>typedef unsigned long long ullong;

    >> If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
    >> represent the minimum guaranteed number of bits, they're reasonable,
    >> but you really need to document your intent. If they're intended to
    >> represent the actual sizes of the types, the values you use are wrong
    >> on some systems. You should use CHAR_BIT from <limits.h>; for the
    >> others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
    >> don't mind your code breaking on a system that uses padding bits.
    >>

    > Can you give a real example of "CHAR_BIT * sizeof (short)" (16 on my
    > Sun and x86 boxes) breaking? Do any modern CPU architectures have
    > padding bits in short, int, or long objects? Which?


    Yes. I have an account on an old Cray Y-MP vector system (admittedly
    not a modern machine). It has:

    CHAR_BIT == 8
    sizeof(short) == 8 (64 bits)
    SHRT_MIN == -2147483648
    SHRT_MAX == 2147483647
    USHRT_MAX == 4294967295

    So short and unsigned short have 32 padding bits.

    I don't know whether the same is true on the more modern Cray vector
    systems, but I wouldn't bet against it. If you want to write code
    that runs on, say, all Unix-like systems, you need to take this into
    account (for example, the system in question has Perl 5.6.0, a large
    program implemented in C).

    If you don't care about portability to such systems, you might consider
    arranging for your code to make some sanity checks and abort if they
    fail.
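
    One way to make such a check (a sketch of the idea, not code from this
    thread) is to count the value bits of the type from its maximum and compare
    that with CHAR_BIT times the object size, aborting on a mismatch:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Count the value bits of unsigned short by shifting USHRT_MAX down. */
    static int ushort_value_bits(void)
    {
        int bits = 0;
        unsigned long max = USHRT_MAX;
        while (max != 0) {
            bits++;
            max >>= 1;
        }
        return bits;
    }

    int main(void)
    {
        if (ushort_value_bits() != (int)(CHAR_BIT * sizeof(unsigned short))) {
            fprintf(stderr, "unsigned short has padding bits\n");
            abort();
        }
        puts("unsigned short has no padding bits");
        return 0;
    }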

    >> Typedefs like "uchar" for unsigned char are useless. Just use
    >> "unsigned char" directly, so your readers don't have to guess what
    >> "uchar" means. Likewise for the others. If you think there's some
    >> value in saving keystrokes, use an editor that supports editor macros.
    >>

    > Taste? Everyone I know, seeing "uchar" in type context will read
    > "unsigned char". You?


    Sure, if I see "uchar" I'll *assume* that it's a typedef for "unsigned
    char" -- but I won't be 100% sure until I check the declaration. A
    programmer might decide that he wants to use 16-bit quantities rather
    than 8-bit quantities, and that the easiest way to do it is to change
    "typedef unsigned char uchar;" to "typedef unsigned short uchar;".
    I'm not saying you'd do such a thing, but I can't be sure that
    J. Random Programmer wouldn't.

    If you want unsigned char, use unsigned char. If you want a typedef
    that might change, use a descriptive name that doesn't imply one
    specific predefined type.

    Using "unsigned char" rather than "uchar" can avoid confusion. Do you
    have an argument in favor of using "uchar"?

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Feb 11, 2006
    #13
  14. Joe Wright <> writes:
    > pete wrote:
    >> Joe Wright wrote:
    >>
    >>>Taste? Everyone I know, seeing "uchar" in type context will read
    >>>"unsigned char".

    >> If you're debugging or otherwise modifying
    >> or maintaining code that has "uchar" in it,
    >> then it is something that needs to be looked up.
    >>

    > So? Look it up.


    If you use "unsigned char", nobody will have to look it up.

    >>>You?

    >> ITYM "U?".
    >>

    > Why would you think I meant "U?"?


    "U" : "You" :: "uchar" : "unsigned char"

    To expand that a bit, both "U" and "uchar" are abbreviations that make
    things more difficult for readers with no real benefit.

    > I don't post often. When I do it is usually in response to a "How can
    > I do this?" question. I usually respond with a program example of how
    > to do it. To the extent that I understand the question, the programs I
    > post are correct. That you and at least one other respond with "I
    > wouldn't do it that way" seems odd to me. Why would pete tell the
    > World that Joe Wright uses "typedef unsigned char uchar;" and
    > shouldn't because pete doesn't like it?

    [...]

    Presumably because there are valid and objective reasons to prefer
    "unsigned char" over "uchar".

    BTW, none of this has anything to do with the fact that Joe Wright was
    the one who posted this. We're discussing C; don't take it personally.

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Feb 11, 2006
    #14
  15. On Sat, 11 Feb 2006 21:43:54 +0000, Keith Thompson wrote:

    > Joe Wright <> writes:
    >> pete wrote:
    >>> Joe Wright wrote:
    >>>
    >>>>Taste? Everyone I know, seeing "uchar" in type context will read
    >>>>"unsigned char".
    >>> If you're debugging or otherwise modifying or maintaining code that has
    >>> "uchar" in it, then it is something that needs to be looked up.
    >>>

    >> So? Look it up.

    >
    > If you use "unsigned char", nobody will have to look it up.


    My head agrees with you, but I must confess to guiltily writing such
    typedefs when I think no one will see them. I offer this in defense.

    I think of things that affect a program's understandability (a bad word, but
    readability is too superficial) as falling into three categories:

    (a) There are superficial things like layout, brace and bracket placement,
    naming conventions etc. and, yes, if main is declared correctly. These
    form a sort of "constant" order complexity (O(1)) in understanding because
    no matter how bad, once every single style you can think of has been
    abused, that's it. It can't get any worse.

    (b) Things like unusual patterns of #include, not using #include "guards",
    chains of "shorthand" typedefs and so on. The effect of these on
    understandability is, in theory, unbounded but it does not take much
    intellectual effort to unravel. These are O(n) complexity issues.

    (c) The Really Bad Ones. Pretty much all the hard problems that come from
    poor memory management, illogical design, obscure control flow and so on
    are much worse than anything that comes from (a) or (b). These can make
    for exponential complexity in understanding.

    There are exceptions, of course. I think some typedefs help readability
    (pointer to function types spring to mind) and bad macros can make things
    unreadable faster than almost anything else, but because type (c) problems
    are hard to discuss in general terms, types (a) and (b) get too much blame
    for the harm they can cause.
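
    For instance (an illustration, not Ben's example), a pointer-to-function
    typedef usually reads better than the spelled-out declarator:

    #include <stdio.h>

    typedef int (*comparator)(const void *, const void *);   /* names the role */

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        comparator cmp = cmp_int;   /* vs. int (*cmp)(const void *, const void *) */
        int a = 3, b = 7;
        printf("%d\n", cmp(&a, &b));   /* prints -1 */
        return 0;
    }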

    --
    Ben.
    Ben Bacarisse, Feb 12, 2006
    #15
  16. pete Guest

    Keith Thompson wrote:
    >
    > Joe Wright <> writes:
    > > pete wrote:
    > >> Joe Wright wrote:
    > >>
    > >>>Taste? Everyone I know, seeing "uchar" in type context will read
    > >>>"unsigned char".
    > >> If you're debugging or otherwise modifying
    > >> or maintaining code that has "uchar" in it,
    > >> then it is something that needs to be looked up.
    > >>

    > > So? Look it up.


    I can do extra work if I have to; I just don't like having to.
    Deliberately using a coding style which creates
    extra work in reading the code makes no sense to me.

    > If you use "unsigned char", nobody will have to look it up.
    >
    > >>>You?
    > >> ITYM "U?".
    > >>

    > > Why would you think I meant "U?"?

    >
    > "U" : "You" :: "uchar" : "unsigned char"


    Thank you Keith Thompson.


    > To expand that a bit, both "U" and "uchar" are abbreviations that make
    > things more difficult for readers with no real benefit.


    > > That you and at least one other respond with "I
    > > wouldn't do it that way" seems odd to me. Why would pete tell the
    > > World that Joe Wright uses "typedef unsigned char uchar;" and
    > > shouldn't because pete doesn't like it?


    I use this forum to discuss C.
    C coding style is on topic.

    > Presumably because there are valid and objective reasons to prefer
    > "unsigned char" over "uchar".
    >
    > BTW, none of this has anything to do with the fact that Joe Wright was
    > the one who posted this.
    > We're discussing C; don't take it personally.


    That's what I think.

    --
    pete
    pete, Feb 12, 2006
    #16
  17. Joe Wright Guest

    Keith Thompson wrote:
    > Joe Wright <> writes:
    >
    >>Keith Thompson wrote:
    >>
    >>>Joe Wright <> writes:
    >>>
    >>>> wrote:
    >>>>
    >>>>>I am having trouble writing a simple code that will convert an int (
    >>>>>487 for example ) to binary form just for the sake of printing the
    >>>>>binary. Can someone please help? Thanks!
    >>>>
    >>>>We're not supposed to do it for you but I'm feeling generous..
    >>>>
    >>>>#define CHARBITS 8
    >>>>#define SHORTBITS 16
    >>>>#define LONGBITS 32
    >>>>#define LLONGBITS 64
    >>>>
    >>>>typedef unsigned char uchar;
    >>>>typedef unsigned short ushort;
    >>>>typedef unsigned long ulong;
    >>>>typedef unsigned long long ullong;
    >>>
    >>>If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
    >>>represent the minimum guaranteed number of bits, they're reasonable,
    >>>but you really need to document your intent. If they're intended to
    >>>represent the actual sizes of the types, the values you use are wrong
    >>>on some systems. You should use CHAR_BIT from <limits.h>; for the
    >>>others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
    >>>don't mind your code breaking on a system that uses padding bits.
    >>>

    >>
    >>Can you give a real example of "CHAR_BIT * sizeof (short)" (16 on my
    >>Sun and x86 boxes) breaking? Do any modern CPU architectures have
    >>padding bits in short, int, or long objects? Which?

    >
    >
    > Yes. I have an account on an old Cray Y-MP vector system (admittedly
    > not a modern machine). It has:
    >
    > CHAR_BIT == 8
    > sizeof(short) == 8 (64 bits)
    > SHRT_MIN == -2147483648
    > SHRT_MAX == 2147483647
    > USHRT_MAX == 4294967295
    >
    > So short and unsigned short have 32 padding bits.
    >
    > I don't know whether the same is true on the more modern Cray vector
    > systems, but I wouldn't bet against it. If you want to write code
    > that runs on, say, all Unix-like systems, you need to take this into
    > account (for example, the system in question has Perl 5.6.0, a large
    > program implemented in C).
    >
    > If you don't care about portability to such systems, you might consider
    > arranging for your code to make some sanity checks and abort if they
    > fail.
    >
    >
    >>>Typedefs like "uchar" for unsigned char are useless. Just use
    >>>"unsigned char" directly, so your readers don't have to guess what
    >>>"uchar" means. Likewise for the others. If you think there's some
    >>>value in saving keystrokes, use an editor that supports editor macros.
    >>>

    >>
    >>Taste? Everyone I know, seeing "uchar" in type context will read
    >>"unsigned char". You?

    >
    >
    > Sure, if I see "uchar" I'll *assume* that it's a typedef for "unsigned
    > char" -- but I won't be 100% sure until I check the declaration. A
    > programmer might decide that he wants to use 16-bit quantities rather
    > than 8-bit quantities, and that the easiest way to do it is to change
    > "typedef unsigned char uchar;" to "typedef unsigned short uchar;".
    > I'm not saying you'd do such a thing, but I can't be sure that
    > J. Random Programmer wouldn't.
    >
    > If you want unsigned char, use unsigned char. If you want a typedef
    > that might change, use a descriptive name that doesn't imply one
    > specific predefined type.
    >
    > Using "unsigned char" rather than "uchar" can avoid confusion. Do you
    > have an argument in favor of using "uchar"?
    >

    I used "uchar" and the others, especially "ullong" to save keystrokes
    and to improve readability. And I think it does that.

    That you don't think so is neither here nor there. I do not post here to
    instruct people how they must do something, but how they might do it.

    I take my C programming very seriously. If I post code here which is
    'wrong' I will appreciate very much being corrected.

    I know I seem to be taking all this a little too personally. Perhaps a
    persecution complex. I'll get over it.

    --
    Joe Wright
    "Everything should be made as simple as possible, but not simpler."
    --- Albert Einstein ---
    Joe Wright, Feb 12, 2006
    #17
  18. pete Guest

    Joe Wright wrote:
    >
    > Keith Thompson wrote:


    > I used "uchar" and the others, especially "ullong" to save keystrokes
    > and to improve readability. And I think it does that.


    Saving keystrokes is OK.
    The point of contention is readability.

    > That you don't think so is neither here nor there.
    > I do not post here to
    > instruct people how they must do something, but how they might do it.


    The best way to write code is very on topic,
    regardless of whether or not there actually is a best way.

    > >> for (--n; n >= 0; --n)

    > >
    > >
    > > My preferred way of writing that is:
    > >
    > > while (n-- > 0)
    > >

    > Right you are too. I'll try to do it that way from now on.


    My real preferred way of doing that
    is to have n be an unsigned integer type
    and to use the inequality operator,
    but I didn't want to be too pushy.
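
    A sketch of that loop shape (an illustration, not pete's code): an unsigned
    counter stepped with the inequality operator, walking an array from the end:

    #include <stdio.h>

    int main(void)
    {
        int a[] = {1, 2, 3, 4, 5};
        size_t n = sizeof a / sizeof a[0];

        while (n-- != 0)
            printf("%d ", a[n]);   /* prints 5 4 3 2 1 */
        putchar('\n');
        return 0;
    }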

    I've promoted that way of looping through an array
    here on more than one occasion.

    http://groups.google.com/group/comp.lang.c/msg/c0103a58a6d6e4e0

    The first time that our friend CBFalconer
    noticed my (n-- != 0) stepping through an array,
    as I recall, he really didn't like it.
    I can't recall any words from that thread to google on though.

    It took him a while to get used to it.

    http://groups.google.com/group/comp.lang.c/msg/76918442af5e6884

    In the above post,
    Lawrence Kirby said that the (n-- != 0) way,
    wasn't his favorite.
    I didn't have anything else to say
    that I hadn't already mentioned elsethread,
    so I didn't reply to it.
    Even though I can't claim that the method is indisputably the best,
    I can still discuss it and say why *I* think it is.

    By the time that the "Implementing my own memcpy" thread
    came up, CBFalconer had come around.

    http://groups.google.com/group/comp.lang.c/msg/758f034e126b05cb

    --
    pete
    pete, Feb 12, 2006
    #18
  19. CBFalconer Guest

    pete wrote:
    >

    .... snip ...
    >
    > The first time that our friend CBFalconer
    > noticed my (n-- != 0) stepping through an array,
    > as I recall, he really didn't like it.
    > I can't recall any words from that thread to google on though.
    >
    > It took him a while to get used to it.


    I don't recall that. I would be more likely to omit the "!= 0"
    portion, though. If n were a pointer I would still object.

    .... snip ...
    >
    > By the time that the "Implementing my own memcpy" thread
    > came up, CBFalconer had come around.
    >
    > http://groups.google.com/group/comp.lang.c/msg/758f034e126b05cb


    I'm offline, so can't use that to refresh any memories. I never
    "come around". I may occasionally expand my horizons. Extreme
    crusty dogmatism is the watchword here.

    --
    "If you want to post a followup via groups.google.com, don't use
    the broken "Reply" link at the bottom of the article. Click on
    "show options" at the top of the article, then click on the
    "Reply" at the bottom of the article headers." - Keith Thompson
    More details at: <http://cfaj.freeshell.org/google/>
    Also see <http://www.safalra.com/special/googlegroupsreply/>
    CBFalconer, Feb 12, 2006
    #19
  20. Joe Wright <> writes:
    [...]
    > I used "uchar" and the others, especially "ullong" to save keystrokes
    > and to improve readability. And I think it does that.


    Saving keystrokes isn't much of a virtue. If I thought it actually
    improved readability, I'd agree that it's a good idea -- just as I
    wouldn't mind people writing "u" for "you" if it actually improved
    readability.

    > That you don't think so is neither here nor there. I do not post here
    > to instruct people how they must do something, but how they might do
    > it.


    And I haven't said that your code is incorrect, merely that its style
    makes it more difficult to read. In this particular case (unlike some
    other style points) I happen to have some reasonably objective
    arguments to back up my opinion; I won't repeat them here.

    > I take my C programming very seriously. If I post code here which is
    > 'wrong' I will appreciate very much being corrected.


    Of course, but correctness isn't the only criterion for good code,
    especially for code posted here. Code is read more often than it's
    written, and *if* pete and I are typical, you might consider adjusting
    your style to something that's more easily read.

    > I know I seem to be taking all this a little too personally. Perhaps a
    > persecution complex. I'll get over it.


    Good (seriously). And let me say one more time that there's
    absolutely nothing personal in any of this.

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Feb 12, 2006
    #20
