print binary representation

Discussion in 'C Programming' started by Carramba, Mar 24, 2007.

  1. Carramba

    Carramba Guest

    Hi!

    How can I output value of char or int in binary form with printf(); ?

    thanx in advance
     
    Carramba, Mar 24, 2007
    #1

  2. Carramba wrote:
    > Hi!
    >
    > How can I output value of char or int in binary form with printf(); ?
    >
    > thanx in advance


    There is no standard format specifier for binary form. You will have
    to do the conversion manually, testing each bit from highest to
    lowest, printing '0' if it's not set, and '1' if it is.
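    [Editor's note: that description can be sketched like this — a hypothetical
    helper, not from the post, assuming unsigned int has no padding bits:]

    ```c
    #include <limits.h>
    #include <stdio.h>

    /* Write x's bits into buf, highest bit first, as described above.
       buf must hold sizeof(unsigned) * CHAR_BIT + 1 chars.
       Hypothetical helper; assumes unsigned int has no padding bits. */
    void bits_to_string(unsigned x, char buf[])
    {
        unsigned mask = UINT_MAX / 2 + 1;   /* highest value bit */
        while (mask != 0) {
            *buf++ = (x & mask) ? '1' : '0';
            mask >>= 1;
        }
        *buf = '\0';
    }

    int main(void)
    {
        char buf[sizeof(unsigned) * CHAR_BIT + 1];
        bits_to_string(5u, buf);
        printf("%s\n", buf);
        return 0;
    }
    ```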
     
    Harald van Dijk, Mar 24, 2007
    #2

  3. Carramba

    jaysome Guest

    On Sat, 24 Mar 2007 08:55:36 +0100, Carramba <> wrote:

    >Hi!
    >
    >How can I output value of char or int in binary form with printf(); ?
    >
    >thanx in advance


    The C Standards do not define a conversion specifier for printf() to
    output in binary. The only portable way to do this is to roll your
    own. Here's a start:

    printf("%s\n", int_to_binary_string(my_int));

    Make sure when you implement int_to_binary_string() that it works with
    most desktop targets where sizeof(int) * CHAR_BIT = 32 as well as on many
    embedded targets where sizeof(int) * CHAR_BIT = 16.
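    [Editor's note: one possible int_to_binary_string() is sketched below; the
    static buffer and the cast through unsigned are editorial choices, not
    jaysome's. Because everything is sized from sizeof(int) * CHAR_BIT, it
    works unchanged on 16-bit and 32-bit targets:]

    ```c
    #include <limits.h>
    #include <stdio.h>

    /* Sketch of int_to_binary_string(): converts through unsigned so the
       bit tests are well defined, and sizes the buffer from the target's
       own int width. Returns a static buffer overwritten on each call. */
    const char *int_to_binary_string(int v)
    {
        static char buf[sizeof(int) * CHAR_BIT + 1];
        unsigned x = (unsigned)v;
        size_t n = sizeof(int) * CHAR_BIT;

        buf[n] = '\0';
        while (n-- > 0) {
            buf[n] = (x & 1u) ? '1' : '0';
            x >>= 1;
        }
        return buf;
    }

    int main(void)
    {
        printf("%s\n", int_to_binary_string(100));
        return 0;
    }
    ```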

    Best regards
    --
    jay
     
    jaysome, Mar 24, 2007
    #3
  4. Beej Jorgensen, Mar 24, 2007
    #4
  5. "Carramba" <> wrote in message
    news:4604d5ca$0$488$...
    > Hi!
    >
    > How can I output value of char or int in binary form with printf(); ?
    >
    > thanx in advance

    #include <limits.h>
    /*
    convert machine number to human-readable binary string.
    Returns: pointer to static string overwritten with each call.
    */
    char *itob(int x)
    {
        static char buff[sizeof(int) * CHAR_BIT + 1];
        int i;
        int j = sizeof(int) * CHAR_BIT - 1;

        buff[j] = 0;
        for (i = 0; i < sizeof(int) * CHAR_BIT; i++)
        {
            if (x & (1 << i))
                buff[j] = '1';
            else
                buff[j] = '0';
            j--;
        }
        return buff;
    }

    Call

    int x = 100;
    printf("%s", itob(x));

    You might want something more elaborate to cut leading zeroes or handle
    negative numbers.
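    [Editor's note: cutting the leading zeroes can be done on the finished
    string, for instance like this — a sketch; trim_leading_zeros is a
    made-up name, not from the post:]

    ```c
    #include <stdio.h>

    /* Skip leading '0' characters, but keep at least one digit so that
       zero still prints as "0". Hypothetical helper. */
    const char *trim_leading_zeros(const char *s)
    {
        while (*s == '0' && s[1] != '\0')
            s++;
        return s;
    }

    int main(void)
    {
        printf("%s\n", trim_leading_zeros("0000000001100100"));
        return 0;
    }
    ```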
     
    Malcolm McLean, Mar 24, 2007
    #5
  6. Carramba

    Carramba Guest

    Harald van Dijk wrote:
    > Carramba wrote:
    >> Hi!
    >>
    >> How can I output value of char or int in binary form with printf(); ?
    >>
    >> thanx in advance

    >
    > There is no standard format specifier for binary form. You will have
    > to do the conversion manually, testing each bit from highest to
    > lowest, printing '0' if it's not set, and '1' if it is.
    >

    thanx, maybe you have some suggestion or link for further reading on how
    to do it?
     
    Carramba, Mar 24, 2007
    #6
  7. Carramba

    Carramba Guest

    thanx ! have few questions about this code :)
    Malcolm McLean wrote:
    >
    > "Carramba" <> wrote in message
    > news:4604d5ca$0$488$...
    >> Hi!
    >>
    >> How can I output value of char or int in binary form with printf(); ?
    >>
    >> thanx in advance

    > #include <limits.h>
    > /*
    > convert machine number to human-readable binary string.
    > Returns: pointer to static string overwritten with each call.
    > */
    > char *itob(int x)
    > {
    > static char buff[sizeof(int) * CHAR_BIT + 1];

    why sizeof(int) * CHAR_BIT + 1 ? what does it mean?
    > int i;
    > int j = sizeof(int) * CHAR_BIT - 1;

    why sizeof(int) * CHAR_BIT - 1 ? what does it mean?
    >
    > buff[j] = 0;
    > for(i=0;i<sizeof(int) * CHAR_BIT; i++)
    > {
    > if(x & (1 << i))
    > buff[j] = '1';
    > else
    > buff[j] = '0';
    > j--;
    > }
    > return buff;
    > }
    >
    > Call
    >
    > int x = 100;
    > printf("%s", itob(x));
    >
    > You might want something more elaborate to cut leading zeroes or handle
    > negative numbers.
     
    Carramba, Mar 24, 2007
    #7
  8. Carramba wrote:
    > Harald van Dijk wrote:
    > > Carramba wrote:
    > >> Hi!
    > >>
    > >> How can I output value of char or int in binary form with printf(); ?
    > >>
    > >> thanx in advance

    > >
    > > There is no standard format specifier for binary form. You will have
    > > to do the conversion manually, testing each bit from highest to
    > > lowest, printing '0' if it's not set, and '1' if it is.
    > >

    > thanx, maybe you have so suggestion or link for further reading on how
    > to do it?


    Others have given code already, but here's mine anyway:

    #include <limits.h>
    #include <stdio.h>

    void print_char_binary(char val)
    {
        char mask;

        if (CHAR_MIN < 0)
        {
            if (val < 0
                || val == 0 && val & CHAR_MAX)
                putchar('1');
            else
                putchar('0');
        }

        for (mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)
            if (val & mask)
                putchar('1');
            else
                putchar('0');
    }

    void print_int_binary(int val)
    {
        int mask;

        if (val < 0
            || val == 0 && val & INT_MAX)
            putchar('1');
        else
            putchar('0');

        for (mask = (INT_MAX >> 1) + 1; mask != 0; mask >>= 1)
            if (val & mask)
                putchar('1');
            else
                putchar('0');
    }
     
    Harald van Dijk, Mar 24, 2007
    #8
  9. Carramba

    pete Guest

    Carramba wrote:
    >
    > Hi!
    >
    > How can I output value of char or int in binary form with printf(); ?
    >
    > thanx in advance


    /* BEGIN output from new.c */

    1 = 00000001
    2 = 00000010
    3 = 00000011
    4 = 00000100
    5 = 00000101
    6 = 00000110
    7 = 00000111
    8 = 00001000
    9 = 00001001
    10 = 00001010
    11 = 00001011
    12 = 00001100
    13 = 00001101
    14 = 00001110
    15 = 00001111
    16 = 00010000
    17 = 00010001
    18 = 00010010
    19 = 00010011
    20 = 00010100

    /* END output from new.c */

    /* BEGIN new.c */

    #include <limits.h>
    #include <stdio.h>

    #define STRING "%2d = %s\n"
    #define E_TYPE char
    #define P_TYPE int
    #define INITIAL 1
    #define FINAL 20
    #define INC(E) (++(E))

    typedef E_TYPE e_type;
    typedef P_TYPE p_type;

    void bitstr(char *str, const void *obj, size_t n);

    int main(void)
    {
        e_type e;
        char ebits[CHAR_BIT * sizeof e + 1];

        puts("\n/* BEGIN output from new.c */\n");
        e = INITIAL;
        bitstr(ebits, &e, sizeof e);
        printf(STRING, (p_type)e, ebits);
        while (FINAL > e) {
            INC(e);
            bitstr(ebits, &e, sizeof e);
            printf(STRING, (p_type)e, ebits);
        }
        puts("\n/* END output from new.c */");
        return 0;
    }

    void bitstr(char *str, const void *obj, size_t n)
    {
        unsigned mask;
        const unsigned char *byte = obj;

        while (n-- != 0) {
            mask = ((unsigned char)-1 >> 1) + 1;
            do {
                *str++ = (char)(mask & byte[n] ? '1' : '0');
                mask >>= 1;
            } while (mask != 0);
        }
        *str = '\0';
    }

    /* END new.c */


    --
    pete
     
    pete, Mar 24, 2007
    #9
  10. "Carramba" <> schrieb im Newsbeitrag
    news:46050ac1$0$492$...
    > thanx ! have few questions about this code :)
    > Malcolm McLean wrote:
    >>
    >> "Carramba" <> wrote in message
    >> news:4604d5ca$0$488$...
    >>> Hi!
    >>>
    >>> How can I output value of char or int in binary form with printf(); ?
    >>>
    >>> thanx in advance

    >> #include <limits.h>
    >> /*
    >> convert machine number to human-readable binary string.
    >> Returns: pointer to static string overwritten with each call.
    >> */
    >> char *itob(int x)
    >> {
    >> static char buff[sizeof(int) * CHAR_BIT + 1];

    > why sizeof(int) * CHAR_BIT + 1 ? what does it mean?

    If you want to put an int's binary representation into a string you need
    that much space.
    On many implementations sizeof(int) is 4 and CHAR_BIT is 8, so you'd need an
    array of 33 chars (including the terminating null byte).

    I'd use sizeof(x) instead of sizeof(int); that way you can easily change the
    function to work on e.g. long long.
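    [Editor's note: with sizeof x, changing the parameter type is the only
    edit needed. A long long variant might look like this — a sketch; the
    name ulltob is made up:]

    ```c
    #include <limits.h>
    #include <stdio.h>

    /* itob() reworked for unsigned long long: buffer size and loop bounds
       all derive from sizeof x, so only the parameter type changed.
       Returns a static buffer overwritten on each call. */
    char *ulltob(unsigned long long x)
    {
        static char buff[sizeof x * CHAR_BIT + 1];
        size_t j = sizeof x * CHAR_BIT;

        buff[j] = '\0';
        while (j-- > 0) {
            buff[j] = (x & 1u) ? '1' : '0';
            x >>= 1;
        }
        return buff;
    }

    int main(void)
    {
        printf("%s\n", ulltob(100ULL));
        return 0;
    }
    ```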

    >> int i;
    >> int j = sizeof(int) * CHAR_BIT - 1;

    > why sizeof(int) * CHAR_BIT - 1 ? what does it mean?

    Array indices count from 0, and the terminating null byte isn't written by
    the loop, so the index goes from 0 to 31 (assuming the same sizes as above).

    >> buff[j] = 0;
    >> for(i=0;i<sizeof(int) * CHAR_BIT; i++)
    >> {
    >> if(x & (1 << i))
    >> buff[j] = '1';
    >> else
    >> buff[j] = '0';
    >> j--;
    >> }
    >> return buff;
    >> }
    >>
    >> Call
    >>
    >> int x = 100;
    >> printf("%s", itob(x));
    >>
    >> You might want something more elaborate to cut leading zeroes or handle
    >> negative numbers


    Bye, Jojo.
     
    Joachim Schmitz, Mar 24, 2007
    #10
  11. Carramba

    pete Guest

    Malcolm McLean wrote:

    > for(i=0;i<sizeof(int) * CHAR_BIT; i++)
    > {
    > if(x & (1 << i))


    There are some problems with that shift expression.

    (1 << sizeof(int) * CHAR_BIT - 1) is undefined.
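    [Editor's note: the difference is visible in a one-liner — assuming a
    typical implementation with no padding bits in unsigned int:]

    ```c
    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        /* 1 << 31 on a 32-bit int overflows int: undefined behavior.
           With an unsigned 1 the shift is well defined and yields the
           highest value bit (assuming no padding bits in unsigned int). */
        unsigned hi = 1u << (sizeof(unsigned) * CHAR_BIT - 1);
        assert(hi == UINT_MAX / 2 + 1);
        return 0;
    }
    ```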

    --
    pete
     
    pete, Mar 24, 2007
    #11
  12. "pete" <> wrote in message
    > Malcolm McLean wrote:
    >
    >> for(i=0;i<sizeof(int) * CHAR_BIT; i++)
    >> {
    >> if(x & (1 << i))

    >
    > There are some problems with that shift expression.
    >
    > (1 << sizeof(int) * CHAR_BIT - 1) is undefined.
    >

    The function should take an unsigned int. However I didn't want to add that
    complication for the OP. It should work OK on almost every platform.
    --
    Free games and programming goodies.
    http://www.personal.leeds.ac.uk/~bgy1mm
     
    Malcolm McLean, Mar 24, 2007
    #12
  13. Carramba

    pete Guest

    Malcolm McLean wrote:
    >
    > "pete" <> wrote in message


    > > (1 << sizeof(int) * CHAR_BIT - 1) is undefined.
    > >

    > The function should take an unsigned int.


    That makes no difference.
    The evaluation of (1 << sizeof(int) * CHAR_BIT - 1)
    in a program is always undefined,
    and prevents a program from being a "correct program".
    (1 << sizeof(int) * CHAR_BIT - 1) can't be a positive value.

    > However I didn't want to add that
    > complication for the OP.


    That expression would be perfect to use as
    an example of how not to write code.

    > It should work OK on almost every platform.


    (1u << sizeof(int) * CHAR_BIT - 1) is defined.

    Your initial value of j is also wrong:

    int j = sizeof(int) * CHAR_BIT - 1;

    buff[j] = 0;
    for(i=0;i<sizeof(int) * CHAR_BIT; i++)
    {
    if(x & (1 << i))
    buff[j] = '1';
    else
    buff[j] = '0';

    As you can see in your code above,
    the first side effect of the for loop
    is to overwrite the null terminator.


    /* BEGIN new.c */

    #include <stdio.h>
    #include <limits.h>

    char *itob(unsigned x);

    int main(void)
    {
        printf("%s\n", itob(100));
        return 0;
    }

    char *itob(unsigned x)
    {
        unsigned i;
        unsigned j;
        static char buff[sizeof x * CHAR_BIT + 1];

        j = sizeof x * CHAR_BIT;
        buff[j--] = '\0';
        for (i = 0; i < sizeof x * CHAR_BIT; i++) {
            if (x & (1u << i)) {
                buff[j--] = '1';
            } else {
                buff[j--] = '0';
            }
            if ((1u << i) == UINT_MAX / 2 + 1) {
                break;
            }
        }
        while (i++ < sizeof x * CHAR_BIT) {
            buff[j--] = '0';
        }
        return buff;
    }

    /* END new.c */


    --
    pete
     
    pete, Mar 24, 2007
    #13
  14. "pete" <> wrote in message
    > Malcolm McLean wrote:
    >>
    >> "pete" <> wrote in message

    >
    >> > (1 << sizeof(int) * CHAR_BIT - 1) is undefined.
    >> >

    >> The function should take an unsigned int.

    >
    > That makes no difference.
    > The evaluation of (1 << sizeof(int) * CHAR_BIT - 1)
    > in a program is always undefined,
    > and prevents a program from being a "correct program".
    > (1 << sizeof(int) * CHAR_BIT - 1) can't be a positive value.
    >
    >> However I didn't want to add that
    >> complication for the OP.

    >
    > That expression would be perfect to use as
    > an example of how not to write code.
    >
    >> It should work OK on almost every platform.

    >
    > (1u << sizeof(int) * CHAR_BIT - 1) is defined.
    >
    > Your initial value of j is also wrong:
    >
    > int j = sizeof(int) * CHAR_BIT - 1;
    >
    > buff[j] = 0;
    > for(i=0;i<sizeof(int) * CHAR_BIT; i++)
    > {
    > if(x & (1 << i))
    > buff[j] = '1';
    > else
    > buff[j] = '0';
    >
    > As you can see in your code above,
    > the first side effect of the for loop,
    > is to overwrite the null terminator.
    >
    >
    > /* BEGIN new.c */
    >
    > #include <stdio.h>
    > #include <limits.h>
    >
    > char *itob(unsigned x);
    >
    > int main(void)
    > {
    > printf("%s\n", itob(100));
    > return 0;
    > }
    >
    > char *itob(unsigned x)
    > {
    > unsigned i;
    > unsigned j;
    > static char buff[sizeof x * CHAR_BIT + 1];
    >
    > j = sizeof x * CHAR_BIT;
    > buff[j--] = '\0';
    > for (i = 0; i < sizeof x * CHAR_BIT; i++) {
    > if (x & (1u << i)) {
    > buff[j--] = '1';
    > } else {
    > buff[j--] = '0';
    > }
    > if ((1u << i) == UINT_MAX / 2 + 1) {
    > break;
    > }
    > }
    > while (i++ < sizeof x * CHAR_BIT) {
    > buff[j--] = '0';
    > }
    > return buff;
    > }
    >
    > /* END new.c */
    >

    unsigned integers aren't allowed padding bits, so you don't need all that
    complication.
    A pathological platform might break on the expression 1 << (int bits - 1),
    agreed. To be strictly correct we need to do the calculations in unsigned
    integers, but I've explained why I didn't do that.
    The off-by-one error in writing the nul was a slip. Of course I didn't
    realise, because the static array was zero-initialised anyway. So well
    spotted.
    --
    Free games and programming goodies.
    http://www.personal.leeds.ac.uk/~bgy1mm
     
    Malcolm McLean, Mar 24, 2007
    #14
  15. Carramba

    pete Guest

    Malcolm McLean wrote:

    > unsigned integers aren't allowed padding bits


    Wrong again.
    unsigned char isn't allowed padding bits.
    UINT_MAX is allowed to be as low as INT_MAX,
    and you can't achieve that without padding bits
    in the unsigned int type.

    --
    pete
     
    pete, Mar 24, 2007
    #15
  16. Carramba

    Joe Wright Guest

    Carramba wrote:
    > Harald van Dijk wrote:
    >> Carramba wrote:
    >>> Hi!
    >>>
    >>> How can I output value of char or int in binary form with printf(); ?
    >>>
    >>> thanx in advance

    >>
    >> There is no standard format specifier for binary form. You will have
    >> to do the conversion manually, testing each bit from highest to
    >> lowest, printing '0' if it's not set, and '1' if it is.
    >>

    > thanx, maybe you have so suggestion or link for further reading on how
    > to do it?


    Think about it!

    void bits(unsigned char b, int n) {
        for (--n; n >= 0; --n)
            putchar((b & 1 << n) ? '1' : '0');
        putchar(' ');
    }

    Now if you call it..

    bits(195, 8);

    ...you'll get '11000011 ' on the stdout stream.

    --
    Joe Wright
    "Everything should be made as simple as possible, but not simpler."
    --- Albert Einstein ---
     
    Joe Wright, Mar 24, 2007
    #16
  17. On 24 Mar 2007 04:42:24 -0700, "Harald van Dijk" <>
    wrote:

    >Carramba wrote:
    >> Harald van Dijk wrote:
    >> > Carramba wrote:
    >> >> Hi!
    >> >>
    >> >> How can I output value of char or int in binary form with printf(); ?
    >> >>
    >> >> thanx in advance
    >> >
    >> > There is no standard format specifier for binary form. You will have
    >> > to do the conversion manually, testing each bit from highest to
    >> > lowest, printing '0' if it's not set, and '1' if it is.
    >> >

    >> thanx, maybe you have so suggestion or link for further reading on how
    >> to do it?

    >
    >Others have given code already, but here's mine anyway:
    >
    >#include <limits.h>
    >#include <stdio.h>
    >
    >void print_char_binary(char val)
    >{
    > char mask;
    >
    > if(CHAR_MIN < 0)
    > {
    > if(val < 0
    > || val == 0 && val & CHAR_MAX)


    When will the expression following the && evaluate to 1? Is it
    something to do with ones complement or signed magnitude
    representations?

    > putchar('1');
    > else
    > putchar('0');
    > }
    >
    > for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)


    Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
    requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
    9, could SCHAR_MAX and CHAR_MAX be 173?)

    > if(val & mask)
    > putchar('1');
    > else
    > putchar('0');
    >}
    >
    >void print_int_binary(int val)
    >{
    > int mask;
    >
    > if(val < 0
    > || val == 0 && val & INT_MAX)
    > putchar('1');
    > else
    > putchar('0');
    >
    > for(mask = (INT_MAX >> 1) + 1; mask != 0; mask >>= 1)


    There does not appear to be a similar requirement for INT_MAX either.

    > if(val & mask)
    > putchar('1');
    > else
    > putchar('0');
    >}



    Remove del for email
     
    Barry Schwarz, Mar 24, 2007
    #17
  18. Barry Schwarz wrote:
    > On 24 Mar 2007 04:42:24 -0700, "Harald van Dijk" <>
    > wrote:
    >
    > >Carramba wrote:
    > >> Harald van Dijk wrote:
    > >> > Carramba wrote:
    > >> >> Hi!
    > >> >>
    > >> >> How can I output value of char or int in binary form with printf(); ?
    > >> >>
    > >> >> thanx in advance
    > >> >
    > >> > There is no standard format specifier for binary form. You will have
    > >> > to do the conversion manually, testing each bit from highest to
    > >> > lowest, printing '0' if it's not set, and '1' if it is.
    > >> >
    > >> thanx, maybe you have so suggestion or link for further reading on how
    > >> to do it?

    > >
    > >Others have given code already, but here's mine anyway:
    > >
    > >#include <limits.h>
    > >#include <stdio.h>
    > >
    > >void print_char_binary(char val)
    > >{
    > > char mask;
    > >
    > > if(CHAR_MIN < 0)
    > > {
    > > if(val < 0
    > > || val == 0 && val & CHAR_MAX)

    >
    > When will the expression following the && evaluate to 1? Is it
    > something to do with ones complement or signed magnitude
    > representations?


    It accounts for ones' complement, where all bits 1 is a possible
    representation of 0.

    It does not account for sign and magnitude, where all value bits 0 and
    sign bit 1 is a representation of 0. This will be printed as all bits
    zero, which is a different representation of the same value.

    > > putchar('1');
    > > else
    > > putchar('0');
    > > }
    > >
    > > for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)

    >
    > Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
    > requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
    > 9, could SCHAR_MAX and CHAR_MAX be 173?)


    [ And a similar comment for INT_MAX snipped ]

    The only allowed representation systems for signed integer types are
    two's complement, ones' complement, and sign and magnitude. All three
    have the maximum value as a power of two minus one. (IIRC, this is new
    in C99, but it was added because there were no other systems even
    though C90 allowed it.)
     
    Harald van Dijk, Mar 24, 2007
    #18
  19. "Harald van Dijk" <> wrote in message
    >
    > The only allowed representation systems for signed integer types are
    > two's complement, ones' complement, and sign and magnitude. All three
    > have the maximum value as a power of two minus one. (IIRC, this is new
    > in C99, but it was added because there were no other systems even
    > though C90 allowed it.)
    >

    That's typical committee thinking. No engineer is going to devise a new
    method of representing integers for the fun of it, but because there is some
    technical advantage or requirement. At which point the standard becomes a
    dead letter. If the super-whizzy-fibby machine needs Fibonacci
    representation for its quantum coherence modulator unit, then either C can't
    be used on such a machine or the rule will change. So it is a completely
    pointless regulation.

    --
    Free games and programming goodies.
    http://www.personal.leeds.ac.uk/~bgy1mm
     
    Malcolm McLean, Mar 24, 2007
    #19
  20. On 24 Mar 2007 10:51:58 -0700, "Harald van Dijk" <>
    wrote:

    >Barry Schwarz wrote:
    >> On 24 Mar 2007 04:42:24 -0700, "Harald van Dijk" <>
    >> wrote:
    >>
    >> >Carramba wrote:
    >> >> Harald van Dijk wrote:
    >> >> > Carramba wrote:
    >> >> >> Hi!
    >> >> >>
    >> >> >> How can I output value of char or int in binary form with printf(); ?
    >> >> >>
    >> >> >> thanx in advance
    >> >> >
    >> >> > There is no standard format specifier for binary form. You will have
    >> >> > to do the conversion manually, testing each bit from highest to
    >> >> > lowest, printing '0' if it's not set, and '1' if it is.
    >> >> >
    >> >> thanx, maybe you have so suggestion or link for further reading on how
    >> >> to do it?
    >> >
    >> >Others have given code already, but here's mine anyway:
    >> >
    >> >#include <limits.h>
    >> >#include <stdio.h>
    >> >
    >> >void print_char_binary(char val)
    >> >{
    >> > char mask;
    >> >
    >> > if(CHAR_MIN < 0)
    >> > {
    >> > if(val < 0
    >> > || val == 0 && val & CHAR_MAX)

    >>
    >> When will the expression following the && evaluate to 1? Is it
    >> something to do with ones complement or signed magnitude
    >> representations?

    >
    >It accounts for ones' complement, where all bits 1 is a possible
    >representation of 0.
    >
    >It does not account for sign and magnitude, where all value bits 0 and
    >sign bit 1 is a representation of 0. This will be printed as all bits
    >zero, which is a different representation of the same value.
    >
    >> > putchar('1');
    >> > else
    >> > putchar('0');
    >> > }
    >> >
    >> > for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)

    >>
    >> Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
    >> requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
    >> 9, could SCHAR_MAX and CHAR_MAX be 173?)

    >
    >[ And a similar comment for INT_MAX snipped ]
    >
    >The only allowed representation systems for signed integer types are
    >two's complement, ones' complement, and sign and magnitude. All three
    >have the maximum value as a power of two minus one. (IIRC, this is new
    >in C99, but it was added because there were no other systems even
    >though C90 allowed it.)


    n1124 says that UCHAR_MAX must be equal to 2^CHAR_BIT-1 which I
    mentioned in my question. For SCHAR_MAX, there is no such
    requirement. It is required to be at least (minimum value) 127 which
    is 2^7-1 but for larger values of CHAR_BIT there is no additional
    restriction. Again, if CHAR_BIT is 9, could SCHAR_MAX and CHAR_MAX be
    173?


    Remove del for email
     
    Barry Schwarz, Mar 24, 2007
    #20
