C
Carramba
Hi!
How can I output value of char or int in binary form with printf(); ?
thanx in advance
Carramba said:
Hi!
How can I output value of char or int in binary form with printf(); ?
thanx in advance

There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.
Carramba said:
Hi!
How can I output value of char or int in binary form with printf(); ?
thanx in advance

#include <limits.h>
/*
convert machine number to human-readable binary string.
Returns: pointer to static string overwritten with each call.
*/
char *itob(int x)
{
    static char buff[sizeof(int) * CHAR_BIT + 1];
    int i;
    int j = sizeof(int) * CHAR_BIT - 1;

    buff[j] = 0;
    for (i = 0; i < sizeof(int) * CHAR_BIT; i++)
    {
        if (x & (1 << i))
            buff[j] = '1';
        else
            buff[j] = '0';
        j--;
    }
    return buff;
}
Call
int x = 100;
printf("%s", itob(x));
You might want something more elaborate to cut leading zeroes or handle
negative numbers.
Carramba said:
thanx, maybe you have some suggestion or link for further reading on how
to do it?
Carramba said:
thanx ! have few questions about this code
why sizeof(int) * CHAR_BIT + 1 ? what does it mean?
why sizeof(int) * CHAR_BIT - 1 ? what does it mean?

If you want to put an int's binary representation into a string you need
one char for each bit, plus one for the terminating null byte; that's
what sizeof(int) * CHAR_BIT + 1 gives you.
Arrays count from 0 to n and the terminating null byte isn't needed, so the
last digit goes at index sizeof(int) * CHAR_BIT - 1.
Malcolm said:
for(i = 0; i < sizeof(int) * CHAR_BIT; i++)
{
    if(x & (1 << i))

There are some problems with that shift expression.
(1 << sizeof(int) * CHAR_BIT - 1) is undefined.

Malcolm said:
The function should take an unsigned int.
However I didn't want to add that
complication for the OP.
It should work OK on almost every platform.
Malcolm said:
The function should take an unsigned int.
That makes no difference.
The evaluation of (1 << sizeof(int) * CHAR_BIT - 1)
in a program is always undefined,
and prevents a program from being a "correct program".
(1 << sizeof(int) * CHAR_BIT - 1) can't be a positive value.

Malcolm said:
However I didn't want to add that
complication for the OP.

That expression would be perfect to use as
an example of how not to write code.

Malcolm said:
It should work OK on almost every platform.

(1u << sizeof(int) * CHAR_BIT - 1) is defined.

Your initial value of j is also wrong:

int j = sizeof(int) * CHAR_BIT - 1;
buff[j] = 0;
for(i = 0; i < sizeof(int) * CHAR_BIT; i++)
{
    if(x & (1 << i))
        buff[j] = '1';
    else
        buff[j] = '0';

As you can see in your code above,
the first side effect of the for loop,
is to overwrite the null terminator.
/* BEGIN new.c */
#include <stdio.h>
#include <limits.h>

char *itob(unsigned x);

int main(void)
{
    printf("%s\n", itob(100));
    return 0;
}

char *itob(unsigned x)
{
    unsigned i;
    unsigned j;
    static char buff[sizeof x * CHAR_BIT + 1];

    j = sizeof x * CHAR_BIT;
    buff[j--] = '\0';
    for (i = 0; i < sizeof x * CHAR_BIT; i++) {
        if (x & (1u << i)) {
            buff[j--] = '1';
        } else {
            buff[j--] = '0';
        }
        if ((1u << i) == UINT_MAX / 2 + 1) {
            break;
        }
    }
    while (i++ < sizeof x * CHAR_BIT) {
        buff[j--] = '0';
    }
    return buff;
}
/* END new.c */
Malcolm said:
unsigned integers aren't allowed padding bits so you don't need all that
Carramba said:
thanx, maybe you have some suggestion or link for further reading on how
to do it?

Others have given code already, but here's mine anyway:
#include <limits.h>
#include <stdio.h>

void print_char_binary(char val)
{
    char mask;

    if(CHAR_MIN < 0)
    {
        if(val < 0
           || val == 0 && val & CHAR_MAX)
            putchar('1');
        else
            putchar('0');
    }
    for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)
        if(val & mask)
            putchar('1');
        else
            putchar('0');
}

void print_int_binary(int val)
{
    int mask;

    if(val < 0
       || val == 0 && val & INT_MAX)
        putchar('1');
    else
        putchar('0');
    for(mask = (INT_MAX >> 1) + 1; mask != 0; mask >>= 1)
        if(val & mask)
            putchar('1');
        else
            putchar('0');
}
Barry said:
When will the expression following the && evaluate to 1? Is it
something to do with ones complement or signed magnitude
representations?

It accounts for ones' complement, where all bits 1 is a possible
representation of 0.
It does not account for sign and magnitude, where all value bits 0 and
sign bit 1 is a representation of 0. This will be printed as all bits
zero, which is a different representation of the same value.

Barry said:
Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
9, could SCHAR_MAX and CHAR_MAX be 173?)
[ And a similar comment for INT_MAX snipped ]

The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)

That's typical committee thinking. No engineer is going to devise a new