How do I convert an int to binary form?


cedarson

I am having trouble writing a simple program that will convert an int
(487, for example) to binary form, just for the sake of printing the
binary. Can someone please help? Thanks!
 

osmium

I am having trouble writing a simple program that will convert an int
(487, for example) to binary form, just for the sake of printing the
binary. Can someone please help? Thanks!

Use the left-shift operator and the & operator. Skip the leading zeros,
and when you first encounter a 1 in the leftmost position, print the
char '1'. Keep shifting and printing '1' or '0' depending on each bit.
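
A minimal sketch of that shift-and-mask approach (the function name
print_binary and the driver are illustrative, not from the thread):

#include <stdio.h>
#include <limits.h>

/* Print n in binary, skipping leading zeros. */
static void print_binary(unsigned int n)
{
    unsigned int mask = 1u << (CHAR_BIT * sizeof n - 1); /* highest bit */
    int started = 0;

    for (; mask != 0; mask >>= 1) {
        if (n & mask) {
            putchar('1');
            started = 1;
        } else if (started) {
            putchar('0'); /* print zeros only after the first 1 */
        }
    }
    if (!started)
        putchar('0'); /* n was 0 */
    putchar('\n');
}

int main(void)
{
    print_binary(487); /* prints 111100111 */
    return 0;
}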
 

CBFalconer

I am having trouble writing a simple program that will convert an int
(487, for example) to binary form, just for the sake of printing the
binary. Can someone please help? Thanks!

Just this once, provided you read my sig and the URLs there
referenced.

#include <stdio.h>

/* ---------------------- */

static void putbin(unsigned int i, FILE *f) {
    if (i / 2) putbin(i / 2, f); /* recurse on the high bits first, so the most significant bit prints first */
    putc((i & 1) + '0', f);
} /* putbin */

/* ---------------------- */

int main(void) {
    putbin( 0, stdout); putc('\n', stdout);
    putbin( 1, stdout); putc('\n', stdout);
    putbin(-1, stdout); putc('\n', stdout); /* -1 converts to UINT_MAX: all one bits */
    putbin( 2, stdout); putc('\n', stdout);
    putbin(23, stdout); putc('\n', stdout);
    putbin(27, stdout); putc('\n', stdout);
    return 0;
} /* main */

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
 

Mike Wahler

I am having trouble writing a simple program that will convert an int
(487, for example) to binary form, just for the sake of printing the
binary. Can someone please help? Thanks!

Hints:

12345 % 10 == 5
12345 / 10 == 1234

Decimal numbers have base 10
Binary numbers have base 2

An array can be traversed either forward
or backward.

-Mike
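
A sketch of what these hints add up to: peel off binary digits with % 2
and / 2 (just as % 10 and / 10 peel off decimal digits), store them in
an array, then traverse the array backward (the name print_binary is
illustrative, not from the thread):

#include <stdio.h>
#include <limits.h>

static void print_binary(unsigned int n)
{
    char digits[CHAR_BIT * sizeof n]; /* room for every bit */
    int count = 0;

    do {
        digits[count++] = (char)('0' + n % 2); /* least significant first */
        n /= 2;
    } while (n > 0);

    while (count > 0) /* print backward: most significant first */
        putchar(digits[--count]);
    putchar('\n');
}

int main(void)
{
    print_binary(487); /* prints 111100111 */
    return 0;
}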
 

Joe Wright

I am having trouble writing a simple program that will convert an int
(487, for example) to binary form, just for the sake of printing the
binary. Can someone please help? Thanks!
We're not supposed to do it for you, but I'm feeling generous...

#include <stdio.h>

#define CHARBITS 8
#define SHORTBITS 16
#define LONGBITS 32
#define LLONGBITS 64

typedef unsigned char uchar;
typedef unsigned short ushort;
typedef unsigned long ulong;
typedef unsigned long long ullong;

/* Print the n low-order bits of b, most significant first. */
void bits(uchar b, int n) {
    for (--n; n >= 0; --n)
        putchar((b & (1 << n)) ? '1' : '0');
    putchar(' ');
}

void byte(uchar b) {
    bits(b, CHARBITS);
}

/* Print a short as two space-separated bytes, high byte first. */
void word(ushort w) {
    int i;
    for (i = SHORTBITS - CHARBITS; i >= 0; i -= CHARBITS)
        byte(w >> i);
    putchar('\n');
}

...with my compliments.
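
For anyone who wants to run it, a hypothetical driver (not part of Joe's
post) that exercises word():

int main(void)
{
    word(487); /* prints: 00000001 11100111 */
    return 0;
}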
 

Joe Wright

pete said:
Why not use CHAR_BIT instead?
Why not indeed.

I don't like those kinds of typedefs.
I didn't know that. Sorry. I like it.
My preferred way of writing that is:

while (n-- > 0)
Right you are too. I'll try to do it that way from now on.
 

Keith Thompson

Joe Wright said:
We're not supposed to do it for you, but I'm feeling generous...

#define CHARBITS 8
#define SHORTBITS 16
#define LONGBITS 32
#define LLONGBITS 64

typedef unsigned char uchar;
typedef unsigned short ushort;
typedef unsigned long ulong;
typedef unsigned long long ullong;

If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
represent the minimum guaranteed number of bits, they're reasonable,
but you really need to document your intent. If they're intended to
represent the actual sizes of the types, the values you use are wrong
on some systems. You should use CHAR_BIT from <limits.h>; for the
others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
don't mind your code breaking on a system that uses padding bits.

Typedefs like "uchar" for unsigned char are useless. Just use
"unsigned char" directly, so your readers don't have to guess what
"uchar" means. Likewise for the others. If you think there's some
value in saving keystrokes, use an editor that supports editor macros.
 

Joe Wright

Keith said:
If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
represent the minimum guaranteed number of bits, they're reasonable,
but you really need to document your intent. If they're intended to
represent the actual sizes of the types, the values you use are wrong
on some systems. You should use CHAR_BIT from <limits.h>; for the
others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
don't mind your code breaking on a system that uses padding bits.
Can you give a real example of "CHAR_BIT * sizeof (short)" (16 on my Sun
and x86 boxes) breaking? Do any modern CPU architectures have padding
bits in short, int, or long objects? Which?
Typedefs like "uchar" for unsigned char are useless. Just use
"unsigned char" directly, so your readers don't have to guess what
"uchar" means. Likewise for the others. If you think there's some
value in saving keystrokes, use an editor that supports editor macros.
Taste? Everyone I know, seeing "uchar" in type context will read
"unsigned char". You?
 

pete

Joe said:
Taste? Everyone I know, seeing "uchar" in type context will read
"unsigned char".

If you're debugging or otherwise modifying
or maintaining code that has "uchar" in it,
then it is something that needs to be looked up.

ITYM "U?".
 

Joe Wright

pete said:
Joe Wright wrote:

If you're debugging or otherwise modifying
or maintaining code that has "uchar" in it,
then it is something that needs to be looked up.
So? Look it up.

ITYM "U?".
Why would you think I meant "U?"?

I don't post often. When I do it is usually in response to a "How can I
do this?" question. I usually respond with a program example of how to
do it. To the extent that I understand the question, the programs I post
are correct. That you and at least one other respond with "I wouldn't do
it that way" seems odd to me. Why would pete tell the World that Joe
Wright uses "typedef unsigned char uchar;" and shouldn't because pete
doesn't like it?

Every now and then Chuck F. and I get into a code contest, not so much
about errors but how to do it better. I enjoy that. Chuck is good.

If you want a coding competition, I'm your man. You post something for
me to improve or I'll post something. We'll tease it until it's perfect
for all clc to see. That will be fun. We'll try to keep it instructional
so that our audience might learn something.

Do you like it? Do you want me to start, or will you?
 

Keith Thompson

Joe Wright said:
Can you give a real example of "CHAR_BIT * sizeof (short)" (16 on my
Sun and x86 boxes) breaking? Do any modern CPU architectures have
padding bits in short, int, or long objects? Which?

Yes. I have an account on an old Cray Y-MP vector system (admittedly
not a modern machine). It has:

CHAR_BIT == 8
sizeof(short) == 8 (64 bits)
SHRT_MIN == -2147483648
SHRT_MAX == 2147483647
USHRT_MAX == 4294967295

So short and unsigned short have 32 padding bits.

I don't know whether the same is true on the more modern Cray vector
systems, but I wouldn't bet against it. If you want to write code
that runs on, say, all Unix-like systems, you need to take this into
account (for example, the system in question has Perl 5.6.0, a large
program implemented in C).

If you don't care about portability to such systems, you might consider
arranging for your code to make some sanity checks and abort if they
fail.
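
A minimal sketch of such a sanity check (illustrative, not code from
Keith's post): count the value bits implied by USHRT_MAX and compare
against the object size in bits.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Number of value bits in unsigned short, derived from USHRT_MAX. */
static int ushort_value_bits(void)
{
    int bits = 0;
    unsigned long max = USHRT_MAX;
    while (max != 0) {
        bits++;
        max >>= 1;
    }
    return bits;
}

int main(void)
{
    if (ushort_value_bits() != (int)(CHAR_BIT * sizeof(unsigned short))) {
        fprintf(stderr, "unsigned short has padding bits\n");
        abort(); /* fail loudly rather than compute wrong answers */
    }
    puts("no padding bits in unsigned short");
    return 0;
}

On the Cray described above, ushort_value_bits() would report 32 while
CHAR_BIT * sizeof(unsigned short) is 64, so the check fires.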
Taste? Everyone I know, seeing "uchar" in type context will read
"unsigned char". You?

Sure, if I see "uchar" I'll *assume* that it's a typedef for "unsigned
char" -- but I won't be 100% sure until I check the declaration. A
programmer might decide that he wants to use 16-bit quantities rather
than 8-bit quantities, and that the easiest way to do it is to change
"typedef unsigned char uchar;" to "typedef unsigned short uchar;".
I'm not saying you'd do such a thing, but I can't be sure that
J. Random Programmer wouldn't.

If you want unsigned char, use unsigned char. If you want a typedef
that might change, use a descriptive name that doesn't imply one
specific predefined type.

Using "unsigned char" rather than "uchar" can avoid confusion. Do you
have an argument in favor of using "uchar"?
 

Keith Thompson

Joe Wright said:
So? Look it up.

If you use "unsigned char", nobody will have to look it up.
Why would you think I meant "U?"?

"U" : "You" :: "uchar" : "unsigned char"

To expand that a bit, both "U" and "uchar" are abbreviations that make
things more difficult for readers with no real benefit.
I don't post often. When I do it is usually in response to a "How can
I do this?" question. I usually respond with a program example of how
to do it. To the extent that I understand the question, the programs I
post are correct. That you and at least one other respond with "I
wouldn't do it that way" seems odd to me. Why would pete tell the
World that Joe Wright uses "typedef unsigned char uchar;" and
shouldn't because pete doesn't like it?
[...]

Presumably because there are valid and objective reasons to prefer
"unsigned char" over "uchar".

BTW, none of this has anything to do with the fact that Joe Wright was
the one who posted this. We're discussing C; don't take it personally.
 

Ben Bacarisse

If you use "unsigned char", nobody will have to look it up.

My head agrees with you, but I must confess to guiltily writing such
typedefs when I think no one will see them. I offer this in defense.

I think of things that affect a program's understandability (a bad word,
but readability is too superficial) as falling into three categories:

(a) There are superficial things like layout, brace and bracket placement,
naming conventions etc. and, yes, if main is declared correctly. These
form a sort of "constant" order complexity (O(1)) in understanding because
no matter how bad, once every single style you can think of has been
abused, that's it. It can't get any worse.

(b) Things like unusual patterns of #include, not using #include "guards",
chains of "shorthand" typedefs and so on. The effect of these on
understandability is, in theory, unbounded but it does not take much
intellectual effort to unravel. These are O(n) complexity issues.

(c) The Really Bad Ones. Pretty much all the hard problems that come from
poor memory management, illogical design, obscure control flow and so on
are much worse than anything that comes from (a) or (b). These can make
for exponential complexity in understanding.

There are exceptions, of course. I think some typedefs help readability
(pointer-to-function types spring to mind) and bad macros can make things
unreadable faster than almost anything else, but because type (c) problems
are hard to discuss in general terms, types (a) and (b) get too much blame
for the harm they can cause.
 

pete

I can do extra work if I have to; I just don't like having to.
Deliberately using a coding style which creates
extra work in reading the code makes no sense to me.
If you use "unsigned char", nobody will have to look it up.


"U" : "You" :: "uchar" : "unsigned char"

Thank you Keith Thompson.

To expand that a bit, both "U" and "uchar" are abbreviations that make
things more difficult for readers with no real benefit.

I use this forum to discuss C.
C coding style is on topic.
Presumably because there are valid and objective reasons to prefer
"unsigned char" over "uchar".

BTW, none of this has anything to do with the fact that Joe Wright was
the one who posted this.
We're discussing C; don't take it personally.

That's what I think.
 

Joe Wright

Keith said:
Yes. I have an account on an old Cray Y-MP vector system (admittedly
not a modern machine). It has:

CHAR_BIT == 8
sizeof(short) == 8 (64 bits)
SHRT_MIN == -2147483648
SHRT_MAX == 2147483647
USHRT_MAX == 4294967295

So short and unsigned short have 32 padding bits.

I don't know whether the same is true on the more modern Cray vector
systems, but I wouldn't bet against it. If you want to write code
that runs on, say, all Unix-like systems, you need to take this into
account (for example, the system in question has Perl 5.6.0, a large
program implemented in C).

If you don't care about portability to such systems, you might consider
arranging for your code to make some sanity checks and abort if they
fail.




Sure, if I see "uchar" I'll *assume* that it's a typedef for "unsigned
char" -- but I won't be 100% sure until I check the declaration. A
programmer might decide that he wants to use 16-bit quantities rather
than 8-bit quantities, and that the easiest way to do it is to change
"typedef unsigned char uchar;" to "typedef unsigned short uchar;".
I'm not saying you'd do such a thing, but I can't be sure that
J. Random Programmer wouldn't.

If you want unsigned char, use unsigned char. If you want a typedef
that might change, use a descriptive name that doesn't imply one
specific predefined type.

Using "unsigned char" rather than "uchar" can avoid confusion. Do you
have an argument in favor of using "uchar"?
I used "uchar" and the others, especially "ullong" to save keystrokes
and to improve readability. And I think it does that.

That you don't think so is neither here nor there. I do not post here to
instruct people how they must do something, but how they might do it.

I take my C programming very seriously. If I post code here which is
'wrong' I will appreciate very much being corrected.

I know I seem to be taking all this a little too personally. Perhaps a
persecution complex. I'll get over it.
 

pete

Joe said:
Keith Thompson wrote:
I used "uchar" and the others, especially "ullong" to save keystrokes
and to improve readability. And I think it does that.

Saving keystrokes is OK.
The point of contention is readability.
That you don't think so is neither here nor there.
I do not post here to
instruct people how they must do something, but how they might do it.

The best way to write code is very on topic,
regardless of whether or not there actually is a best way.
Right you are too. I'll try to do it that way from now on.

My real preferred way of doing that
is to have n be an unsigned integer type
and to use the inequality operator,
but I didn't want to be too pushy.

I've promoted that way of looping through an array
here on more than one occasion.

http://groups.google.com/group/comp.lang.c/msg/c0103a58a6d6e4e0

The first time that our friend CBFalconer
noticed my (n-- != 0) stepping through an array,
as I recall, he really didn't like it.
I can't recall any words from that thread to google on though.

It took him a while to get used to it.

http://groups.google.com/group/comp.lang.c/msg/76918442af5e6884

In the above post,
Lawrence Kirby said that the (n-- != 0) way
wasn't his favorite.
I didn't have anything else to say
that I hadn't already mentioned elsethread,
so I didn't reply to it.
Even though I can't claim that the method is indisputably the best,
I can still discuss it and say why *I* think it is.

By the time that the "Implementing my own memcpy" thread
came up, CBFalconer had come around.

http://groups.google.com/group/comp.lang.c/msg/758f034e126b05cb
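
For reference, a small sketch of the idiom pete is describing, with n an
unsigned type and (n-- != 0) as the loop test (the example itself is
mine, not from the linked posts):

#include <stdio.h>
#include <stddef.h>

static void print_all(const int *a, size_t n)
{
    while (n-- != 0)          /* runs exactly n times; safe for unsigned n */
        printf("%d\n", a[n]); /* visits the last element first, a[0] last */
}

int main(void)
{
    int a[] = {10, 20, 30};
    print_all(a, sizeof a / sizeof a[0]);
    return 0;
}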
 

CBFalconer

pete said:
.... snip ...

The first time that our friend CBFalconer
noticed my (n-- != 0) stepping through an array,
as I recall, he really didn't like it.
I can't recall any words from that thread to google on though.

It took him a while to get used to it.

I don't recall that. I would be more likely to omit the "!= 0"
portion, though. If n were a pointer, I would still object.

.... snip ...
By the time that the "Implementing my own memcpy" thread
came up, CBFalconer had come around.

http://groups.google.com/group/comp.lang.c/msg/758f034e126b05cb

I'm offline, so can't use that to refresh any memories. I never
"come around". I may occasionally expand my horizons. Extreme
crusty dogmatism is the watchword here.

 

Keith Thompson

Joe Wright said:
I used "uchar" and the others, especially "ullong" to save keystrokes
and to improve readability. And I think it does that.

Saving keystrokes isn't much of a virtue. If I thought it actually
improved readability, I'd agree that it's a good idea -- just as I
wouldn't mind people writing "u" for "you" if it actually improved
readability.
That you don't think so is neither here nor there. I do not post here
to instruct people how they must do something, but how they might do
it.

And I haven't said that your code is incorrect, merely that its style
makes it more difficult to read. In this particular case (unlike some
other style points) I happen to have some reasonably objective
arguments to back up my opinion; I won't repeat them here.
I take my C programming very seriously. If I post code here which is
'wrong' I will appreciate very much being corrected.

Of course, but correctness isn't the only criterion for good code,
especially for code posted here. Code is read more often than it's
written, and *if* pete and I are typical, you might consider adjusting
your style to something that's more easily read.
I know I seem to be taking all this a little too personally. Perhaps a
persecution complex. I'll get over it.

Good (seriously). And let me say one more time that there's
absolutely nothing personal in any of this.
 
