integers to binary

rayw

I've been trying to write a program that converts an entered number into its
binary representation using three different techniques - bit shifting, mod
2, and itoa..

However, I'm getting 'snow blind', and can't get the non-ISO itoa to work as
I'd like. The snow blindness makes me think it's me, and not the function!

The idea of the program was to be able to change #define I_TYPE char to any
scalar type, re-compile, and then play. Limited success so far though, and
I'm sure the code could be 'better'.

Any constructive comments welcome.

#include <stdio.h >
#include <stdlib.h>
#include <string.h>
#include <limits.h>


#define I_TYPE char

#define K sizeof(I_TYPE) * CHAR_BIT


// Forward decs.
//
char * binary1(I_TYPE n);
char * binary2(I_TYPE n);
char * binary3(I_TYPE n);
char * strrev (char * str);



int main(void)
{
    char buffer[100];

    puts("Enter numbers to convert to binary, or hit enter to quit");

    while(strlen(fgets(&buffer[0], sizeof(buffer), stdin)) > 1)
    {
        I_TYPE l = (I_TYPE)strtol(&buffer[0], NULL, 10);

        puts(binary1(l));

        puts(binary2(l));

        puts(binary3(l));
    }

    return 0;
}



char * binary1(I_TYPE n)
{
    // Loop counter.
    //
    int i = 0;

    // Bit-mask.
    //
    unsigned long int j = 1;

    // 'static' essential - buffer is a local variable!!
    //
    static char buffer[K + 1];

    buffer[sizeof(buffer) - 1] = '\0';

    for(i = 0; i < K; i++)
    {
        buffer[i] = (n & j) == j ? '1' : '0';

        j = j << 1;
    }

    return strrev(buffer);
}



char * binary2(I_TYPE n)
{
    int i;

    // 'static' essential - buffer is a local variable!!
    //
    static char buffer[K + 1];

    buffer[sizeof(buffer) - 1] = '\0';

    for(i = K - 1; i >= 0; n /= (unsigned long int)2)
    {
        buffer[i--] = n % 2 == 0 ? '0' : '1';
    }

    return buffer;
}



char * binary3(I_TYPE n)
{
    // 'static' essential - buffer is a local variable!!
    //
    static char buffer[K + 1];

    buffer[sizeof(buffer) - 1] = '\0';

    // itoa() not a standard ISO function.
    //
    itoa(n, buffer, 2);

    return buffer;
}



char * strrev(char * s)
{
    char * p1;
    char * p2;

    if (!s || !*s)
    {
        return s;
    }

    else

    {
        for(p1 = s, p2 = s + strlen(s) - 1; p2 > p1; ++p1, --p2)
        {
            *p1 ^= *p2;
            *p2 ^= *p1;
            *p1 ^= *p2;
        }
    }

    return s;
}
 
pete

rayw wrote:
However, I'm getting 'snow blind',
and can't get the non-ISO itoa to work as I'd like.
The snow blindness makes me think it's me, and not the function!
// itoa() not a standard ISO function.
//
itoa(n, buffer, 2);

This is the wrong newsgroup to ask about your itoa.
Doesn't your library have any documentation?
 
rayw

pete said:
This is the wrong newsgroup to ask about your itoa.
Doesn't your library have any documentation?

Yes, it does, but I was hoping to get some theory here as to what it's
doing: I don't have the source.

For example, if you run the program 'as posted' and enter 255 for the value,
the output is:

[compiler 1: gcc]
255
11111111
11111111
11111111111111111111111111111111

[compiler 2: msvc]
255
11111111
11111111
11111111111111111111111111112111

If you enter something like 128, you get this

[compiler 1: gcc]
128
10000000
10000000
11111111111111111111111110000000

[compiler 2: msvc]
128
10000000
10000000
11111111111111111111111110001000

but for 127, you get this

[compiler 1: gcc]
127
01111111
01111111
1111111

[compiler 2: msvc]
127
01111111
01111111
1111111

So, I *think* there's 'weird stuff going on' - and that's about as
descriptive as I can get.

btw, on trying to debug the above, I noticed that msvc's itoa overwrites the
buffer, e.g., for 255 it returns

11111111111111111111111111111111

<--- 32 --->

So, I altered my code in binary3 to set the buffer larger:

static char buffer[sizeof(unsigned long int) * CHAR_BIT + 1];

That's cured most of the weirdness, but I'd like to understand what's going
on when you use an initial value like 128 - like it's using signed long ints
internally or something?

[from *both* msvc and gcc now]
128
10000000
10000000
11111111111111111111111110000000

Any clues as to what itoa might be doing?
 
Pieter Droogendijk

I've been trying to write a program that converts an entered number into its
binary representation using three different techniques - bit shifting, mod
2, and itoa..

However, I'm getting 'snow blind', and can't get the non-ISO itoa to work as
I'd like. The snow blindness makes me think it's me, and not the function!

See below for comments about itoa().
The idea of the program was to be able to change #define I_TYPE char to any
scalar type, re-compile, and then play. Limited success so far though, and
I'm sure the code could be 'better'.

Why, yes. Yes it can.
Any constructive comments welcome.

Alright. I hope I haven't missed anything.
#include <stdio.h >
#include <stdlib.h>
#include <string.h>
#include <limits.h>


#define I_TYPE char

Alright. I'll play. No comment, given that I_TYPE can change.
#define K sizeof(I_TYPE) * CHAR_BIT

'K' is not a very descriptive identifier. I wouldn't know what it's for by
looking at it.

Also, I_TYPE could contain padding bits and a sign bit. Perhaps you should
do something with xxx_MAX instead? I'll let you work on it.
char * binary1(I_TYPE n);
char * binary2(I_TYPE n);
char * binary3(I_TYPE n);
char * strrev (char * str);
int main(void)
{
char buffer[100];

puts("Enter numbers to convert to binary, or hit enter to quit");

while(strlen(fgets(&buffer[0], sizeof(buffer), stdin)) > 1)

Bang!

What if fgets() fails? It's going to return NULL when it does. Don't you
want to know?

Oh, and strlen(NULL) is undefined.
{
I_TYPE l = (I_TYPE)strtol(&buffer[0], NULL, 10);

As I said before, l is a char at the moment, though it could be anything.
strtol's return type doesn't fit in there unless you're very very lucky,
because strtol() always returns a long.

strtol() also has this neat error reporting thing. You should use it.

Here's how to 'fix' this thing:
- l should be a long. Remove the cast.
- Check for errors (is l negative? Does errno equal ERANGE and is l
LONG_MIN or LONG_MAX?)
- Check if the value of l fits in an I_TYPE.
- Assign l to an I_TYPE.
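
Something along these lines, perhaps (untested; it keeps I_TYPE as char, so
CHAR_MIN/CHAR_MAX are the matching limits and would have to change together
with the #define, and printf stands in for the binary*() calls):

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define I_TYPE char

/* Sketch only: read lines and parse each one with the checks listed above. */
int main(void)
{
    char buffer[100];

    while (fgets(buffer, sizeof buffer, stdin) != NULL && buffer[0] != '\n') {
        char *end;
        long l;

        errno = 0;
        l = strtol(buffer, &end, 10);

        if (end == buffer) {
            puts("not a number");
        } else if (errno == ERANGE || l < CHAR_MIN || l > CHAR_MAX) {
            puts("out of range for I_TYPE");
        } else {
            I_TYPE value = (I_TYPE)l;     /* narrowing is now known to be safe */
            printf("parsed %ld\n", (long)value);
        }
    }
    return 0;
}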
puts(binary1(l));
puts(binary2(l));
puts(binary3(l));
}
return 0;
}

You use a lot of blank lines that serve no purpose I can detect.
char * binary1(I_TYPE n)
{
// Loop counter.
//

C++-style comments were added to the C standard in C99 (and have
been available as a compiler extension in many compilers for years
before it was published). However, C99 is not quite the leading standard
just yet. Also, these comments can cause painful things on Usenet, though
I'll admit this comment isn't quite long enough for that. Perhaps after
being quoted in follow-ups a few dozen times.
int i = 0;

Why not remove the comment and call it loop_counter?
// Bit-mask.
//
unsigned long int j = 1;

Why not remove the comment and call it bit_mask?

More comments on 'j' below.
// 'static' essential - buffer is a local variable!!
//
static char buffer[K + 1];

I'm a bit confused about this. Did you just declare buffer with static
storage duration because it's been defined in main() already? Because you
really don't have to. Read your C book up to and including the topic about
'scope'.
buffer[sizeof(buffer) - 1] = '\0';
for(i = 0; i < K; i++)
{
buffer[i] = (n & j) == j ? '1' : '0';


This expression could be nicer. How's this:

if ( (n & j) == j ) {
    buffer[i] = '1';
}
else {
    buffer[i] = '0';
}
j = j << 1;

I can imagine what would happen if I_TYPE is wider than an unsigned long
int. Perhaps you should make j an I_TYPE? That would make sense, wouldn't
it?
}
return strrev(buffer);

You know how long the string's going to be in advance. Why not just fill
it the other way around (from K to 0 instead of from 0 to K)?

Though, okay, it's another method of converting it. I'll play.
char * binary2(I_TYPE n)
{
int i;

// 'static' essential - buffer is a local variable!!
//
static char buffer[K + 1];

Again with this mysterious static, and the even more mysterious comment...
buffer[sizeof(buffer) - 1] = '\0';

for(i = K - 1; i >= 0; n /= (unsigned long int)2)

How about:
for (i = K-1; i >= 0; i--)

That keeps the loop nice and central. You'll have to change some other
things too to make the loop work again, but I'll leave those to you.
{
buffer[i--] = n % 2 == 0 ? '0' : '1';

See the last comment about awful expressions. Conditional expressions
should generally be avoided (IMHO).
}
return buffer;
}
char * binary3(I_TYPE n)
{
// 'static' essential - buffer is a local variable!!
//
static char buffer[K + 1];
Repeat.

buffer[sizeof(buffer) - 1] = '\0';

// itoa() not a standard ISO function.
//
itoa(n, buffer, 2);

Indeed it isn't. I don't have it.

Though according to http://www.mkssoftware.com/docs/man3/itoa.3.asp this
should work, unless I've gone blind.

What does it make of it? What do you /expect/ it to make of it?
return buffer;
}
char * strrev(char * s)
{
char * p1;
char * p2;

if (!s || !*s)
{
return s;
}
else
{
for(p1 = s, p2 = s + strlen(s) - 1; p2 > p1; ++p1, --p2)
{
*p1 ^= *p2;
*p2 ^= *p1;
*p1 ^= *p2;

Though the swapping works, there's nothing wrong with just using a
temporary. I don't really know of a really good reason to do it this way.
 
rayw

Pieter Droogendijk said:
See below for comments about itoa().


Why, yes. Yes it can.

I'm a bit confused about this. Did you just declare buffer with static
storage duration because it's been defined in main() already? Because you
really don't have to. Read your C book up to and including the topic about
'scope'.

Each binary?()'s local buffer is declared static as each of them returns
&buffer[0] to the caller - so, nothing to do with scope, I just don't want to
return a pointer to a frame variable.
buffer[sizeof(buffer) - 1] = '\0';
for(i = 0; i < K; i++)
{
buffer[i] = (n & j) == j ? '1' : '0';


This expression could be nicer. How's this:

if ( (n & j) == j ) {
    buffer[i] = '1';
}
else {
    buffer[i] = '0';
}


I agree that to some it's more readable, but as it's more lines of code, and
duplicates the assignment etc, I think it's more prone to errors later -
like if someone changes it.

Indeed it isn't. I don't have it.

Though according to http://www.mkssoftware.com/docs/man3/itoa.3.asp this
should work, unless I've gone blind.

Thanks for the ref and the comments.
 
Chuck F.

rayw said:
I've been trying to write a program that converts an entered
number into its binary representation using three different
techniques - bit shifting, mod 2, and itoa..

However, I'm getting 'snow blind', and can't get the non-ISO
itoa to work as I'd like. The snow blindness makes me think
it's me, and not the function!

The idea of the program was to be able to change #define I_TYPE
char to any scalar type, re-compile, and then play. Limited
success so far though, and I'm sure the code could be 'better'.

Any constructive comments welcome.

.... snip long winded code etc. ...

The following, which includes a test routine enabled by defining
TESTING at compile time, will convert any unsigned integral types
to a string on the output file. To handle signed values just
output a - sign and then use putword for the negation. Watch out
for maximum values with 2's complement.

#include <stdio.h>

int putword(unsigned long w, FILE *f)
{
    if (w > 9)
        if (0 > putword(w / 10, f)) return EOF;
    return putc((w % 10) + '0', f);
} /* putword */

/* --------------- */

#ifdef TESTING

#include <stdlib.h>

int main(void)
{
    int i;

    for (i = 0; i < 10; i++) {
        putword(i, stdout);
        putc(' ', stdout);
        putword(rand(), stdout);
        putc('\n', stdout);
    }
    return 0;
} /* main */
#endif
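
For the signed case, a sketch of a front end that dodges the 2's complement
minimum by negating in unsigned arithmetic (putsword is just a name for the
example; it goes alongside putword above):

#include <stdio.h>

int putword(unsigned long w, FILE *f);   /* the routine above */

/* Sketch: signed front end.  Converting to unsigned long and negating
   there is well defined even for LONG_MIN, where plain -w overflows. */
int putsword(long w, FILE *f)
{
    unsigned long u = (unsigned long)w;

    if (w < 0) {
        if (0 > putc('-', f)) return EOF;
        u = 0UL - u;   /* magnitude of w, computed without overflow */
    }
    return putword(u, f);
} /* putsword */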
 
Netocrat

Yes, it does, but I was hoping to get some theory here as to what it's
doing: I don't have the source.

You haven't described the prototype, but likely the first parameter is an
int, to which your char would be converted, and which would require an
appropriately sized buffer...
So, I altered my code in binary3 to set the buffer larger:

static char buffer[sizeof(unsigned long int) * CHAR_BIT + 1];

That's cured most of the weirdness

....as you discovered.

Reading the documentation carefully will probably help explain any other
unexpected behaviour.

One (likely unrelated) error in your code is the space between the end
of "stdio.h" and the closing ">" in the #include directive.

Make sure you're compiling in standards mode with warnings enabled.

[...]
 
Mark McIntyre

Pieter Droogendijk said:
buffer[i] = (n & j) == j ? '1' : '0';


This expression could be nicer. How's this:

if ( (n & j) == j ) {
    buffer[i] = '1';
}
else {
    buffer[i] = '0';
}


I agree that to some it's more readable, but as it's more lines of code, and
duplicates the assignment etc, I think it's more prone to errors later -
like if someone changes it.


IME readable code is considerably easier to maintain than unreadable
code. Write for readability first. Consider clever tricks only if
trying to obfuscate code.

if ( (n & j) == j )
    buffer[i]='1';
else
    buffer[i]='0';
 
rayw

One (likely unrelated) error in your code is the space between the end
of "stdio.h" and the closing ">" in the #include directive.

That's an error? I thought the preprocessor ignored such things?
 
Netocrat

That's an error? I thought the preprocessor ignored such things?

"<" and ">" are delimiters. Everything between them is interpreted as the
header name, including the space. The same applies when double quotes are
used as the delimiters. (a newline character within the header name
delimiters is invalid though)
 
Old Wolf

rayw wrote:

[n is a char]
Yes, it does, but I was hoping to get some theory here as to
what it's doing: I don't have the source.

The relevant line in the documentation is:

char *itoa(int value, char *string, int radix);

and in particular, that itoa is expecting 'value' to be an int.
For example, if you run the program 'as posted' and enter
255 for the value, the output is:

[compiler 1: gcc]
11111111111111111111111111111111

[compiler 2: msvc]
11111111111111111111111111112111

If you enter something like 128, you get this
[compiler 1: gcc]
11111111111111111111111110000000

[compiler 2: msvc]
11111111111111111111111110001000

but for 127, you get this
[compiler 1: gcc]
1111111
[compiler 2: msvc]
1111111

So, I *think* there's 'weird stuff going on' - and that's about as
descriptive as I can get.

128 and 255 are not valid values for 'char'. char values are in
the range -128 to 127 [*]. You cause your program to be
unreliable when you assign out-of-range values to chars.

Here is a fuller description of what is happening, please bear
in mind that this behaviour is peculiar to some systems only
(eg. IA32), and doesn't hold in the general case.
When you assign the int 128 to your char, the compiler tries
to squish 0x00000080 into eight bits, which results in 0x80,
which is the value -128 when used as the bit-pattern for a char.
So the itoa function receives the value of -128. To find out
what itoa does when you give it a negative number you will
have to read your compiler's documentation (mine has an
itoa function but it only specifies negative number behaviour
when radix is 10).

Based on your output I'd say that it is printing the bit pattern
of the negative number, and in binary the int -128 is indeed
11111111111111111111111110000000 .
Since your buffer is less than 33 bytes, anything could
happen really, you're lucky that GCC even managed to
not screw up the end of the buffer, although MSVC clearly
did (as it is entitled to, of course).

A similar explanation applies to the 255 case (which gets
the value -1, and has bit-pattern of all 1s).

[*] In general they can be in a variety of ranges, and the actual
range is specified by CHAR_MIN and CHAR_MAX in limits.h .
But it's clear from your post that your system has this range.
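
A few lines show the conversion in isolation (just a sketch; the printed
values assume the signed 8-bit char and 32-bit int of the systems above):

#include <stdio.h>

int main(void)
{
    char c255 = 255;   /* out of range: implementation-defined, typically -1   */
    char c128 = 128;   /* out of range: implementation-defined, typically -128 */

    printf("255 stored in a char: %d\n", c255);   /* -1   */
    printf("128 stored in a char: %d\n", c128);   /* -128 */
    printf("their unsigned bit patterns: %#x %#x\n",
           (unsigned int)c255, (unsigned int)c128);
    /* 0xffffffff and 0xffffff80 -- the 32-digit strings in the output above */
    return 0;
}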
 
Old Wolf

rayw said:
I've been trying to write a program that converts an entered number
into its binary representation using three different techniques - bit
shifting, mod 2, and itoa..

See my other post explaining about your use of itoa.
I have some other comments on your code which
hopefully you will find helpful.
#include <stdio.h >

No space !
#include <limits.h>
#include <stdlib.h>
#include <string.h>
#define I_TYPE char

Most importantly, you should use an unsigned type
for anything that you are planning to do bit-shifts on,
because signed types cause trouble as you have seen.

It is usually better to use a typedef:
typedef unsigned int I_TYPE;

Also you will find it easier to start off with an int, and
then work down to a char (everything in C is designed
to work best with ints).
#define K sizeof(I_TYPE) * CHAR_BIT

// Forward decs.
//
char * binary1(I_TYPE n);
char * binary2(I_TYPE n);
char * binary3(I_TYPE n);
char * strrev (char * str);

Strictly speaking you should not call your functions anything
starting with str followed by a lower case letter, because
it might clash with future updates to string.h .
int main(void)
{
char buffer[100];
puts("Enter numbers to convert to binary, or hit enter to quit");

while(strlen(fgets(&buffer[0], sizeof(buffer), stdin)) > 1)

fgets() returns NULL on failure.. you'll need to re-arrange this test.
Also you can just write 'buffer' everywhere you have &buffer[0],
it means exactly the same thing in most contexts.
{
I_TYPE l = (I_TYPE)strtol(&buffer[0], NULL, 10);

Don't use this cast. It doesn't gain you anything.
puts(binary1(l));
puts(binary2(l));
puts(binary3(l));
}

return 0;
}

char * binary1(I_TYPE n)
{
int i = 0;
unsigned long int j = 1;
static char buffer[K + 1];

buffer[sizeof(buffer) - 1] = '\0';
for(i = 0; i < K; i++)
{
buffer[i] = (n & j) == j ? '1' : '0';
j = j << 1;


This will give you unexpected results if n is negative -- which is
why I_TYPE should be unsigned.

Also it's possible to be more concise (and clearer IMHO):

buffer[i] = (n & j) ? '1' : '0';
j <<= 1;
}

return strrev(buffer);
}

char * binary2(I_TYPE n)
{
int i;
static char buffer[K + 1];
buffer[sizeof(buffer) - 1] = '\0';

for(i = K - 1; i >= 0; n /= (unsigned long int)2)

You can write 2UL instead of the cast notation.
buffer[i--] = n % 2 == 0 ? '0' : '1';
return buffer;
}

char * strrev(char * s)
{
char * p1;
char * p2;

if (!s || !*s)
return s;
else
for(p1 = s, p2 = s + strlen(s) - 1; p2 > p1; ++p1, --p2)
{
*p1 ^= *p2;
*p2 ^= *p1;
*p1 ^= *p2;
}

Please don't do that: it just confuses people who haven't
seen it before. Far clearer is:

char tmp = *p1;
*p1 = *p2;
*p2 = tmp;
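
Putting the suggestions above together - an unsigned typedef, a mask of the
same type, and a buffer filled from the top so strrev() isn't needed at
all - gives something like this sketch (untested; to_binary and WIDTH are
just names for the example):

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned int I_TYPE;               /* unsigned, as suggested above */
#define WIDTH (sizeof(I_TYPE) * CHAR_BIT)

/* Sketch: fill the buffer from the high end, so no reversal is needed. */
char *to_binary(I_TYPE n)
{
    static char buffer[WIDTH + 1];         /* static: it's returned to the caller */
    I_TYPE mask = 1;
    size_t i;

    buffer[WIDTH] = '\0';
    for (i = 0; i < WIDTH; i++) {
        buffer[WIDTH - 1 - i] = (n & mask) ? '1' : '0';
        mask <<= 1;
    }
    return buffer;
}

int main(void)
{
    puts(to_binary(128));   /* 128 in binary, padded to the width of unsigned int */
    return 0;
}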
 
rayw

Old Wolf said:
.... snip explanation of char ranges and itoa behaviour ...

Many thanks - clear now!
 
rayw

Old Wolf said:
.... snip code review comments ...

Many thanks - v.useful comments
 
Simon Biber

Niklas said:
Mark McIntyre said:
buffer[i] = (n & j) == j ? '1' : '0';

if ( (n & j) == j )
    buffer[i]='1';
else
    buffer[i]='0';


buffer[i] = ((n & j) == j) + '0';

Style issue.


Indeed it is a style issue. However, your contribution is too clever for
its own good. It's correct, but it's harder to read and harder to
understand. Remember that C code is written 10% for the compiler and 90%
for the maintenance programmer. Understanding this code requires some
extra connections to be made:
1. a Boolean expression has an integer value of 0 or 1
2. digits are sequential, ie. '0' + 1 == '1'
While any good C programmer knows these two facts, it takes time and
brain cycles to put it all together, which could be avoided by not being
so clever.
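
The two facts, spelled out (a sketch with made-up values for n and j):

#include <stdio.h>

int main(void)
{
    unsigned n = 5, j = 4;                    /* hypothetical values: bit 2 is set */

    printf("%d\n", (n & j) == j);             /* fact 1: a relational result is 0 or 1 */
    printf("%c\n", ((n & j) == j) + '0');     /* fact 2: 1 + '0' is '1'                */
    return 0;
}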
 
pete

Simon said:
Niklas said:
Mark McIntyre said:
if ( (n & j) == j )
    buffer[i]='1';
else
    buffer[i]='0';


buffer[i] = ((n & j) == j) + '0';

Style issue.


Indeed it is a style issue. However,
your contribution is too clever for
its own good. It's correct, but it's harder to read and harder to
understand. Remember that C code is written
10% for the compiler and 90%
for the maintenance programmer. Understanding this code requires some
extra connections to be made:
1. a Boolean expression has an integer value of 0 or 1
2. digits are sequential, ie. '0' + 1 == '1'
While any good C programmer knows these two facts, it takes time and
brain cycles to put it all together,
which could be avoided by not being so clever.


I liked "rayw"'s original version:

buffer[i] = (n & j) == j ? '1' : '0';

That's just about as simple a use of the conditional operator
as there can be.
Some people just don't like the conditional operator,
but I'm not one of them.
 
Tim Rentsch

Mark McIntyre said:
Pieter Droogendijk said:
On Tue, 13 Dec 2005 10:21:09 +0000, rayw wrote:

buffer[i] = (n & j) == j ? '1' : '0';

This expression could be nicer. How's this:

if ( (n & j) == j ) {
    buffer[i] = '1';
}
else {
    buffer[i] = '0';
}


I agree that to some it's more readable, but as it's more lines of code, and
duplicates the assignment etc, I think it's more prone to errors later -
like if someone changes it.


IME readable code is considerably easier to maintain than unreadable
code. Write for readability first. Consider clever tricks only if
trying to obfuscate code.

if ( (n & j) == j )
    buffer[i]='1';
else
    buffer[i]='0';


The original single assignment is precisely the sort of situation
where using the ?: operator makes sense. If Mr. McIntyre says he
finds the if/else form with two assignments more readable, I expect
that's so, but his reaction isn't shared by everyone. IMO the
single assignment using ?: is both more readable and more clear
than using if/else and two assignments.
 
Pieter Droogendijk

I'm a bit confused about this. Did you just declare buffer with static
storage duration because it's been defined in main() already? Because you
really don't have to. Read your C book up to and including the topic about
'scope'.

Each binary?'()s local buffer is declared static as each of them returns
&buffer[0] to the caller - so, nothing to do with scope, just don't want to
return a frame variable.

Whoops.
Forget my comments about it. Erase them from Usenet. It was temporary
insanity. I was tired. It was the dog that done it.
buffer[i] = (n & j) == j ? '1' : '0';


This expression could be nicer. How's this:

if ( (n & j) == j ) {
    buffer[i] = '1';
}
else {
    buffer[i] = '0';
}


I agree that to some it's more readable, but as it's more lines of code, and
duplicates the assignment etc, I think it's more prone to errors later -
like if someone changes it.


Sure it's six lines for what could be one assignment, but it's as clear
as it's going to get. That's what I aim for, unless obfuscation is what I
want.

But each to their own, of course.
Thanks for the ref and the comments.

Google rules.
 
