Mantorok Redgormor
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int mask;
    int a = -1;

    /* Set only the most significant bit of mask. */
    mask = 1u << (CHAR_BIT * sizeof mask - 1);
    while (mask) {
        putchar(a & mask ? '1' : '0');
        mask >>= 1;
    }
    putchar('\n');
    return 0;
}
I want to display the underlying representation of signed
and unsigned integers. I don't think I am invoking undefined
behavior, but if I am, could someone point it out?
Also, bitwise AND only works on integer types, and it is a
convenient way of testing against a mask. So how would one
determine the bits set in a float or double in order to
display its underlying representation in binary?
I thought about using the examples from a similar question
I asked previously, where the answers displayed the bytes in hex.
I was thinking I could instead display each individual byte
in binary, consecutively. But maybe there is a more clever way?