Hello group,
I am working on a compression tool and saw this puzzling bit-shift
behaviour with the VC++ 6.0 compiler.
#include <iostream>
using namespace std;
typedef unsigned char uchar;

// NBITS forms a byte with the specified number of low bits turned on.
#define NBITS(x) ((uchar)(~0u << sizeof(0u)*8-(x) >> sizeof(0u)*8-(x)))

void main()
{
    int k = 0;
    cout << hex << (int) NBITS(k) << endl;
    cout << hex << (int) NBITS(0) << endl;
}
/****output***/
ff
00
/****output ends***/
Interestingly, I am getting two different values for the same input. I
really don't know what's causing it, but I have a hunch that the
compiler is evaluating the expression (for the constant value 0) at
compile time in a different way than the code it generates for the
run-time case.
Thanks in advance,