sara said:
a =100; /* need to convert it to hexa*/
long b=0x0 | a;
It's OK for me now.
You are a bit confused.
Internally, computers always represent numbers in binary. It is not possible
for a human to view this representation directly, because it exists only as
electrical charges, and we cannot see electrons.
So to convert a number to a human-readable representation, you have several
choices. The obvious one is to convert the electronic representation into a
pattern of 1s and 0s displayed on a video screen. Thus when we talk about
"binary representation" we could mean one of three things: the pattern of
electrical charges that the computer uses internally; the pattern of glowing
dots that the user sees on the screen, forming 1s and 0s; or the intermediate
format that the computer uses to go from electrical charges to glowing dots,
normally a representation of the number in ASCII.
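As a sketch of that last step, here is one way a C program might turn a
value's internal bits into the ASCII '1' and '0' characters that eventually
become glowing dots. The 32-bit width and the name print_binary are my own
choices for illustration, not anything in your program:

#include <stdio.h>

/* Turn the 32 low bits of n into the ASCII '1' and '0' characters
   that the display code will eventually draw as glowing dots. */
void print_binary(unsigned long n)
{
    int i;
    for (i = 31; i >= 0; i--)
        putchar(((n >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    print_binary(100);  /* prints 100 as 32 binary digits */
    return 0;
}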
Now binary numbers are quite difficult for the human eye to read, so usually,
instead of outputting 1s and 0s, we output hexadecimal digits, each of which
stands for exactly four bits.
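For example, here is the same 32-bit pattern written both ways (the value is
just an example). Each group of four bits maps to exactly one hex digit, so
the hex form is a quarter of the length:

1101 1110 1010 1101 1011 1110 1110 1111
   D    E    A    D    B    E    E    F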
Since John of Sacrobosco introduced the Arabic numeral system, we in the West
have used a decimal system for representing numbers. C still uses this
convention, so
a = 100; in C means "put the value of a hundred (ten times ten) in variable
a".
However, because programmers often like to know the binary bit pattern of the
numbers they use, C also accepts hexadecimal constants:
a = 0xDEADBEEF; means "put the value 3735928559 in variable a".
The "0x" prefix tells the compiler that we are using the convention base 16
rather than the convention base 10. However, a = 0x10 and a = 16 mean exactly
the same thing.
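A quick sketch you can compile to convince yourself; nothing here is specific
to your program:

#include <stdio.h>

int main(void)
{
    unsigned long a = 0xDEADBEEF;  /* identical to a = 3735928559 */

    if (a == 3735928559UL && 0x10 == 16)
        printf("same values, different spellings\n");
    return 0;
}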
So your second line doesn't make any sort of sense. You are saying "put ten
times ten in variable a", then you are saying "do a bitwise OR with zero, and
put the result in variable b". ORing with zero leaves every bit unchanged, so
you might as well just say "b = a;".
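If what you actually want is to see the value of a in hexadecimal, that is a
job for output formatting, not assignment. A minimal sketch using the
standard printf conversions:

#include <stdio.h>

int main(void)
{
    int a = 100;

    printf("%d\n", a);            /* decimal: prints 100 */
    printf("%x\n", (unsigned)a);  /* hexadecimal: prints 64 */
    return 0;
}

The number stored in a is the same in both cases; only the text that printf
produces differs.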