I've never said I do not want comments on what I'm writing. Comments
are always acceptable...
Well then, stop trying to justify yourself. ;-) ;-)
Wrong. Assuming 'v' is an int, you're invoking undefined behavior
by trying to look at its bits as if they represented a valid 'char'
value. So anything can happen, including nasal demons or wrong
answers.
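(For concreteness, here's a guess at the kind of code under
discussion -- the original isn't quoted above, so the name 'v' and
the exact test are assumptions:)

    #include <stdio.h>

    int main(void)
    {
        int v = 1;
        /* Examining v's bytes through plain char: this is the
           construct objected to above, since plain char may be
           signed and the byte inspected need not hold a valid
           value for it. */
        if (*(char *)&v == 1)
            printf("looks little-endian\n");
        else
            printf("looks big-endian\n");
        return 0;
    }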
For a 32-bit machine, would

    union boo {
        unsigned int i;
        unsigned char c[4];
    };
    union boo a;

and looking at a.c[0] be more acceptable?
Not from the standard's point of view. Let's assume that by
"32-bit machine" you mean "an implementation on which 'CHAR_BIT'
is 8 and 'sizeof(int)' is 4." Even then, INT_MAX could *still*
be 32767, or 65535, or some other value of the form 2^n - 1,
because some of those 32 bits may be padding bits rather than
value bits; indeed, a.c[0] could be composed entirely of padding
bits and thus irrelevant to the "endianness" of the machine
(whatever that means in a padding-bits context).
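(As an aside, whether 'unsigned int' has padding bits is something
you *can* test, by counting the value bits in UINT_MAX -- a sketch,
assuming nothing beyond <limits.h>:)

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* UINT_MAX has every value bit set, so its population
           count is the number of value bits; any shortfall from
           CHAR_BIT * sizeof(unsigned int) is padding. */
        unsigned int m = UINT_MAX;
        unsigned long value_bits = 0;
        while (m != 0) {
            value_bits += m & 1u;
            m >>= 1;
        }
        printf("value bits: %lu, object bits: %lu\n",
               value_bits,
               (unsigned long)(CHAR_BIT * sizeof(unsigned int)));
        return 0;
    }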
Actually, it would be a *little* more "acceptable"; now that
you've switched to 'unsigned char', the Standard guarantees that
'unsigned char' has no trap representations. Thus what you have
now is either implementation-defined or unspecified behavior,
depending on your interpretation of the Standard (I think); but
not undefined behavior, as it was before.
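(So the union version might look like this -- a sketch under the
same "CHAR_BIT is 8, sizeof(int) is 4" assumption; which byte ends
up in c[0] remains implementation-defined:)

    #include <stdio.h>

    int main(void)
    {
        union boo {
            unsigned int i;
            unsigned char c[4];    /* assumes sizeof(int) == 4 */
        } a;

        a.i = 1u;
        /* Reading a.c[0] inspects the object representation.
           unsigned char has no trap representations, so the read
           itself is safe; but the value you see depends on the
           implementation's byte order and any padding bits. */
        printf("c[0] == %u\n", (unsigned)a.c[0]);
        return 0;
    }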
Anyway, I guess the basic point is that you haven't fully defined
what you mean by "endianness." Sure, there are two or three obvious
"endian" cases, but how do you classify the big gray area that
includes padding bits and weird bit orders?
If you assume as a prerequisite that the implementation *must*
have either the 1234 or the 4321 byte order, and no padding bits,
then your program works perfectly; see the sketch below. But
standard C doesn't grant that assumption -- it allows enough
latitude (padding bits, other byte orders) that it is actually
impossible for strictly portable code to classify every
conforming 'int' as big- or little-endian: the two categories
simply don't cover all the representations the Standard permits.
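(Under those extra assumptions, a sketch of a check that does
work -- 1234 meaning most-significant byte first:)

    #include <stdio.h>

    int main(void)
    {
        /* Only valid under the assumptions above: CHAR_BIT == 8,
           sizeof(int) == 4, no padding bits, and a byte order of
           exactly 1234 or 4321. */
        unsigned int x = 0x01020304u;
        unsigned char *p = (unsigned char *)&x;

        if (p[0] == 0x01)
            printf("big-endian (1234)\n");
        else if (p[0] == 0x04)
            printf("little-endian (4321)\n");
        else
            printf("neither -- an assumption was violated\n");
        return 0;
    }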
HTH,
-Arthur