Cast from int to unsigned char in powerpc arch

nightolo

Hi all,
first of all I have to say that I'm not a native English speaker, so
please excuse my poor English.

I ran into a problem writing an application in a PowerPC environment;
here's the problem:
A long time ago I learned that powerpc uses a big-endian
representation and x86 uses a little-endian one.

For example, an int holding 0x2122 is stored as the byte sequence
00 00 21 22 on ppc (big-endian) and as 22 21 00 00 on x86
(little-endian). I think this theory is right; I tested it with this code:

#include <stdio.h>

int main(void)
{
    int i = 0x11223344, e;
    unsigned char *p;

    /* walk the four bytes of i in memory order (assumes a 4-byte int) */
    for (e = 0; e != 4; e++) {
        p = (unsigned char *)&i + e;
        printf("-%x- ", *p);
    }
    printf("\n");
    return 0;
}

It prints different values on a powerpc and on an x86 arch, just as
the theory says.

So, if I write this piece of code on x86:

int tmp;
tmp = 0x2122;
printf("casted tmp = %x\n", (unsigned char) tmp);

it prints 0x22, which seems OK, because 22 is the first byte in memory
(little-endian order).
But if I write the same code and compile it on a powerpc arch I get the
*same* value!! That's strange; I should get 0x21, shouldn't I?

I don't know; the manual says that htons and htonl expand to null
macros there, and that's fine, we don't need to convert big-endian
into big-endian, but with a cast I get this strange behaviour.
Probably my theory is mistaken, especially about casts.
Any suggestions?
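
(For reference, a minimal sketch of what I mean about htonl; it
assumes a POSIX system with <arpa/inet.h>, so details may differ
elsewhere:)

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    /* on big-endian htonl is effectively a no-op and this prints 2122;
       on little-endian it swaps the bytes and prints 22210000 */
    printf("htonl(0x2122) = %lx\n", (unsigned long) htonl(0x2122));
    return 0;
}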

Thanks in advance,
Antonio
 
Lew Pitcher

[snip]

So, if I write this piece of code on x86:

int tmp;
tmp = 0x2122;
printf("casted tmp = %x\n", (unsigned char) tmp);

it prints 0x22, which seems OK, because 22 is the first byte in memory
(little-endian order).

Actually, endian order has nothing to do with the results you get from /that/
code fragment.
But if I write the same code and compile it on a powerpc arch I get the
*same* value!! That's strange; I should get 0x21, shouldn't I?

No. From /that/ code fragment, you should get 0x22 whenever CHAR_BIT == 8:
converting to unsigned char reduces the value modulo 2^CHAR_BIT, and
0x2122 % 0x100 == 0x22, regardless of byte order.

What you want to try is this...

{
    int tmp = 0x2122;

    /* prints the first byte of tmp as it sits in memory:
       0 on big-endian, 22 on little-endian */
    printf("tmp trimmed = %x\n", *((unsigned char *)&tmp));
}

The reason your code always prints 22 is that the printf() statement is
evaluating the numerical value of the tmp variable, not its arrangement in storage.

- tmp contains the number 0x2122
- casting tmp to unsigned char causes the value to undergo high-order
truncation, retaining only as much of the value as can fit in one unsigned
char. In your case, the 0x22 part fits in the unsigned char, and the rest
(the overflow) is discarded
- you then print this value.

What you need to do is to write code to evaluate the individual char-sized
components of the tmp variable. You can do this through a pointer, an array, or
a union. The example above uses a pointer.
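
For completeness, here's a minimal sketch of the union variant (the
names are illustrative, and it assumes a 4-byte int with CHAR_BIT == 8):

#include <stdio.h>

/* overlay an int with an array of its bytes */
union int_bytes {
    int value;
    unsigned char bytes[sizeof(int)];
};

int main(void)
{
    union int_bytes u;
    size_t k;

    u.value = 0x2122;

    /* bytes[] shows the storage order: 00 00 21 22 on a big-endian
       machine, 22 21 00 00 on a little-endian one */
    for (k = 0; k < sizeof u.bytes; k++)
        printf("%02x ", u.bytes[k]);
    printf("\n");

    return 0;
}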

[snip]


--
Lew Pitcher
IT Consultant, Enterprise Data Systems,
Enterprise Technology Solutions, TD Bank Financial Group

(Opinions expressed are my own, not my employers')
 
Mark A. Odell

[snip]

So, if I write this piece of code on x86:

int tmp;
tmp = 0x2122;
printf("casted tmp = %x\n", (unsigned char) tmp);

it prints 0x22, which seems OK, because 22 is the first byte in memory
(little-endian order).
But if I write the same code and compile it on a powerpc arch I get the
*same* value!! That's strange; I should get 0x21, shouldn't I?

Of course, and no, you shouldn't get the wrong answer. The people who write
compilers for big-endian machines know how to make the C language work the
same as it does on little-endian machines. C is a portable language. You
might be thinking of pointers; there, the platform-specific nature of
endianness can be visible to the programmer.
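
A small sketch contrasting the two views (illustrative, assuming
CHAR_BIT == 8 and a 4-byte int): the cast works on the value and gives
0x22 everywhere, while a byte pointer exposes the storage order:

#include <stdio.h>

int main(void)
{
    int tmp = 0x2122;

    /* value-level: conversion to unsigned char is 0x2122 % 0x100,
       so this prints 22 on any architecture */
    printf("cast:       %x\n", (unsigned char) tmp);

    /* storage-level: the byte at the lowest address is 0 on a
       big-endian machine and 22 on a little-endian one */
    printf("first byte: %x\n", *(unsigned char *)&tmp);

    return 0;
}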
 
nightolo

Lew said:
What you need to do is to write code to evaluate the individual char-sized
components of the tmp variable. You can do this through a pointer, an array, or
a union. The example above uses a pointer.
Ok, thanks a lot, it's my mistake =)

Antonio
 
