duylam76
I'm programming a server and a corresponding client application that
send data to each other over a socket. I want to encode an integer to
send over the socket. Right now everything is fine because everyone
using our app has integers encoded as 32 bits. I'm not sure what
might happen should someone use a machine/OS that uses 64-bit
integers, however.
For instance, say that I know for certain that our server uses 32-bit
integers. I can send an integer across the socket as 4 bytes and read
it on the client side directly by dereferencing an int pointer
pointing at the first of the four bytes read off. If the client side
uses 64 bits for an integer, however, directly reading it through an
int pointer won't really work (the pointer dereference will look for
8 bytes instead of 4).
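To make that concrete, here's a minimal sketch of the pattern I mean
(socket setup and error handling omitted; "sock" is assumed to be a
connected socket descriptor, and the function names are just for
illustration):

#include <sys/socket.h>

/* Sender: write the raw bytes of a native int.
 * Sends sizeof(int) bytes -- 4 on a machine with 32-bit ints. */
void send_int(int sock, int value)
{
    send(sock, &value, sizeof value, 0);
}

/* Receiver: reinterpret the first sizeof(int) bytes of the buffer.
 * If the receiver's int is 8 bytes, this reads 4 bytes beyond the
 * data that was actually sent. */
int read_int(const char *buffer)
{
    return *((const int *) buffer);
}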
Since I don't have a 64-bit machine to test this on, for theoretical
purposes I want to run a similar test to see what would happen if I
instead encoded my integers as 16 bits (and read them as a 32-bit
int).
How do I read a 16-bit block (a block within a simple byte array) and
store the number as a 32-bit integer?
For instance, say I have
char* buffer; // pointer to char buffer that stores some encoded data
int a;        // the integer I want to store it in
.... and I know that the first two bytes in "buffer" contain the
encoding for a 16-bit integer. I want to read the first two bytes and
then store the value into the integer "a".
If the first FOUR bytes in "buffer" stored the int, I would be able to
do it easily:
a = *((int*) buffer);
So how would I be able to do this if only TWO bytes represented the
integer? Is there an entirely better way to read off a value other
than dereferencing an integer pointer? (Again, I'm only doing 16 and
32 because I can test those directly on my machine. But since my real
goal is to handle 32 and 64, any other advice regarding 32-bit or
64-bit data formats might be helpful too.)
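For what it's worth, the closest I've come is copying the two bytes
into a 16-bit type first and letting the assignment widen the value.
A sketch of the idea, assuming the sender wrote the value in network
byte order with htons (that's an assumption on my part):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Read a 16-bit big-endian value from buffer, widen to 32 bits. */
int32_t read_int16_as_32(const char *buffer)
{
    uint16_t raw;
    memcpy(&raw, buffer, sizeof raw);  /* avoids a misaligned read */
    /* Cast through int16_t so negative values sign-extend. */
    return (int32_t) (int16_t) ntohs(raw);
}

Is that the idiomatic approach, or is it better to shift the
individual bytes together manually?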
Duy Lam