I was perusing example source that was used for serializing and
transmitting data across Ethernet. Serialization of the data
solved the endian issues (i.e., no need to deal with all the endian
conversion mess) between the two platforms.
If you have a network connection that sends a stream of bytes from one
place to another, you need to encode whatever you want to send as a stream
of bytes. You then write code on the sending side to convert the
information, however it is stored internally, into that particular stream of
bytes. Then you write code on the receiving side to convert that stream of
bytes into data, however it is stored internally.
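As a minimal sketch of the idea (not from the original post), here is how the two conversion steps might look in Python using the standard struct module. The record layout (an int followed by a float) and the function names are illustrative assumptions:

```python
import struct

# Sender side: encode a record (an int count and a float price) into a
# fixed, well-defined byte layout. The "!" prefix means network byte
# order (big-endian), so both machines agree on the layout regardless
# of their native endianness.
def encode(count, price):
    return struct.pack("!if", count, price)

# Receiver side: decode the same agreed byte layout back into values
# in whatever internal form this machine uses.
def decode(data):
    count, price = struct.unpack("!if", data)
    return count, price

payload = encode(2, 0.5)
print(len(payload))     # 8 bytes: 4 for the int, 4 for the float
print(decode(payload))  # (2, 0.5)
```

Because both sides agree on the "!if" layout, neither one needs to know the other's native endianness; the wire format itself is the contract.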
What I'm not understanding
is how the serialized object differs from the un-serialized object
at the machine level.
Suppose you are thinking about two apples. Somehow, that is stored
inside your head in a native format that makes sense to you. In that form,
it may not make any sense to anyone else. To communicate it to me, you
serialize it: you convert it into a sequence of sounds that can be
transmitted to another person. The listener deserializes it on receipt,
reconstructing the notion of "two apples" inside their own head.
You start with the available form of communication. In the case of the
two people in my example, it's mouth, air, ears. What can that channel
communicate? Only a sequence of sounds. So we need rules to encode concepts
like "two apples" into sequences of sounds and vice versa.
Same thing in your case. You have a channel, and it can transmit
sequences of bytes. So you write code to convert whatever you want to
communicate into a precisely defined sequence of bytes, and on the other
side, to convert that sequence of bytes back into data.
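To see how the serialized form can differ from the in-memory form at the machine level, a small illustration (my own, using Python's struct module) compares the machine's native layout of an integer with the agreed wire layout:

```python
import struct

value = 1

# "=i" packs the int using this machine's native byte order,
# roughly how the value sits in memory.
native = struct.pack("=i", value)

# "!i" packs the same int in network byte order (big-endian),
# the precisely defined wire format both sides agree on.
network = struct.pack("!i", value)

print(native.hex())
print(network.hex())
# On a little-endian machine the two differ: 01000000 vs 00000001.
# The serialized bytes follow the agreed format, not the machine's layout.
```

Both are just four bytes; the difference is that the serialized form follows a convention defined by the protocol, while the in-memory form follows whatever the hardware happens to do.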
DS