James Vanns
OK. Platforms are the same. Hosts are the same (localhost for
testing).
Server is written in C++, client in Java.
I have come to the end of my tether trying to 'decode' the header of
this bespoke protocol we've (my team) written. We have a client
written in C++ too, and there everything works hunky-dory (because you
can use pointers and casts and so on!).
Anyway, the server sends a structure (struct) as a series of bytes:
typedef struct __header_t__ {
    static const char name[5];
    static const char split;
    static const uint version;
    static const char delimiter;
    uint command;
    static const char terminator;
} __attribute__ ((packed)) header_t;
That's the definition, for what it's worth. So that's 8 bytes for
the chars and 8 bytes for the ints (each int being 4 bytes on this
platform), 16 bytes in total.
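Spelling the offsets out as I understand them (this assumes the static
const members really are written to the wire, and that each uint is 4
bytes as stated):

```java
public class HeaderLayout {
    // Field offsets in the packed 16-byte header, as I read the C++ struct.
    // (The constant names here are mine, taken from the struct fields.)
    static final int NAME_OFFSET       = 0;  // char[5]
    static final int SPLIT_OFFSET      = 5;  // char
    static final int VERSION_OFFSET    = 6;  // uint, 4 bytes
    static final int DELIMITER_OFFSET  = 10; // char
    static final int COMMAND_OFFSET    = 11; // uint, 4 bytes
    static final int TERMINATOR_OFFSET = 15; // char
    static final int HEADER_SIZE       = 16; // 8 bytes of chars + 8 of ints

    public static void main(String[] args) {
        System.out.println("command starts at byte " + COMMAND_OFFSET
                + ", total header size " + HEADER_SIZE);
    }
}
```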
Now the Java client receives the correct number of bytes, but I'll be
buggered if it can interpret them in the same/correct way! I know that
in Java a char is 16 bits because it uses Unicode rather than ASCII, so
I have been using bytes (still 8 bits, right?) to try and print the
info out (as a debug statement).
The first 4 bytes (chars in the C/C++ case) form a printable string,
"HELO" for example. But when I try to print the same on the Java
client, it's rubbish.
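For reference, here's a minimal sketch of what I think the decode
should look like on the Java side, using a ByteBuffer. The example
values and delimiter characters are made up, and I'm assuming the
server writes in little-endian (x86) order; since Java's ByteBuffer
defaults to big-endian, that alone would scramble the ints, though it
wouldn't explain a garbled string:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class HeaderDecode {
    public static void main(String[] args) {
        // Fake 16 bytes "off the socket": "HELO\0" '|' version=1 ',' command=42 '\n'
        // (the delimiter characters and values are invented for this example)
        ByteBuffer wire = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        wire.put("HELO\0".getBytes(StandardCharsets.US_ASCII));
        wire.put((byte) '|');
        wire.putInt(1);      // version
        wire.put((byte) ',');
        wire.putInt(42);     // command
        wire.put((byte) '\n');

        // Decode: wrap the raw bytes and pull fields out in struct order,
        // with the byte order set explicitly to match the sender
        ByteBuffer in = ByteBuffer.wrap(wire.array()).order(ByteOrder.LITTLE_ENDIAN);
        byte[] name = new byte[5];
        in.get(name);                 // char name[5]
        byte split = in.get();        // char split
        int version = in.getInt();    // uint version
        byte delimiter = in.get();    // char delimiter
        int command = in.getInt();    // uint command
        byte terminator = in.get();   // char terminator

        // Drop the trailing NUL before building the String
        String nameStr = new String(name, 0, 4, StandardCharsets.US_ASCII);
        System.out.println(nameStr + " version=" + version + " command=" + command);
    }
}
```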
Any help please! In case you haven't noticed - I'm not a Java
programmer.
Regards
Jim Vanns