Interplatform (interprocess, interlanguage) communication

Arne Vajhøj

well, it is possible.

often it ends up with a cycle where something is implemented once (or
maybe a few times), and very often if something similar is needed later,
code is reused via "copy/paste/edit" magic.

in my case though, admittedly I am not actually employed as a
programmer, but am more of a college student + independent game
developer (mostly working on a 3D FPS style game). like, one has to
"face the impossible" and so on (and, with luck, get something on the
market, make enough money to live on, and keep making newer and better
stuff, ...).

That is all fine.

But many of your conclusions do not fit with a more traditional
developer job.

Arne
 
Martin Gregorie

C is not standard on Windows either.

You need to install some things first.
Yes, so if you're intending to write code that ports easily between
Windows and *nix/POSIX (and in my case, OS-9), you end up writing a
compatibility library for each target OS. This is mainly a collection of
functions that are standard in one of the other target OSen and absent on
its target system. A good example is the command line parser getopt(),
which is absent from OS-9 and (IIRC) Windows libraries.
 
Arne Vajhøj

Yes, so if you're intending to write code that ports easily between
Windows and *nix/POSIX (and in my case, OS-9), you end up writing a
compatibility library for each target OS. This is mainly a collection of
functions that are standard in one of the other target OSen and absent on
its target system. A good example is the command line parser getopt(),
which is absent from OS-9 and (IIRC) Windows libraries.

Doesn't getopt exist in some GNU lib that you can get for all
platforms?

Arne
 
BGB

Yes, so if you're intending to write code that ports easily between
Windows and *nix/POSIX (and in my case, OS-9), you end up writing a
compatibility library for each target OS. This is mainly a collection of
functions that are standard in one of the other target OSen and absent on
its target system. A good example is the command line parser getopt(),
which is absent from OS-9 and (IIRC) Windows libraries.

yes, this is what is often done.

one ends up with essentially a pile of code intended to wrap various
APIs for various OS's, such that internally the app can use a more
consistent API.

there is also SDL, which on one hand wraps a lot of this stuff, but OTOH
is a 3rd party library which carries the usual issues. a way to make
this work though is to treat SDL as if it were a pseudo-OS (its wrappers
are wrapped in much the same way).
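The wrapper-layer idea can be sketched roughly like this (a minimal example of my own, not code from any project mentioned here: the app always calls one internal name, and the per-OS implementation is selected at compile time):

```c
/* Minimal sketch of a compatibility layer: the app calls app_sleep_ms()
 * everywhere, and the per-OS implementation is chosen at compile time.
 * (Illustrative only; a real layer would also cover files, sockets,
 * threads, timers, and so on.) */
#define _POSIX_C_SOURCE 199309L   /* for nanosleep() on POSIX systems */
#include <assert.h>

#ifdef _WIN32
#include <windows.h>
void app_sleep_ms(unsigned ms) { Sleep(ms); }
#else
#include <time.h>
void app_sleep_ms(unsigned ms)
{
    struct timespec ts;
    ts.tv_sec  = ms / 1000;
    ts.tv_nsec = (long)(ms % 1000) * 1000000L;
    nanosleep(&ts, NULL);
}
#endif
```

treating SDL as a "pseudo-OS" then just means adding one more branch to this kind of layer, with SDL's functions behind the same internal names.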

lots of other apps I have looked at seem to contain similar wrapper layers.


some people go further, and try to write wrappers to hide the
differences between Direct3D and OpenGL, but I don't personally go that
far (I just use OpenGL and regard it as "good enough").

however, due to secondary reasons (mostly making things more consistent,
like having a more consistent API for dealing with things like
shader/material effects, lighting, ...), a lot of OpenGL has ended up
being wrapped (in an admittedly often ad-hoc manner).

I have noted that Doom3 tends to wrap OpenGL far more significantly
(though, it would probably be going off on a bit of a tangent to
describe Doom3's renderer here).
 
BGB

The fact that there are exceptions to most rules should not lead to
a perception that rules do not matter.

You should strive to go by the rules and only very reluctantly go
for the exception if it is really needed.

possible.

others may go for an "all is allowed in programming, so long as it works
ok and gets the job done" mindset. whether or not rules are followed may
in turn depend on an evaluation of whether or not the rules work in
one's favor.

so, on one hand: well, I can follow this rule, and get certain desirable
effects.

or, it may also work out as: this rule is stupid and inconvenient, I am
not going to bother following it.

or maybe: the existing rule is stupid/inconvenient/..., so I am going to
make up my own rules and follow them instead.


this does not necessarily mean making a standard of non-standard, as
some piece of standardized technology (formally, or de-facto, it really
doesn't matter) may itself carry desirable benefits.

as was noted, PNGs and JPEGs are an example of this:
they allow compatibility with existing applications which use these
formats, etc, ...

so, although one could devise their own graphics format (I have done so
before), it may turn out to be so incredibly inconvenient for everyone
involved that using it is ultimately not worth the bother.


likewise, in the everyday world, breaking laws may lead in turn to the
police breaking down one's door, and breaking moral and ethical rules
may lead to various other consequences (do bad things and bad things may
follow in turn).

so, all this doesn't give a person license to do "whatever they want,
whenever they want", because the rules of cost/benefit will prevent this
(too many costs in these cases, defeating the benefits).

likewise, making a standard of non-standard, though not inherently bad,
would likely end up being overly costly (in terms of use or maintenance
or whatever else).


but, I am not going to try to list all of the costs and benefits one
might encounter or how one may weigh them, as there are too many and
how much each may apply in a given situation is itself prone to vary.
 
Joshua Cranmer

safely, one can use, on Windows:
the ANSI C runtime (more or less C89/C90);
any Win32 API provided stuff (Winsock, GDI, OpenGL, ...);
...

I call BS on this, having worked on a major open-source project that
works on the major platforms of Windows, Mac OS X, and Linux (and also
Android, and I think Solaris and *BSD are still reasonably
well-supported, although the Haiku and OS/2 ports are now thoroughly
dead). What libraries does this? A small list:
* libpng
* libjpeg
* libogg + related
* cairo
* thebes
* sqlite
* freetype
* libbz2
* libjar
* zlib

And all of these are still used on Windows; there are even more that are
used only on Linux or Mac OS X.

Which application is this? Mozilla.

It's not that hard to use other libraries on Windows.
 
BGB

But please note that you do not invent your own JSON parser
either - you use something already done. In Java there are json.org,
gson etc..

in Java, yes, one may use the libraries.

on the C end, one may choose to throw one together, or use a JavaScript
VM if one is available, ...

it all depends.

I cannot follow your way of thinking. With multiple interactions
in parallel there is no strict correlation between latency and
throughput.

there is a rough correlation though.


for the part about TCP, this was related to how TCP worked (in its
traditional form), namely the existence of a 64kB maximum window size.

apparently, this is out of date, as there is a feature known as TCP
window scaling (RFC 1323), which is enabled by default on Windows Vista
and newer, and which allows a larger TCP window.

http://tools.ietf.org/html/rfc1323
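as a rough worked example (my numbers, not from the thread): a TCP sender can have at most one window of unacknowledged data in flight per round trip, so the classic 64 kB window caps throughput at roughly window/RTT:

```c
#include <assert.h>
#include <math.h>

/* Rough ceiling the TCP window places on throughput: at most one
 * window of unacknowledged data per round trip, so rate <= window / rtt.
 * (A deliberate simplification; real TCP behavior involves much more.) */
double tcp_window_limit_Bps(double window_bytes, double rtt_seconds)
{
    return window_bytes / rtt_seconds;
}
```

with a 64 kB window and a 100 ms round trip, that works out to about 640 kB/s no matter how fast the link is, which is why window scaling matters on fast, high-latency connections.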


for the part about moderating kB/s, this has a lot more to do with a
users' internet connection.

say, hypothetically, a user has dial-up.

now, what if the data being sent does not fit over dial-up (one is
trying to send 10kB/s, but a 56k modem can only handle ~6.5kB/s or so)?
well, then, the connection will backlog (the connection will send at the
rate it can send, and anything else will have to wait).

similar limits may exist over the internet, but in a less direct form:
consider, the internet is prone to occasionally drop a packet here or there.

so, a stream is going over the internet, and a (single) packet drops; what
happens:
well, all the data up to the dropped packet reaches the other end, the
other end may send a packet back indicating the point received;
the sender will start resending data from that point;
the receiver will start receiving again.

this results in essentially a ping-time delay in which no data can be sent.

if the sender is sending messages at a fixed rate, what happens?
well then, the messages will pile up, waiting to be sent;
after transmission resumes, several updates worth of data need to be sent;
if all of the updates fit within the bandwidth of the connection
(end-to-end), then there may be no obvious stall (updates can all be
sent at full speed);
if enough data backlogs so as to exceed the bandwidth available,
then it has to wait to be sent, and if the sender just keeps naively
sending updates, then essentially one gets a stall (and the data being
received by the receiver will start becoming progressively more
out-of-date).
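the backlog behavior described above can be put into a simple (and simplistic) formula of my own: if the sender produces data faster than the link can carry it, the unsent queue grows linearly with time.

```c
#include <assert.h>

/* Simplistic model of the backlog described above: if data is produced
 * faster than the link can carry it, the unsent queue grows linearly. */
double backlog_bytes(double produce_Bps, double capacity_Bps, double seconds)
{
    double b = (produce_Bps - capacity_Bps) * seconds;
    return (b > 0.0) ? b : 0.0;   /* link keeps up when capacity >= produce */
}
```

e.g., pushing 10 kB/s over a ~6.5 kB/s modem link backlogs 3.5 kB every second, so what the receiver sees falls further and further behind real time.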


these properties can be observed with things like internet radio and
video streaming (if the connection is fast enough, playback happens in
real-time without obvious stalls or re-buffering, even though the rate
at which the data comes over the internet is often very irregular).

similar also applies to internet telephony as well.


if one tries to operate within a fixed-bandwidth window, similar to
internet radio, most minor stalls can be glossed over (this limit being
a bit lower than the end-to-end transfer rate of the connection). going
lower is better, since the lower one goes, the more room for error
there is.

the main issue is, namely, that the data being sent has to be able to
fit within these bandwidth limits (hence, why data compression is highly
desirable in this case).


an online game basically amounts to a bidirectional stream between the
client and server, with the server sending out a stream of updates
(typically, everything going on in the immediate view of the client),
and the client sending a stream of their attempted actions (in response to
what they see on screen).

if everything is working well, then the delays and irregularities of
their internet connection are mostly hidden, and to them it all seems
like they are interacting with the world in real-time (usually there is
a lot of trickery here as well, mostly based around linear extrapolation
and so on).

side note: each end may transmit time-stamps as part of their updates,
and the other end may transmit the last-received timestamps, partly so
that the timing delays can be estimated and partially compensated for.
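the timestamp trick in the side note might look something like this (a sketch of my own, assuming both clocks tick at the same rate and the path is roughly symmetric; the names are made up):

```c
#include <assert.h>
#include <math.h>

/* Sketch of the timestamp-echo idea: each update carries a send time,
 * and the peer echoes back the last timestamp it received.  The gap
 * between "now" and the echoed value approximates the round trip;
 * half of it is a crude one-way delay estimate (assumes a symmetric
 * path and clocks that tick at the same rate). */
double estimate_one_way_delay(double now, double echoed_send_time)
{
    return (now - echoed_send_time) / 2.0;
}
```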


another (similar) concept among players is that of
"leading", where a person will take aim at a moving enemy, estimate the
speed of the projectile and where the enemy will be at the time, and aim
and fire at that location instead (then the enemy will essentially "run
into" the traveling projectile). note that if a player always aims at
where the enemy is "right now", very often they will miss (as by the
time the projectile reaches the destination, the enemy has already moved
out of the way).
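in one dimension, the "leading" arithmetic is just a first-order extrapolation (a sketch of my own; a real game would iterate or solve the intercept equation properly, since the target keeps moving while the shot is in flight):

```c
#include <assert.h>
#include <math.h>

/* First-order "leading" in 1-D: estimate the projectile's travel time
 * from the current distance, then aim where the target will be by then.
 * (Approximate: the distance changes during flight, so this is only a
 * first guess, not an exact intercept solution.) */
double lead_aim_point(double target_pos, double target_vel,
                      double shooter_pos, double projectile_speed)
{
    double travel_time = fabs(target_pos - shooter_pos) / projectile_speed;
    return target_pos + target_vel * travel_time;
}
```

e.g., a target at 100 m moving at 5 m/s, shot at with a 50 m/s projectile, should be led to about the 110 m mark.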

so, the game does similar in an attempt to hide the "travel time" that
is the internet.


or such...
 
BGB


fair enough.

I might look into it, although personally I don't use JSON for this at
the moment, and if it were needed in my case, my script VM can already
parse JSON as-is (given the language used is a superset of JavaScript
anyways), although this is potentially a less efficient strategy than a
dedicated parser.

say, in my case, it would be a tradeoff between either: using someone
else's library, passing the JSON through "eval", or spit out some logic
to make the VM parse the JSON directly (probably just copy/paste/edit
some of the existing parser code). then one could wonder secondarily:
what form would the JSON be parsed into? "there is a library for that"
is not always necessarily the least-effort option.

either way, one still might want to save more bytes, say, by running it
through deflate or similar. if one wants libraries, Java has it built
in, and in C-land there is zlib.

in my case, I also have a deflate codec which is stored as a single big
source file, mostly as this makes it a little more convenient (in
several ways) than using zlib. JPEG was similar, as originally I used
libjpeg, but reimplemented JPEG as a "single big source file" to be a
generally more convenient option (copy/paste the source file and go).

I don't claim a person might "always" want to do this, but as I see it,
it is still a potentially valid option. one could maybe go further, and
put Deflate, JPEG, PNG, and several other formats, all in a single big
file, at the drawback of the file becoming overly large.

similarly, the above probably would be fairly pointless in Java, both
because this stuff exists in the standard library, and also because it
would result in a single giant class.


the cheapest option for one person may well turn out to be a more
expensive option for another person, say if one of the options
amounts to "implement the functionality from the ground up" rather than
"copy-paste a few bits from over there and hack something together", or
even maybe just "add a few lines in a function over here and add a new
function or method over there which redirects the call to the first
function".

all of this stuff can be fairly relative, and there are rarely
cut-and-dried answers to problems.



as for the delay issue, found this article on part of the topic:
http://en.wikipedia.org/wiki/Lag_(online_gaming)

also maybe relevant:
http://en.wikipedia.org/wiki/Internet_streaming
 
BGB

I call BS on this, having worked on a major open-source project that
works on the major platforms of Windows, Mac OS X, and Linux (and also
Android, and I think Solaris and *BSD are still reasonably
well-supported, although the Haiku and OS/2 ports are now thoroughly
dead). What libraries does this? A small list:
* libpng
* libjpeg
* libogg + related
* cairo
* thebes
* sqlite
* freetype
* libbz2
* libjar
* zlib

And all of these are still used on Windows; there are even more that are
used only on Linux or Mac OS X.

Which application is this? Mozilla.

It's not that hard to use other libraries on Windows.

yes, but it is also worth noting that Mozilla does the whole "Mozilla
build" thingy on Windows, and essentially they are bundling many of
the needed libraries and tools with the application as a part of the
build system.


as noted:
this is not to say that one *can't* use 3rd party libraries, but one may
need to make special provisions for them.

Mozilla does this.

not everyone may want to do this, as it is a reasonably heavy-weight
solution to the problem (but still better than "hey random person, go
download and build all of these libraries yourself", which is what some
applications have gone and done).
 
Martin Gregorie

Doesn't getopt exist in some GNU lib that you can get for all platforms?
Pass. I know it was missing from Borland C on DOS and Windows and wasn't
published for OS-9. I ended up extending a PD version that was originally
published in the '68 MicroJournal for OS-9 and porting it to the other
two platforms.

I had a good (to me) reason for doing that. At the time I was more
familiar with OS-9 than Windows or Unix (Linux didn't exist at the time)
and I had got used to the OS-9 command line parser's ability to handle a
mix of options and arguments in any order rather than the straitjacket of
Unix's rigid options before arguments rule. My extension basically just
added the -x=value notation (also used by OS-9) to the standard Unix
-xvalue and -x value notations. I've since rewritten it for Java, adding
long option names (--xxxx and --xxxx=val) - something I've not gotten
round to adding to the C version.
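A cut-down sketch of an option scanner in that spirit (my code, not Martin's: it splits a single token into option name and inline value, covering the -x, -x=value, and --name=value forms; the caller decides whether a valueless option should consume the next token as its argument):

```c
#include <assert.h>
#include <string.h>

/* Cut-down sketch (not Martin's code): split one command-line token
 * into option name and inline value.  Handles "-x", "-x=val",
 * "--name", "--name=val"; returns 0 for a plain (non-option) argument.
 * A valueless option gets value "" and the caller decides whether it
 * consumes the following token ("-x value" style). */
int split_option(const char *arg, char *name, size_t nsz,
                 char *value, size_t vsz)
{
    if (arg[0] != '-' || arg[1] == '\0')
        return 0;                              /* plain argument (or "-") */
    const char *p  = (arg[1] == '-') ? arg + 2 : arg + 1;
    const char *eq = strchr(p, '=');
    size_t nlen = eq ? (size_t)(eq - p) : strlen(p);
    if (nlen >= nsz)
        nlen = nsz - 1;                        /* truncate to fit buffer */
    memcpy(name, p, nlen);
    name[nlen] = '\0';
    if (eq) {
        strncpy(value, eq + 1, vsz - 1);
        value[vsz - 1] = '\0';
    } else {
        value[0] = '\0';
    }
    return 1;
}
```

because the scanner classifies each token independently, options and plain arguments can appear in any order, which is the OS-9-style behavior described above.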
 
