What's beyond Rails?


Jonas Hartmann

James said:
Somewhat off-topic rant: This isn't so much a dig at Rails but a
critique of HTML in general. I've done web development with PHP,
ColdFusion, and ASP, and being able to use Ruby in doing so (especially
with Rails' well-designed database interactivity) is certainly a
welcome change. However, the general model is still the same, in terms
of using code to write out HTML to an essentially-static page.
First of all, the web is a big hyperlinked document.
The HTML interface is still such a far cry from the things you can do with a
rich client.
The internet is a network of information. You can pretty well
distribute information with hyperlinked text.

You can criticize how HTML is designed, but not in this way.
Rich interfaces require users to put a lot of work into them, to learn
how to use them. Simple, and therefore mostly easy-to-use, interfaces don't.
Text is black, links are blue; that is simple, and if there is a good
hypertext "author" behind it, it works very well. (See Wikipedia or
DMoz.)
For ordering airline tickets on Travelocity or books on
Amazon, the web works great
This is already an extension of the basic "task" of the web.
FORMs let the ordinary user feed some information back onto the network.
but imagine trying to emulate Adobe
Photoshop via a web browser, or a spreadsheet like Excel.
First of all, why should I?
It seems to me that there needs to be a next-generation of HTML that
enables web apps to truly be like rich client apps,
There is SVG, but I don't see the need.
and
I don't think the solution needs to be a faster connection that sucks
down the entire application in the form of massive Java applets every
time I want to use the program.
Massive Flash applets - god, how I like the flash-click-to-play Firefox
extensions. Gone, all the trash that burns my eyes.
Perhaps the solution does need to be a
"computer" that's designed from the ground up as a web-enabled dumb
terminal, but that has forms and controls optimized so that they
require minimal data inflow to tell them what to do.
Now what do you want? Rich user interfaces? Or simple, easy ones? I
don't get you here.
To me, this would make the web incredibly more useful (and would put
serious potential into the claim that Google wants to become a web
operating system).
Web what? DMoz / Wikipedia / Leo ... > Google.
And IF there were one, then Apache ( :) + Internet Explorer would be the "OS of the Web" :-(
If I've purchased Adobe Photoshop (or rented it, as
I'm sure will be the more likely model),
OMG even more rental things :<
instead of loading it on every
computer I use, why can't I get to Photoshop at any computer in the
world merely by logging into my personal website and getting access to
every software program I own or am renting?
Why would I want to rent software? If I am forced to use closed
software, I am already dependent enough on the manufacturer.
Why would it need to be
reloaded at every computer? This is particularly annoying when you're
visiting a friend in another city for a weekend, and jump onto his
computer to check email or show him how to do something useful, and
think, "I wish I had App X loaded on here right now."
Get a notebook. (I can recommend a 12" iBook G4 =)
I was disappointed to see Google Suggest being touted as innovative; it
seems to indicate that Google's going to stay within the existing web
realm and not try anything really new (as I read on the web somewhere,
"for a web app, Google Suggest is neat; for a desktop app, it's so
1995"). For all of Google's deep pockets and reputation as innovative,
I expected to see them partner up with a hardware manufacturer and try
something dramatically different.
I can't figure out what you want to say.

About HTML: basically, there are features missing with regard to forms.
You can't have "float values" in forms (no sliders, knobs or anything),
you can't do sliders with steppings (fixed value steps), and some of
these things force you to create workarounds.
Worst of all, there is no interactivity without javascript -
what a pain.
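
(For illustration, a sketch in plain Ruby of the kind of workaround meant
here: since HTML forms have no stepped-slider control, you can fake one
with a <select> of precomputed float steps. The field name and step
values below are made up.)

# Hypothetical workaround for the missing "stepped slider" control:
# emit a <select> whose options are the allowed float steps.
def stepped_select(name, from, to, step)
  count   = ((to - from) / step).round
  options = (0..count).map do |i|
    v = (from + i * step).round(3)
    %(<option value="#{v}">#{v}</option>)
  end
  %(<select name="#{name}">\n#{options.join("\n")}\n</select>)
end

puts stepped_select('opacity', 0.0, 1.0, 0.1)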

XUL is a nice approach, but at the moment it works only on the GRE
(Gecko Runtime Engine), as far as I know. And yes, there are SVG and SMIL -
go write a browser (XUL + Ruby renderer ;p), the techniques are mostly there.

Anyway, I still don't see such a big need for these things.
There is still the possibility of using client<->server architectures,
with OS-native applications written in Ruby or other languages
communicating with a server that is on the internet (but not the WWW).

What we need more of is software that lets people everywhere, even
those not too familiar with the web, attach the content they have to the
web WITH metadata, and LINKED into logical structures.

And in this regard there are some limits even IF browsers supported the
current standards, because those standards do not offer as many layout
features as professional newspaper-design applications do. HTML just
does not offer enough structural elements (only six levels of headers?)
and CSS does not offer enough layout power. (Try to have text with an
image floating centered, vertically and horizontally... I haven't
managed that yet =)

What has all this to do with Rails or Ruby?
 

jason_watkins

James said:
Somewhat off-topic rant: This isn't so much a dig at Rails but a
critique of HTML in general. I've done web development with PHP,
ColdFusion, and ASP, and being able to use Ruby in doing so (especially
with Rails' well-designed database interactivity) is certainly a
welcome change. However, the general model is still the same, in terms
of using code to write out HTML to an essentially-static page. The HTML
interface is still such a far cry from the things you can do with a
rich client. For ordering airline tickets on Travelocity or books on
Amazon, the web works great, but imagine trying to emulate Adobe
Photoshop via a web browser, or a spreadsheet like Excel.
It seems to me that there needs to be a next-generation of HTML that
enables web apps to truly be like rich client apps, and
I don't think the solution needs to be a faster connection that sucks
down the entire application in the form of massive Java applets every
time I want to use the program. Perhaps the solution does need to be a
"computer" that's designed from the ground up as a web-enabled dumb
terminal, but that has forms and controls optimized so that they
require minimal data inflow to tell them what to do.
To me, this would make the web incredibly more useful (and would put
serious potential into the claim that Google wants to become a web
operating system). If I've purchased Adobe Photoshop (or rented it, as
I'm sure will be the more likely model), instead of loading it on every
computer I use, why can't I get to Photoshop at any computer in the
world merely by logging into my personal website and getting access to
every software program I own or am renting? Why would it need to be
reloaded at every computer? This is particularly annoying when you're
visiting a friend in another city for a weekend, and jump onto his
computer to check email or show him how to do something useful, and
think, "I wish I had App X loaded on here right now."
I was disappointed to see Google Suggest being touted as innovative; it
seems to indicate that Google's going to stay within the existing web
realm and not try anything really new (as I read on the web somewhere,
"for a web app, Google Suggest is neat; for a desktop app, it's so
1995"). For all of Google's deep pockets and reputation as innovative,
I expected to see them partner up with a hardware manufacturer and try
something dramatically different.
 

Michael Campbell

What's beyond rails?

~>ruby -e 'puts "rails".succ'
railt

Not very sexy, I'll grant you that.
 

jason_watkins

Well, I guess I have a dissenting opinion:

1.) You could do photoshop via http.

It may interest you to skim http://opensource.adobe.com/ and realize
that for a couple of versions now, photoshop's UI logic has been written
with declarative sublanguages of their own design. It's fairly
straightforward to imagine treating the UI component as a declarative
document in xml that specifies logic for preparing computational
transactions that are relayed via http. It's quite accurate to think
of applications as a giant spreadsheet, where when a cell is notified
it's invalid, it dispatches the requests necessary to re-evaluate its
contents via http post or xmlhttprequest.
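
(A rough sketch of that spreadsheet analogy in Ruby, for what it's
worth: a cell that is marked invalid re-evaluates itself by POSTing its
inputs to the server. The endpoint and parameter names are invented.)

# Toy model of the "application as spreadsheet" idea: the heavy work
# happens on the server, the client only relays inputs when a cell is
# invalidated.
require 'net/http'
require 'uri'

class Cell
  def initialize(endpoint, dependencies = [])
    @endpoint     = URI(endpoint)
    @dependencies = dependencies
    @valid        = false
  end

  def invalidate!
    @valid = false
  end

  def value
    recompute unless @valid
    @value
  end

  private

  def recompute
    inputs   = @dependencies.map(&:value)
    response = Net::HTTP.post_form(@endpoint, 'inputs' => inputs.join(','))
    @value   = response.body
    @valid   = true
  end
end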

The only real barrier to photoshop via http is not http, it's the lack
of a toolkit for the client-side portions. Once upon a time I would
have said there's no high-performance client-side language, but these
days I imagine javascript and most browsers' html rendering would
actually be fast enough to handle the various graphics rendering tasks
a photoshop-like application does.

2.) REST is good.

Statelessness is good. It's not just an annoyance. It's key to why the
web's architecture works so well. Ask yourself why a rich document
format and snazzy browsers didn't get built on top of protocols that
existed at the time, like nntp?

3.) Thick clients and local applications aren't going to go away.

But you will see a transition in that direction, particularly when part
of an application's target usage is communication. S5Presents and
similar tools are a great example.

4.) So what's next?

I think it's going to be html + javascript. As ugly as that sounds, the
standards process of the web has always been to try to standardize some
sanity after the fact. Proactive standards-making like SVG has yet to
really work. The key is what the browser makers do. And at the moment,
html + javascript is what they do, and I don't foresee that changing
significantly for at least 5 years. Flash is the only thing I really
think has a chance of competing. XUL and some of the other things out
there are quite cool on a technical level, and are really better
solutions. But I don't think they can pierce the barrier to entry. Not
unless someone very big forces the issue (read as: Microsoft, US .gov,
etc).

It's a first past the post situation. Html+Javascript made it past the
post. It's a shame that the horse is lame, blind, insane and infested
with fleas, but it's what managed to win.

So for ruby specifically, what I'd like to see personally is some ruby
tools for abstracting over the guts of html + javascript rich clients.
That's certainly no small task. But it's definitely what we can see
already happening all over the place. Look at Rails, it gives you
simple ajax without needing to know javascript. Look at ruby web
dialogs. Look at hobix.
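
(For instance, something roughly along these lines in an ERB view -- the
action name and DOM id are made up -- and Rails' Prototype helpers
generate the Ajax call for you:)

<%= link_to_remote "Refresh list",
      :update => "posts",
      :url    => { :action => "refresh" } %>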
 

Ilmari Heikkinen

Well, I guess I have a dissenting opinion:

1.) You could do photoshop via http.

It may interest you to skim http://opensource.adobe.com/ and realize
that for a couple of versions now, photoshop's UI logic has been written
with declarative sublanguages of their own design. It's fairly
straightforward to imagine treating the UI component as a declarative
document in xml that specifies logic for preparing computational
transactions that are relayed via http. It's quite accurate to think
of applications as a giant spreadsheet, where when a cell is notified
it's invalid, it dispatches the requests necessary to re-evaluate its
contents via http post or xmlhttprequest.

The only real barrier to photoshop via http is not http, it's the lack
of a toolkit for the client-side portions. Once upon a time I would
have said there's no high-performance client-side language, but these
days I imagine javascript and most browsers' html rendering would
actually be fast enough to handle the various graphics rendering tasks
a photoshop-like application does.

The problem is that there are no high-performance libraries for
javascript to munge image data. To do arbitrary edits to an image in
javascript, you need to use a single div (or some such element) per
pixel and do all the calculation in js. Even for a small 640x480
image with a single layer, that's 307,200 pixels. And that's
going to be slow going in dhtml. The browsers will probably blow up at
around a hundred thousand elements.

When you can run Quake 1 with software rendering in a 320x240 window
with html + javascript, it'll be about fast enough for Photoshop.
(Maybe there is a port of Q1 already?)

Maybe if there were a fast VM for javascript and direct rendering
access in the DOM...

*shrug* who knows what the future brings :)
 

jason_watkins

You're missing the architecture. Html+javascript is for just the UI layer.
The pixel munging is done on the server in whatever language you
please. Display of results is done by regenerating the results on the
server, which the client then refreshes.

The html+javascript is just used as a sort of very bare-bones 2d scene
graph, like a stripped-down gdi or xwindows.

In other words, very similar to google maps.

Like I said, you can think of it abstractly as a spreadsheet where the
client has all the logic for cell interdependencies, but the heavy
calculation for cell re-evaluation is done by posting and getting to
the server.
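
(To make that concrete, a minimal sketch of the server half, assuming
WEBrick from Ruby's standard library and the RMagick bindings; the
"/blur" route, file layout and parameter names are invented for
illustration, not anyone's actual API.)

# Hypothetical server side: apply the heavy image operation here and
# send back only a screen-sized preview for the client to refresh.
require 'webrick'
require 'RMagick'

server = WEBrick::HTTPServer.new(:Port => 4000)

server.mount_proc '/blur' do |req, res|
  image   = Magick::Image.read("uploads/#{req.query['file']}").first
  blurred = image.gaussian_blur(0, req.query['radius'].to_f)
  # Only a preview goes back down the pipe, not the full-resolution file.
  preview = blurred.resize_to_fit(1000, 1000)
  res['Content-Type'] = 'image/jpeg'
  res.body = preview.to_blob { self.format = 'JPEG' }
end

trap('INT') { server.shutdown }
server.start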
 

Francis Hwang

Intriguing. But leaving aside the issue of whether it would be
economically worth it to offer such software as a web service, I wonder
how good your bandwidth would have to be for this to be sensible with
high-definition images.

For example: I've got Photoshop files on my machine that easily top 100
MB in size, but let's just take 100 MB as a representative sample. Let's say
I've got that file living in this web app, and I run a one-pixel
gaussian blur on the whole thing. The web client issues the command to
the server, the server churns on this for a few seconds, and sends the
whole image down the pipe again ... My cable modem at home has a peak
downstream of 5 Mbps, so it's going to take 20 seconds for me to get
the new image -- and that's assuming the app doesn't have to share the
pipe with any other program like BitTorrent or Acquisition, or, hell,
my mail program, which checks my mail every 5 minutes.

Now, you can try to run some lossless compression on the file to try to
get it down to, say, 10 seconds, but that makes everything more complex
and, oh, sometimes you've got high-entropy images so you've gone to all
the trouble for very little gain.

You can also try to serve only the view of the image that the user
needs right that instant, but that doesn't so much kill the lag as
spread it around. The monitor I use at home, for example, is a 1280x960
resolution with 24-bit color, which means, if my math is right, that
filling my screen requires 3.5 MB worth of pixels. It takes me 0.7
seconds to download that much data if my cable modem's doing well. That
doesn't sound like so much until you realize you have to have that wait
every single time you zoom in or out, or scroll up or down, or even
change the background color from white to transparent. This is similar
to the problem faced by online FPS engines: Since you can't rely on
subsecond response times when you're playing CounterStrike over the
network, the server has to give the client more information than
strictly necessary and trust it to do what's right until the next time
contact is established.

And by the way, if you're dealing with 24-bit images on an app driven
by HTTP+JavaScript, how good is the color fidelity going to be?

I'm happy to do plenty of things on the web, but image processing ain't
one of them.

Francis Hwang
http://fhwang.net/
 

jason_watkins

The web client issues the command to the server, the server churns on
this for a few seconds, and sends the
whole image down the pipe again ...
<<<

Still the wrong model. If you were providing Photoshop as a web
application, presumably you'd be providing storage on the server
itself.

So after your gaussian blur filter runs, the app only needs to send you
back a ~1000x1000 pixel image that reflects the portion of the image on
display.

If you've worked with Satori, you can understand how non-destructive editing
with a transaction log works. Your interactions create a log of actions
to complete, and the view is computed on demand... but the entire
calculation need not be done at the full resolution for the entire
file. Logging the actions and generating a preview is sufficient. After
editing is complete and it comes time to save results, then you can
render the transactions at the output resolution.
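
(A toy Ruby sketch of that transaction-log idea, for illustration; the
image object and operation names are invented.)

# Non-destructive editing as a log of actions: edits are only recorded,
# and a render is computed on demand at whatever resolution is needed.
class EditSession
  def initialize(source)
    @source = source   # handle to the full-resolution image on the server
    @log    = []       # e.g. [[:gaussian_blur, 1.0], [:crop, 0, 0, 800, 600]]
  end

  def record(op, *args)
    @log << [op, *args]
  end

  def preview(width, height)
    # Cheap: replay the log against a downscaled copy for the current view.
    render(@source.scaled_to(width, height))
  end

  def final
    # At save time, replay the same log at the full output resolution.
    render(@source)
  end

  private

  def render(image)
    @log.reduce(image) { |img, (op, *args)| img.public_send(op, *args) }
  end
end
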
That
doesn't sound like so much until you realize you have to have that wait
every single time you zoom in or out, or scroll up or down, or even
change the background color from white to transparent.
<<<

Once again, there's no reason to assume our client is brainless. Much
like google maps maintains a tile cache, we could maintain a cache of
layered tilings. Of course it's not going to be as responsive as
communication on a local machine.

My point is merely that http itself is not the limitation.

This is similar to the problem faced by online FPS engines: Since you can't rely on
subsecond response times when you're playing CounterStrike over the
network, the server has to give the client more information than
strictly necessary and trust it to do what's right until the next time
contact is established.
<<<

While that's true for modems, it's not necessarily true once you get to
broadband. This is a complex topic with a lot of details. I happen to
have a friend who works professionally in this area. I've played
versions of quake3 that remove all prediction and run at a locked 60hz
rate both for client side graphics and network state update. Latency
isn't necessarily the same as throughput, and human beings are
surprisingly tolerant of latency. You can hit up citeseer for the
relevant basic research done by the .gov in the initial military
simulation days: the net result is that humans can tolerate as much as
150ms of local lag, and that at around 50ms of local lag human
observers begin to have trouble distinguishing between lagged and
unlagged input.

50ms round trip is possible on broadband these days, though not assured
coast to coast in the US or anything.

Anyhow, the point is not whether it's a good idea to do PS as a web
app. I think it's a miserable idea. The point is that html+javascript
rendering is sufficiently fast to serve as the basic drawing layer of a
GUI abstraction. Of course something purpose designed for it would be
better, but with the possible exception of flash, I don't think
anything is going to get enough market penetration: html+javascript are
good enough (barely).
 

Francis Hwang

Still the wrong model. If you were providing Photoshop as a web
application, presumably you'd be providing storage on the server
itself.

So after your gaussian blur filter runs, the app only needs to send you
back a ~1000x1000 pixel image that reflects the portion of the image on
display.

Okay, but that portion itself represents a significant enough time lag
to cause a serious problem for the person who relies on Photoshop to
get work done. A 1000x1000 image, at 24-bit color, is about a 2.9 MB
image. My downstream is 5 Mbps which means that under peak conditions
it takes about 0.48 seconds to download the view. Not the entire image,
just the view itself.

Also, 1 million pixels is a fairly conservative estimate for the size
of the view you have to deal with. On a full-screen image on my paltry
17" monitor that's about what I get. Apple's 30-inch cinema display
gives you 4 times that many pixels.

I guess the point I'm trying to make is that although it's easy for me
to see certain apps move to web-land, Photoshop isn't one of them. When
you're a serious Photoshop user, everything you do sloshes around a lot
of data, so the network itself becomes an obstacle, and until you solve
the last-mile bandwidth problem you can't deliver something like this
as a web app to the home user.

This is similar to the problem faced by online FPS engines: Since you can't rely on
subsecond response times when you're playing CounterStrike over the
network, the server has to give the client more information than
strictly necessary and trust it to do what's right until the next time
contact is established.
<<<

While that's true for modems, it's not necessarily true once you get to
broadband. This is a complex topic with a lot of details. I happen to
have a friend who works professionally in this area. I've played
versions of quake3 that remove all prediction and run at a locked 60hz
rate both for client side graphics and network state update. Latency
isn't necessarily the same as throughput, and human beings are
surprisingly tolerant of latency. You can hit up citeseer for the
relevant basic research done by the .gov in the initial military
simulation days: the net result is that humans can tolerate as much as
150ms of local lag, and that at around 50ms of local lag human
observers begin to have trouble distinguishing between lagged and
unlagged input.

50ms round trip is possible on broadband these days, though not assured
coast to coast in the US or anything.

Maybe you know more about this than I do, but how much data does a FPS
have to send out and receive, anyway? It's been my impression that a
FPS server sets a lot of the original model when the game sets up, and
then sends the clients a continuous stream of location and status
updates. I imagine the size of these updates is measurable in bytes or
kilobytes.

Anyhow, the point is not whether it's a good idea to do PS as a web
app. I think it's a miserable idea. The point is that html+javascript
rendering is sufficiently fast to serve as the basic drawing layer of a
GUI abstraction. Of course something purpose designed for it would be
better, but with the possible exception of flash, I don't think
anything is going to get enough market penetration: html+javascript are
good enough (barely).

Yeah, I agree with you there. Applications like Google Maps make the
case that a lot of really astounding web apps are possible in the
near term.

Francis Hwang
http://fhwang.net/
 

Bill Kelly

Francis Hwang said:
Okay, but that portion itself represents a significant enough time lag
to cause a serious problem for the person who relies on Photoshop to
get work done. A 1000x1000 image, at 24-bit color, is about a 2.9 MB
image. My downstream is 5 Mbps which means that under peak conditions
it takes about 0.48 seconds to download the view. Not the entire image,
just the view itself.

That's 5 megabits, not bytes, right? So more like 4.6 seconds
at full speed?

With the right kind of compression, you could send a lot less data.
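
(Spelled out, ignoring compression and protocol overhead:)

# Back-of-the-envelope check of that transfer time in Ruby.
bytes = 1000 * 1000 * 3      # 1000x1000 pixels at 24-bit color
bits  = bytes * 8            # => 24_000_000 bits
link  = 5_000_000.0          # 5 megabits per second downstream
puts bits / link             # => 4.8 seconds -- same ballpark as the 4.6 above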

[...]
Regarding, "and human beings are surprisingly tolerant of latency",
it's hard to express how funny this is in the context of games like
quake. On the one hand it's true the game is quite playable with
an order-of-magnitude difference in latency between players. But,
oh, the griping, whining, and complaining that ensues! :)

Here are the round-trip ping times for everyone connected to my
quake2 server at the moment (in milliseconds):

29 Raditz
43 ompa
56 Ponzicar
58 SKULL CRUSHER
59 [EF] 1NUT
66 p33p33
67 HyDr0
70 HamBurGeR
74 iJaD
93 Lucky{MOD}
140 ToxicMonkey^MZC
145 Krazy
164 nastyman
171 Bellial
242 mcdougall_2
372 Demon

In a game like quake, you can feel the difference in latency
all the way down to LAN speeds. A couple players, not playing
at the moment, live near the server and even have sub-20
millisecond pings! Which is pretty close to LAN, but
experienced players still report having to adjust their play
style between 20 msec (lives near the server) and <10 msec
(LAN play) latency.
[...] how much data does a FPS
have to send out and receive, anyway? It's been my impression that a
FPS server sets a lot of the original model when the game sets up, and
then sends the clients a continuous stream of location and status
updates. I imagine the size of these updates is measurable in bytes or
kilobytes.

Quake2 data rate for broadband players is in the range of
8 to 15 kbytes per second.


... In a desperate attempt to say something on-topic:
I use ruby to write all my quake2 admin scripts.
I haven't tried Rails yet. The other night I was able
to write a server-status CGI script from scratch in a
couple hours, including spawning multiple threads to
poll all servers concurrently. (So amazingly easy in
ruby.)
http://tastyspleen.net/quake/servers/list.cgi
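
(Not the linked script -- just a minimal sketch of the same idea. The
quake2 out-of-band query is, as far as I recall, four 0xff bytes
followed by "status" over UDP, so treat the packet format as an
assumption; the second hostname is a placeholder.)

# Poll several quake2 servers concurrently, one thread per server.
require 'socket'
require 'timeout'

def query_status(host, port = 27910)
  sock = UDPSocket.new
  sock.send("\xff\xff\xff\xffstatus", 0, host, port)
  Timeout.timeout(2) { sock.recvfrom(65_536).first }
rescue Timeout::Error
  nil
ensure
  sock.close
end

servers = %w[tastyspleen.net example.quake2.host]
threads = servers.map { |host| Thread.new { [host, query_status(host)] } }
threads.each do |t|
  host, reply = t.value
  puts "#{host}: #{reply ? 'responding' : 'no response'}"
end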

I considered trying Rails but had no need for a
database to be involved. . . . Someday I'll try Rails
- maybe to keep track of all players' high scores
across all game levels.

OK not quite on-topic... I tried <grin>


Regards,

Bill
 

Tom Copeland

... In a desperate attempt to say something on-topic:
I use ruby to write all my quake2 admin scripts.

First person shooters are _always_ on topic :)

Yours,

tom
 

jason_watkins

Maybe you know more about this than I do, but how much data does a FPS
have to send out and receive, anyway?
<<<

Well, the theoretical limit is always just input itself. Mouse skew is
often within single-byte values, but even if you bumped it up to 16 bits
in x,y you're at 4 bytes per sample. Within the sample we might have an
upper bound of 10 key up/down events, so figure something like 16 bytes
per input timeslice. The server can simply act as a relay for this
data.

So the lower limit is something like 16 * numplayers * hz. In other
words, you pay more for UDP packet headers.
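
(With some assumed numbers plugged in -- say 16 players at a 60 Hz tick
rate:)

# Rough lower bound for the pure input-relay model described above.
bytes_per_slice = 16    # mouse delta plus a handful of key up/down events
players         = 16    # assumed player count
hz              = 60    # assumed tick rate
puts bytes_per_slice * players * hz   # => 15360 bytes/sec, before UDP headers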

For reasons of cheat protection, however, few games use this model (Chris
Taylor is the only person I know who's a really big fan of it).

It's more common for games to instead simulate on the server and then
stream the state to the client. The quake engines themselves went
back and forth on the details of this, but eventually settled on simply
streaming serially id'd state deltas from the server, which the client
acknowledges. The server stores a sync point for each client, and
journals the simulation state from the point of last acknowledgement.
Each client can have its own rate, and the server sends a cumulative
update from the sync point to the current server state. The real cost
of this model is memory, which is why quake3 won't scale to hundreds of
players. There are also disadvantages to its failure mode: the size of a
future packet grows as packets are dropped. But a client can get back
in sync with a single such packet, so it proves itself fairly robust.

If you read the literature, you can see the quake3 model + the client
extrapolation code is very close to a well known optimistic discrete
event simulation algorithm (timewarp).

Interestingly, of the many models the guy I know has played with, the
one that works best is instead bucketing state slices to be the same
for all clients and echoing the previous state with each state it sends,
i.e. sending A, AB, BC, CD. Single packet drops are almost never noticed.
Losses of sync greater than a single slice are handled out of band by the
same code that handles a player jumping into the middle of a simulation
state. It's far less memory-demanding, so it scales quite well, and in
practice ends up being quite competitive with the quake3 model.
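
(That scheme in toy Ruby form, for illustration:)

# Each outgoing packet carries the previous state slice alongside the
# current one, so any single dropped packet costs the client nothing.
class SliceStream
  def initialize
    @previous = nil
  end

  def packet_for(state)
    packet    = [@previous, state].compact
    @previous = state
    packet
  end
end

stream = SliceStream.new
%w[A B C D].each { |s| p stream.packet_for(s) }
# => ["A"], ["A", "B"], ["B", "C"], ["C", "D"]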

It's been a while since I looked at the actual rates, but I do
recall that when I'd lock counterstrike to send input at 60hz and
request server frames at 60hz, it'd end up at around 2 kbytes/sec
download nominal, with bursts up to maybe 6 kbytes/sec in rare moments.
Input upload is always minuscule.
 

jason_watkins

Regarding, "and human beings are surprisingly tolerant of latency",
it's hard to express how funny this is in the context of games like
quake.
<<<

Well, don't get me started, but quake's net code has some significant
implementation and algorithmic issues, particularly quake3's prediction
handling. One of the things I took away from the darpa papers I skimmed
is that what humans really sense isn't just the absolute latency; the
variance matters as well. If you give someone a consistent local lag,
they stop noticing it quite quickly. But if you factor prediction into
the control system it starts becoming complicated quickly. Suddenly the
delay factor the brain has to anticipate is changing depending on
several different factors that are outside your awareness.

Someone used to have some java apps up where you could play with this
stuff yourself. It's quite interesting... a similar experience to doing
blind testing of audio equipment... a little uncomfortable when you
have some of your assumptions challenged.
 

Bill Kelly

jason_watkins said:
Well, don't get me started, but quake's net code has some significant
implementation and algorithmic issues, particularly quake3's prediction
handling. One of the things I took away from the darpa papers I skimmed
is that what humans really sense isn't just the absolute latency; the
variance matters as well. If you give someone a consistent local lag,
they stop noticing it quite quickly. But if you factor prediction into
the control system it starts becoming complicated quickly. Suddenly the
delay factor the brain has to anticipate is changing depending on
several different factors that are outside your awareness.

Someone used to have some java apps up where you could play with this
stuff yourself. It's quite interesting... a similar experience to doing
blind testing of audio equipment... a little uncomfortable when you
have some of your assumptions challenged.

I haven't read the darpa papers; sounds interesting. However,
I don't think that the hypothesis that humans can get used to
a consistent local lag--which I completely agree with--tells
the whole story in a hyper real-time environment like quake.

Quake2 seems to do a good job of providing a consistent local lag,
provided there's no packet loss. (Q2 doesn't handle packet
loss very smoothly.) I've played quake2 online pretty much
daily for seven years. The part of the story that isn't
told by humans being able to stop noticing a consistent
local lag is that the more latency you have--even if you're
used to it--the harder it is to beat players with low-latency
connections.

It's fun when we get to see a player who's been on dialup for
seven years finally switch to broadband. It's rare to have
such an extreme example as we had with a player a couple of
months ago. He was very skilled, and it turned out he lives
near the server. So he'd been playing quake2 on dialup for
seven years, was utterly adjusted to compensating for the
dialup lag (200+ milliseconds). He was skilled enough to
sometimes beat good players with low-latency connections.
Recently he got DSL, and has one of the lowest pings on the
server now (about 20 msec.) It took him a few weeks to
adjust. Now he just cleans up: he's one of the top few
players on the server (each of whom has exceptional skills
and low latency.)

It's rare to get such a pure demonstration of how latency
is a handicap to even the most skilled players who've had
years to adjust to it. Our brains may do well at adjusting
to lag so that we stop noticing it, but we don't seem to
be able to truly compensate for it in a way that puts us on
an even playing field against non-lagged players.


(Ob. ruby: Here's a routine that'll work from the command
line or as a require'd file that will query a quake2 server
for its status - including the ping times of all the players
connected, or send remote console commands to the server if
you have the rcon password.
http://tastyspleen.net/quake/servers/q2cmd-rb.txt
E.g. q2cmd tastyspleen.net 27910 status )
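
(Again not the linked script, just a guess at the shape of the rcon
half; the "\xff\xff\xff\xffrcon <password> <command>" packet format is
recalled from memory, so treat it as an assumption.)

# Send a remote console command to a quake2 server over UDP.
require 'socket'

def q2_rcon(host, port, password, command)
  sock = UDPSocket.new
  sock.send("\xff\xff\xff\xffrcon #{password} #{command}", 0, host, port)
  reply, _addr = sock.recvfrom(65_536)
  reply
ensure
  sock.close
end

# Usage, mirroring the example above:
#   puts q2_rcon('tastyspleen.net', 27910, 'secret', 'status')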


Regards,

Bill
 
