OT: How to tell an HTTP client to limit parallel connections?

Grant Edwards

Yes, this is off-topic, but after a fair amount of Googling and searching
in the "right" places, I'm running out of ideas.

I've got a very feeble web server. The crypto handshaking involved in
opening an https: connection takes 2-3 seconds. That would be fine if
a browser opened a single connection and then sent a series of
requests on that connection to load the various elements on a page.

But that's not what browsers do. They all seem to open a whole handful
of connections (often as many as 8-10) and try to load all the page's
elements in parallel. That turns what would be a 3-4 second page load
time (using a single connection) into a 20-30 second page load time.
Even with plaintext http: connections, the multi-connection page load
time is slower than the single-connection load time, but not by as
large a factor.

Some browsers have user-preference settings that limit the max number
of simultaneous connections to a single server (IIRC the RFCs suggest
a max of 4, but most browsers seem to default to a max of 8-16).

What I really need is an HTTP header or meta-tag or something that I
can use to tell clients to limit themselves to a single connection.

I haven't been able to find such a thing, but I'm hoping I've
overlooked something...
 
donarb

> What I really need is an HTTP header or meta-tag or something that I
> can use to tell clients to limit themselves to a single connection.

There's an Apache module called mod_limitipconn that does just what you are asking. I've never used it and can't vouch for it. I'm not aware of anything like this for other servers like Nginx.

http://dominia.org/djao/limitipconn2.html
 
Skip Montanaro

> What I really need is an HTTP header or meta-tag or something that I
> can use to tell clients to limit themselves to a single connection.
>
> I haven't been able to find such a thing, but I'm hoping I've
> overlooked something...

That will only go so far. Suppose you tell web browsers "no more than
3 connections", then get hit by 30 nearly simultaneous, but separate
clients. Then you still wind up allowing up to 90 connections.

There should be a parameter in your web server's config file to limit
the number of simultaneously active threads or processes. It's been a
long time for me, and you don't mention what brand of server you are
running, but ISTR that Apache has/had such parameters.

(Also, while this is off-topic for comp.lang.python, most people here
are a helpful bunch, and recognizing that it is off-topic, will want
to reply off-list. You don't give them that option with
"(e-mail address removed)" as an email address.)

Skip
 
Grant Edwards

> That will only go so far. Suppose you tell web browsers "no more than
> 3 connections", then get hit by 30 nearly simultaneous, but separate
> clients.

In practice, that doesn't happen. These servers are small, embedded
devices on internal networks. If it does happen, then those clients
are all going to have to queue up and wait.

> Then you still wind up allowing up to 90 connections.

The web server is single-threaded. It only handles one connection at
a time, and I think the TCP socket only queues up a couple. But that
doesn't stop browsers from trying to open 8-10 https connections at a
time (which then eventually get handled serially).
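
(For anyone picturing the setup, here's a minimal Python sketch of what
a single-threaded server with a tiny listen backlog amounts to. It's
only an illustration, not the actual GoAhead code, and the port is made
up:)

    import socket

    # One accept loop, one connection serviced at a time; everything else
    # waits in a very small kernel backlog, roughly as described above.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))   # port chosen only for the example
    srv.listen(2)                 # only a couple of pending connections queue up

    while True:
        conn, addr = srv.accept()  # a browser's 8-10 connections arrive one by one
        try:
            conn.recv(4096)        # naively read the request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                         b"Connection: close\r\n\r\nok")
        finally:
            conn.close()           # only now is the next queued connection accepted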

> There should be a parameter in your web server's config file to limit
> the number of simultaneously active threads or processes.

The server is single-threaded by design, so it is not capable of
handling more than one connection at a time. The connections are
never actually in "parallel" except in the imagination of the browser
writers.

> It's been a long time for me, and you don't mention what brand of
> server you are running, but ISTR that Apache has/had such parameters.

FWIW, it's an old version of the GoAhead web server:

http://embedthis.com/products/goahead/

> (Also, while this is off-topic for comp.lang.python, most people here
> are a helpful bunch, and recognizing that it is off-topic, will want
> to reply off-list. You don't give them that option with
> "(e-mail address removed)" as an email address.)

Yeah, trying to hide e-mail addresses from automated spammers is
probably futile these days. I'll have to dig into my slrn config
file.
 
Chris Angelico

> I've got a very feeble web server. The crypto handshaking involved in
> opening an https: connection takes 2-3 seconds. That would be fine if
> a browser opened a single connection and then sent a series of
> requests on that connection to load the various elements on a page.
>
> But that's not what browsers do. They all seem to open a whole handful
> of connections (often as many as 8-10) and try to load all the page's
> elements in parallel.

Are you using HTTP 1.1 with connection reuse? Check that both your
client(s) and your server are happy to use 1.1, and you may be able to
cut down the number of parallel connections.
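
(If it helps, here's a quick way to sanity-check keep-alive from a
desktop machine with Python's http.client: two requests over one
connection. The address, the paths, and the self-signed-certificate
assumption are all placeholders:)

    import http.client
    import ssl

    # Skip certificate checks, assuming the device has a self-signed cert
    # (fine for a quick test, not something to leave in production code).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    # Placeholder address and paths; http.client speaks HTTP/1.1 and reuses
    # the connection unless the server closes it.
    conn = http.client.HTTPSConnection("192.168.1.50", timeout=10, context=ctx)

    for path in ("/index.html", "/style.css"):
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()  # drain the body before reusing the connection
        print(path, resp.status, resp.getheader("Connection"))

    # If the second request works without a new TLS handshake and the server
    # doesn't reply "Connection: close", persistent connections are in effect.
    conn.close()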

Alternatively, since fixing it at the browser seems to be hard, can
you do something ridiculously stupid like... tunnelling insecure HTTP
over SSH? That way, you establish the secure tunnel once, and
establish a whole bunch of connections over it - everything's still
encrypted, but only once. As an added bonus, if clients are requesting
several pages serially (user clicks a link, views another page), that
can be done on the same connection as the previous one, cutting crypto
overhead even further.

ChrisA
 
Grant Edwards

> Are you using HTTP 1.1 with connection reuse?

Yes. And several years ago when I first enabled that feature in the
server, I verified that some browsers were sending multiple requests
per connection (though they still often attempted to open multiple
connections). More recent browsers seem much more impatient and are
determined to open as many simultaneous connections as possible.

> Check that both your client(s) and your server are happy to use 1.1,
> and you may be able to cut down the number of parallel connections.
>
> Alternatively, since fixing it at the browser seems to be hard, can
> you do something ridiculously stupid like... tunnelling insecure HTTP
> over SSH?

Writing code to implement tunnelling via the ssh protocol is probably
out of the question (resource-wise).

If it were possible, how is that supported by browsers?
 
Nick Cash

> What I really need is an HTTP header or meta-tag or something that I can use to tell clients to limit themselves to a single connection.

I don't think such a thing exists... but you may be able to solve this creatively:

A) Set up a proxy server that multiplexes all of the connections into a single one. A reverse proxy could even handle the SSL and alleviate the load on the embedded server. Although it sounds like maybe this isn't an option for you?

OR

B) Redesign the page it generates to need fewer requests (ideally, only one): inline CSS/JS, data: URL images, etc. It's not the prettiest solution, but it could work.
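
(A rough Python sketch of the data: URL idea, in case it's useful. The
directory layout and file names are hypothetical, and a real build step
would want something sturdier than a regex:)

    import base64
    import mimetypes
    import re
    from pathlib import Path

    # Replace <img src="foo.png"> with a data: URI so the image arrives
    # inside the HTML itself instead of over a separate connection.
    def inline_images(html, root):
        def repl(match):
            src = match.group(1)
            data = (root / src).read_bytes()
            mime = mimetypes.guess_type(src)[0] or "application/octet-stream"
            b64 = base64.b64encode(data).decode("ascii")
            return match.group(0).replace(src, "data:%s;base64,%s" % (mime, b64))
        return re.sub(r'<img[^>]*\bsrc="([^"]+)"', repl, html)

    root = Path("www")  # hypothetical directory holding the page and images
    page = (root / "index.html").read_text()
    (root / "index.inlined.html").write_text(inline_images(page, root))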

-Nick Cash
 
Ian Kelly

> What I really need is an HTTP header or meta-tag or something that I
> can use to tell clients to limit themselves to a single connection.
>
> I haven't been able to find such a thing, but I'm hoping I've
> overlooked something...

No such header exists, that I'm aware of. The RFC simply recommends
limiting client connections to 2 per user, but modern browsers no
longer follow that recommendation and typically use 4-6 instead.

Do you really need to send all the page resources over HTTPS? Perhaps
you could reduce some of the SSL overhead by sending images and
stylesheets over a plain HTTP connection instead.
 
Grant Edwards

>> Yes. And several years ago when I first enabled that feature in the
>> server, I verified that some browsers were sending multiple requests
>> per connection (though they still often attempted to open multiple
>> connections). More recent browsers seem much more impatient and are
>> determined to open as many simultaneous connections as possible.
>
> Yeah, but at least it's cut down from one connection per object to
> some fixed number. But you've already done that.

>> Writing code to implement tunnelling via the ssh protocol is probably
>> out of the question (resource-wise).
>>
>> If it were possible, how is that supported by browsers?
>
> You just set your hosts file to point the server's name to localhost
> [...]

Ah, I see.

All I have control over is the server. I have no influence over the
client side of things other than what I can do in the HTTP server.
 
Grant Edwards

> I don't think such a thing exists...

Yeah, that's pretty much the conclusion I had reached.

> but you may be able to solve this creatively:
>
> A) Set up a proxy server that multiplexes all of the connections into
> a single one. A reverse proxy could even handle the SSL and alleviate
> the load on the embedded server. Although it sounds like maybe this
> isn't an option for you?

Indeed it isn't. These "servers" are embedded devices that are
installed on customer-owned networks where I can do nothing other than
what can be accomplished by changes to the firmware on the server.

> B) Redesign the page it generates to need fewer requests (ideally,
> only one): inline CSS/JS, data: URL images, etc. It's not the
> prettiest solution, but it could work.

That is something I might be able to do something about. I could
probably add support to the server for some sort of server-side
include feature. [or, I could pre-process the html files with
something like m4 before burning them into ROM.] That would take care
of the css and js nicely. Inlining the images would take a little
more work, but should be possible as well.
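
(A minimal Python take on that pre-processing step, as an alternative
to m4. File names here are hypothetical, and it only handles the simple
<link>/<script> patterns shown:)

    import re
    from pathlib import Path

    # Build-time step: pull external CSS and JS into the page before the
    # HTML is burned into ROM, so they don't need separate requests.
    def inline_css_js(html, root):
        def css(match):
            return "<style>\n%s\n</style>" % (root / match.group(1)).read_text()

        def js(match):
            return "<script>\n%s\n</script>" % (root / match.group(1)).read_text()

        html = re.sub(r'<link[^>]*\bhref="([^"]+\.css)"[^>]*>', css, html)
        html = re.sub(r'<script[^>]*\bsrc="([^"]+\.js)"[^>]*>\s*</script>', js, html)
        return html

    root = Path("www")  # hypothetical source tree for the ROM image
    out = inline_css_js((root / "index.html").read_text(), root)
    (root / "index.rom.html").write_text(out)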

I have vague memories of inline image data being poorly supported by
browsers, but that was probably many years ago...

Thanks for the suggestion!
 
Grant Edwards

> Do you really need to send all the page resources over HTTPS?

Probably not, but it's not my decision. The customer/client makes
that decision.

> Perhaps you could reduce some of the SSL overhead by sending images
> and stylesheets over a plain HTTP connection instead.

In theory, that could work, but some customers require use of
encryption.
 
Grant Edwards

> Hmm. Then the only way I can think of is a reverse proxy that can
> queue, handle security, or whatever else is necessary. Good luck. It's
> not going to be easy, I think. In fact, easiest is probably going to
> be beefing up the hardware.
>
> Oooh.... crazy thought just struck me. What's your source of entropy?
> Is it actually the mathematical overhead of cryptography that's taking
> 2-3 seconds,

Yes. AFAICT, it is. Some of the key-exchange options are pretty
taxing. I can speed things up by about a factor of 4 by disabling the
key-exchange algorithms that have the highest overhead, but those are
the algorithms that the SSL clients seem to prefer. I'm reluctant to
force them further down their preference list, lest I end up not being
able to support some clients.

> or are your connections blocking for lack of entropy?

Nope. The crypto libraries we're using don't do that. I'm not
entirely happy with the entropy generation used. I wish I had more
sources of "real" randomness, but at least they don't block.

> You might be able to add another source of random bits, or possibly
> reduce security a bit by allowing less-secure randomness from
> /dev/urandom.

It's not a Unix-like OS, but that's more or less what's happening.
 
Chris Angelico

> Nope. The crypto libraries we're using don't do that. I'm not
> entirely happy with the entropy generation used. I wish I had more
> sources of "real" randomness, but at least they don't block.
>
> It's not a Unix-like OS, but that's more or less what's happening.

And pop goes another theory. I knew you'd know what I mean by
/dev/urandom, regardless of the name you'd actually reference it by.
Oh well, worth a shot.

ChrisA
 
