Memory usage with many w3wp.exe processes

AN

Greetings,

We make an ASP.NET web application and host it for our customers. We have
provisioned hardware and hope to be able to service around 200 customers on
it. The web servers are in a stateless farm and have 2 GB of RAM. We are
using ASP.NET 1.1 with a dedicated application pool for each virtual
directory. Each customer gets their own virtual directory and a copy of the
ASP.NET application DLL in their bin folder, which creates a w3wp.exe process
for each customer. The problem is that each w3wp.exe takes from 40 MB to
180 MB of RAM. If the average is 128 MB, that means we'll only be able to
service something like 15 customers on this hardware before we run out of
RAM, and adding more RAM to the tune of a gigabyte for every 7 customers is
not a viable solution. We have used the CLRProfiler.exe tool and can see that
strings account for a good deal of the allocations, but it does not appear
that we are doing anything excessive in this regard; we use the StringBuilder
class whenever a lot of string concatenation is required.
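
For what it's worth, the pattern we follow is roughly this (a simplified,
made-up sketch for illustration, not our actual code):

    using System.Text;

    public class StringUsageSketch
    {
        // What we try to avoid: repeated concatenation allocates a
        // brand-new string on every pass through the loop.
        public static string Concatenate(string[] fragments)
        {
            string result = "";
            foreach (string s in fragments)
                result += s;
            return result;
        }

        // What we do instead: StringBuilder appends into one growable buffer.
        public static string Build(string[] fragments)
        {
            StringBuilder sb = new StringBuilder();
            foreach (string s in fragments)
                sb.Append(s);
            return sb.ToString();
        }
    }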

My questions are as follows:

1) Will the CLR release memory as we approach the limits and the OS begins
to ask for more memory for new w3wp.exe processes? Could it be that these
w3wp.exe processes are large simply because the CLR is being a bit lazy and
not releasing memory from the heap? If there were a need for more w3wp.exe
processes, would the CLR trim the existing ones back and make room?

2) The application caches ADO.NET DataTables in the HttpContext Cache
object. Some of these DataTables are fairly large (the largest is about 6 MB
if the data is persisted as an XML stream). In the case of one customer the
total size of these cached DataTables is 9.5 MB and their w3wp.exe is up at
186 MB, so if this is the culprit, why would 9.5 MB of cached data puff up
to 186 MB?
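
To be concrete, the caching code is essentially the following (a simplified
sketch; the cache key and expiration shown here are made up for illustration):

    using System;
    using System.Data;
    using System.Web;
    using System.Web.Caching;

    public class CacheSketch
    {
        // Sketch: cache one customer's DataTable in the ASP.NET Cache.
        public static void CacheCustomerTable(HttpContext context,
                                              string customerId,
                                              DataTable table)
        {
            context.Cache.Insert(
                "orders-" + customerId,          // hypothetical key
                table,
                null,                            // no CacheDependency
                DateTime.Now.AddMinutes(30),     // made-up absolute expiration
                Cache.NoSlidingExpiration);
        }
    }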

3) We have left our Application Pools at the default settings, which is to
recycle worker processes every 1740 minutes, shut down worker processes after
they have been idle for 20 minutes, and cap the request queue at 4000. It
seems that using the memory-based recycling features is not what we want,
because that dumps the entire w3wp.exe process, which then causes the dreaded
10-second .NET grind at the next page request; if we went that route it seems
we would wind up dumping w3wp.exe processes before they could even get fully
loaded. Does anyone have any tips for hosting large numbers of Application
Pools?
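
For reference, as far as I can tell these per-pool settings live in the IIS 6
metabase and could be scripted with adsutil.vbs along these lines (the pool
name below is made up, and the property names should be double-checked
against the IIS 6 documentation before relying on them):

    REM Sketch only: read and set recycling settings on one pool.
    REM "CustomerPool01" is a placeholder pool name.
    cd %SystemDrive%\Inetpub\AdminScripts
    cscript adsutil.vbs GET W3SVC/AppPools/CustomerPool01/PeriodicRestartTime
    cscript adsutil.vbs SET W3SVC/AppPools/CustomerPool01/PeriodicRestartTime 1740
    cscript adsutil.vbs SET W3SVC/AppPools/CustomerPool01/IdleTimeout 20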

I look forward to any informed replies!

Thanks in advance, AN

Juan T. Llibre

Hi, AN.

First of all, I'd suggest that you post this at

microsoft.public.inetserver.iis

news://msnews.microsoft.com/Microsoft.public.inetserver.iis

which is a more likely place to get answers to complex IIS questions like yours.

That having been said, here are a couple of comments:

1. Is it entirely necessary to run *each* application in its own App Pool?

Couldn't you run some sort of resource-usage test on your apps to determine
whether an application really merits being completely isolated from the
others?

IIS 6.0 is quite sturdy, and it can take advantage of its recycling features
to prevent processes from hanging indefinitely. Given that, unless all your
applications are large, you could easily pool your less demanding
applications into a common Application Pool with no objectionable consequences.

Just by doing that, you can probably substantially increase the number of
applications you can run per server.
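
As a rough illustration of the mechanics (the pool name and directory paths
below are placeholders, so verify the metabase paths on your own servers),
pointing several customers' virtual directories at one shared pool can be
scripted with adsutil.vbs:

    REM Sketch only: assign two existing applications to a shared pool.
    REM "SharedPool", site number 1 and the vdir names are placeholders.
    cd %SystemDrive%\Inetpub\AdminScripts
    cscript adsutil.vbs SET W3SVC/1/ROOT/CustomerA/AppPoolId "SharedPool"
    cscript adsutil.vbs SET W3SVC/1/ROOT/CustomerB/AppPoolId "SharedPool"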

2. Take a look at this white paper:

http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/webapp/iis/appisoa.mspx

It's a fairly complete guide to configuring Application Isolation in IIS 6.0.

You probably know a lot of what's in it,
but I bet you can pick up a tip or two from it.

Take a particularly close look at the "Performance and Scale" section,
where the paper explains how to use a shared desktop in order to
configure more than 60 Application Pools effectively.
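
If I remember the paper correctly, the shared-desktop behavior is controlled
by a registry value along these lines; please verify the exact name and value
against the paper before applying it anywhere:

    REM From memory, so double-check against the white paper first.
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters" /v UseSharedWPDesktop /t REG_DWORD /d 1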

From the paper:

"IIS 6.0 has been tested on a well-configured mainframe server running up
to 2,000 concurrent worker processes, each serving one application pool,
but not using unique identities.

In practice, a design of up to 500 simultaneous application pools is achievable."

Test what the white paper recommends.
If that is still not enough, I'd recommend opening a Support ticket.

Your case certainly merits using one.

best regards,

Scott Allen

Hey AN:

Have you considered putting multiple directories into the same
application pool? ASP.NET AppDomains scale much better than Windows
processes. I'd think this would be a viable approach since you are
executing your own code.

AN

Thanks Juan and Scott for the thoughtful replies - I appreciate it!

For the moment we are moving forward with using only a single App Pool for
each web server. While this does not offer the isolation of a dedicated App
Pool per customer, it saves at least 26 MB of RAM for each installation of
our product. I arrived at the 26 MB figure with a simple test: I created a
new virtual directory and App Pool, placed only the simplest possible .aspx
file in it (just a Response.Write("hello")), and watched the first browser
request eat up 26 MB immediately. Interestingly, if you place only static
HTML files in that directory, it eats up just 6 MB. So it would seem that
6 MB is for the App Pool itself and 20 MB is for an instance of ASP.NET.
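
The test page was nothing more elaborate than something like this:

    <%@ Page Language="C#" %>
    <%
        // The entire content of the test page: one inline Response.Write.
        Response.Write("hello");
    %>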

Anyway, this is doing much better: we are able to host 20 to 25 customers in
this configuration using about 1.3 GB of RAM on each web server in the farm.
My next thought was that we could increase the RAM in the web servers from
2 GB to 4 GB and perhaps get this solution up to 60 or 70 customers. But alas,
we are using Windows Server 2003 Web Edition, which has a limit of 2 GB. I had
never realized this limitation before. I think it is quite lame that the Web
Edition cannot really scale out, given how much RAM ASP.NET requires. Think
about it: if ASP.NET needs 2 GB of RAM or more on a web server in order to
host a lot of traffic for a given application such as ours, then a web-farm,
scale-out strategy is not really possible with Web Edition. You have to be
able to scale up a bit beyond 2 GB in order to scale out effectively. When
speaking with the ASP.NET support person at Microsoft, I asked him to pass
along my suggestion that Web Edition be renamed the "Personal Web Edition",
because it is really quite a toy if it can't use more than 2 GB of RAM,
particularly when the Microsoft web development technology of choice
(ASP.NET) is so RAM hungry.

I still don't have an answer to question number 1 that I asked originally.
It becomes even more important now that we are facing this dreaded 2 GB
memory limit. I think we will probably reformat these web servers and use
Standard Edition so that we can increase them to 4 GB of RAM each, which is
fairly inexpensive. It would be less expensive if we hadn't purchased that
stupid Web Edition first, however... So, is there anyone out there who can
answer the question? If we continue to install more and more instances of our
application and RAM usage climbs to and over the 2 GB mark, will the CLR
release memory that it does not really need? If so, will that mean tremendous
performance hits? I realize that no one can answer this scientifically
without knowing our application, but any general insights or experience are
certainly welcome.

FYI, I have not gotten any response from the microsoft.public.inetserver.iis
newsgroup.

Thanks in advance!

Scott Allen

Hi AN:

> But alas, we are using Windows Server 2003 Web Edition, which has a
> limit of 2 GB. I had never realized this limitation before. I think it
> is quite lame that the Web Edition cannot really scale out, given how
> much RAM ASP.NET requires.

Win2003 WE was never intended for heavy-duty hosting like yours, hence the
low price and the RAM limitation. Still, I can appreciate your frustration.

> I still don't have an answer to question number 1 that I asked
> originally. [...] If we continue to install more and more instances of
> our application and RAM usage climbs to and over the 2 GB mark, will
> the CLR release memory that it does not really need? If so, will that
> mean tremendous performance hits?

I'd use CLRProfiler again and drill into the Gen 2 area of the heap. Those
are the objects that, for one reason or another, are surviving garbage
collections and sticking around for a long time. Perhaps they don't need to.
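
If you want a quick way to watch those numbers on a live box without
attaching a profiler, the .NET CLR Memory performance counters are worth a
look. Here's a rough sketch of a small console watcher (the perfmon instance
name for a worker process is usually w3wp, w3wp#1, and so on; pass the one
you care about on the command line):

    using System;
    using System.Diagnostics;

    class Gen2Watcher
    {
        static void Main(string[] args)
        {
            // Which worker process to watch; defaults to "w3wp".
            string instance = (args.Length > 0) ? args[0] : "w3wp";

            PerformanceCounter gen2Size = new PerformanceCounter(
                ".NET CLR Memory", "Gen 2 heap size", instance, true);
            PerformanceCounter gen2Collections = new PerformanceCounter(
                ".NET CLR Memory", "# Gen 2 Collections", instance, true);

            Console.WriteLine("Gen 2 heap size:   {0:N0} bytes", gen2Size.NextValue());
            Console.WriteLine("Gen 2 collections: {0:N0}", gen2Collections.NextValue());
        }
    }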

In general, both the CLR and the OS will reclaim unused memory, but (speaking
generally) server applications are reluctant to give up "unused" space,
because it is likely they will need it again in the future.