size of .dlls

M

Mark

Is there a point when the size of an ASP.NET web project is too big and it really should be broken down into multiple projects? For example, there is a significant difference in .dll size between a web site that has 10 codebehind pages vs. 50 codebehind pages vs. 150 codebehind pages.

Is there a potential for performance problems at any point? Or do the
advantages of a single project typically outweigh the advantages of having
things broken up?

Thanks in advance.
Mark
 
R

Rajesh.V

Breaking your huge DLL into multiple ones makes sense. Every time you change even one page, the whole DLL gets recompiled and the app gets restarted. Also, if you split it into multiple DLLs and one of them hangs, it won't affect the others.
 
A

Alvin Bruney

Technically, you should construct your assembly so that it can fit on a page of memory. With this simple goal in mind, when the CPU executes code from your assembly it does not have to go to main memory to fetch it, because the code is small enough to fit into L2 cache, and you avoid soft page faults. Otherwise, if your assembly is too big, it needs to span several pages in memory. Every time the CPU requests instructions which aren't on the current page, you suffer a soft page fault while the appropriate page is loaded, and so on.

If your assembly is so big that it cannot fit into L2 cache, you will suffer harder faults, as the CPU must now load the appropriate page from RAM, which is far slower than L2 cache. God forbid your assembly is so big that it does not fit into RAM and has to be loaded off the hard disk; you should consider taking up some other profession at that point, because your customers will not be happy.

Measure the size of your assembly after a release build, find out from the hardware spec how big your L2 cache is, and you will have a very good idea of where you are and what you need to do to achieve the appropriate performance. FYI, Microsoft prefers to optimize its products for size rather than speed for that very reason.
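The measurement step can be sketched from the command line (a sketch, assuming a release build that drops its assemblies into `bin/Release`; adjust the path for your project's output folder):

```shell
# Print the size in bytes of each release-build assembly,
# for comparison against the L2 cache size (e.g. 256-512 KB).
for dll in bin/Release/*.dll; do
    size=$(( $(wc -c < "$dll") ))   # arithmetic expansion strips wc's padding
    echo "$dll: $size bytes"
done
```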

regards
 
A

Alvin Bruney

No. Ask yourself: where does the jitted code sit? Wherever it sits, it will pile up and page fault. The principle doesn't change.
 
M

Mark

This is intriguing. How, then, do you balance this with the practicality of keeping conceptual "applications" together? For example, say you have seven ASP.NET applications that use the same security method, similar connection strings/global variables, and an identical look and feel. If you break them up into seven ASP.NET projects to keep each codebehind .dll under the size of the L2 cache, you suddenly have 7 identical web.config files, 7 identical global.asax.cs files with forms authentication code, 7 identical sets of user controls, etc. Not to mention you can't share your session variables, because you have 7 different virtual directories.
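For concreteness, the duplicated forms-authentication section would look roughly like this in each of the seven web.config files (a sketch; the cookie name, login page, and timeout values are illustrative):

```xml
<configuration>
  <system.web>
    <!-- Identical in all seven sites: forms authentication -->
    <authentication mode="Forms">
      <forms name=".SHAREDAUTH" loginUrl="login.aspx" timeout="30" />
    </authentication>
    <!-- Deny anonymous users -->
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```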

You can still leverage custom controls or class libraries you've built
separately .... but looking at those "issues" listed above, I feel torn ...

I would LOVE some guidance on how to deal with these issues. Thanks in
advance!

Mark
(e-mail address removed)

 
A

Alvin Bruney

For this conceptual grouping, you can either add the applications to the same app domain or group them into one application pool, depending on whether you want to do the work programmatically or administratively. With seven different applications, the only way to share information is to push that data through a process boundary, using either remoting or a web service. For security reasons, the data inside applications, and the context from which they run, must remain isolated.

Custom controls or libraries don't solve the problem of marshalling data between these process boundaries either. The basic rule is: if it is in another process, you need to share data in a specified way, whether through marshalling, web services, or writing files to disk.

regards
 
M

Mark

Alvin,

Thank you. Reading your first sentence with its two options, are you implying that my options are:

1. Creating a single VS.NET project in which all the applications share the same files.
2. Creating multiple VS.NET projects that potentially have overlapping files, and/or share information with each other via remoting or web services.

Correct?

Secondly, doesn't this make a strong argument for going with the first option, simply because of all the additional development time and headache required to set up the second?

Thanks again. Your help is thoroughly appreciated.

Mark Field
(e-mail address removed)
 
A

Alvin Bruney

Yes, I'd go with option one for sure. If you are going to have such strong dependencies between the applications, it is a good idea to make these related applications part of the same solution, as you're describing. That way there is no marshalling, and access to data occurs on the same calling thread, which is quick and cost-efficient.
hth
 
R

Rajesh.V

Alvin, I thought this was true when we were dealing with old binary DLLs. But here the JIT compiler compiles only the block of code currently required and executes it. A page fault in .NET translates to hitting code that hasn't been compiled yet, or isn't in the cache kept by the JIT/CLR. So however big the assembly, the full code never gets compiled and loaded at once. So only Microsoft would know a good answer to this.
 
