Java WebSphere web server on iSeries / other platforms

Pete

Hi,

I'm on a site where a third party application has been put on
WebSphere 4.0.5 on an iSeries 820/V5R2. We have had horrendous
performance / stability problems even with 10 users and even IBM
cannot seem to help.

What I am after is opinions / a comparison someone has done (web
link?) / documentation about Java web server performance on iSeries,
Intel, SPARC, etc.

Anything would be a great help.

Pete.
 
Scorpio

Pete said:
> Hi,
>
> I'm on a site where a third party application has been put on
> WebSphere 4.0.5 on an iSeries 820/V5R2. We have had horrendous
> performance / stability problems even with 10 users and even IBM
> cannot seem to help.

What problems? Can you be more specific?

Scorpio.
 
Pete

Scorpio,

The app is a basic WAR file using JTOpen to reach the client's
database, and it's been tested and is stable on other servers,
platforms, and OSs, but not on WebSphere on iSeries.
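
The data access boils down to plain JTOpen JDBC, roughly like this
(simplified, and the system name, credentials and table below are
placeholders, not the real ones):

import java.sql.*;

// Simplified sketch of the app's data access: standard JTOpen (Toolbox)
// JDBC against the client's database. All names here are placeholders.
public class JTOpenProbe {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:as400://MYISERIES", "USER", "PASSWORD");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM MYLIB.MYTABLE");
        if (rs.next()) {
            System.out.println("rows: " + rs.getInt(1));
        }
        rs.close();
        st.close();
        con.close();
    }
}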

We get sluggish performance, although on a multiple-run load test the
performance improves somewhat as time goes on.

I've tried all different heap sizes, profiled the app, and run it with
verbose GC, but always around the same user mark the used RAM flies
through the roof and the system hangs for up to 15 minutes. The app is
running in a separate partition with a 2 GB pool (I've even gone up to
6 GB on the shared pool).
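
For what it's worth, these are the kind of JVM settings I've been
cycling through (the values below are just one combination tried, and
the os400.* property names are per the iSeries Java system properties
documentation, so double-check them for your release):

# Set on the application server's JVM settings in the WAS console;
# the os400 properties take their sizes in kilobytes.
-verbose:gc
-Dos400.gc.heap.size.init=524288    # 512 MB
-Dos400.gc.heap.size.max=2097152    # 2 GB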

As for garbage collection, it goes OK, but at some point the live
objects exceed the collected objects, and then it improves again.
Sometimes we get GC number 22, say, kicking off while GC 21 is just
starting. GC 21 takes control, the live object count goes to 15 million
(normally it's around 3 million), and then GC 22 comes back afterwards.
It then takes a few cycles to get down to 6 million, but the whole time
the collected objects are only 1-2 million against a mass of live
objects. When this happens, performance goes massively downhill (from
being rubbish anyway).
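
For anyone wanting the same numbers: used-heap figures like these can
be sampled with nothing fancier than Runtime calls, along these lines
(a rough scrap, not production code):

// Background heap logger: prints used/total heap every few seconds so
// the spikes can be lined up against the verbose GC cycle numbers.
public class HeapLogger extends Thread {
    public void run() {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
            System.out.println("heap used " + usedKb + " KB of "
                    + (rt.totalMemory() / 1024) + " KB");
            try {
                Thread.sleep(5000); // sample every 5 seconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}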

This morning I increased the max thread count to 120 and the min to
50, and the test run performs better, but it's still bad. I noticed in
the Web Container / Threads counter that the thread count climbs, then
at some point does an "upside-down bathtub": it holds at 50-55 active
threads for a while, then shoots back down to 0-10, then climbs back
up again. Is this normal under continued load?

How can I see if threads are backing up?

Any help is appreciated.

Pete.
 
Scorpio

"Pete" <[email protected]> ha scritto nel messaggio

First of all, I have to say that I'm not an OS/400 expert. I have to
work with iSeries, but my experience is mainly with Java, so my answers
may not be the best you can get... Anyway:

> The app is a basic WAR file using JTOpen to reach the client's
> database, and it's been tested and is stable on other servers,
> platforms, and OSs, but not on WebSphere on iSeries.

OK, I suggest you use the native Toolbox (not the open source JTOpen,
which is very good on Linux or Windows WAS). You should find a lot of
suggestions in IBM's documentation...
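
For what it's worth, on the iSeries itself there is also IBM's native
JDBC driver, and switching is mostly a matter of driver class and URL.
Something like this (class names from memory, so please check them
against IBM's documentation before relying on this):

import java.sql.*;

public class DriverCompare {
    public static void main(String[] args) throws Exception {
        // Open source Toolbox / JTOpen driver: socket based, works anywhere.
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");
        Connection toolbox = DriverManager.getConnection(
                "jdbc:as400://MYSYSTEM", "USER", "PASSWORD");
        toolbox.close();

        // IBM native JDBC driver: only on OS/400 itself, goes straight
        // to the local database instead of through a socket.
        Class.forName("com.ibm.db2.jdbc.app.DB2Driver");
        Connection nativeCon = DriverManager.getConnection("jdbc:db2:*LOCAL");
        nativeCon.close();
    }
}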

> We get sluggish performance, although on a multiple-run load test the
> performance improves somewhat as time goes on.

I was told that OS/400 assigns different "resource slices" to programs
based on usage over time... that may be the reason for this behaviour.

> I've tried all different heap sizes, profiled the app, and run it
> with verbose GC, but always around the same user mark the used RAM
> flies through the roof and the system hangs for up to 15 minutes. The
> app is running in a separate partition with a 2 GB pool (I've even
> gone up to 6 GB on the shared pool).

Your iSeries is really powerful... can I ask which model? Anyway, 2 GB
is usually a huge amount for J2EE apps.

> As for garbage collection, it goes OK, but at some point the live
> objects exceed the collected objects, and then it improves again.
> Sometimes we get GC number 22, say, kicking off while GC 21 is just
> starting. GC 21 takes control, the live object count goes to 15
> million (normally it's around 3 million), and then GC 22 comes back
> afterwards. It then takes a few cycles to get down to 6 million, but
> the whole time the collected objects are only 1-2 million against a
> mass of live objects. When this happens, performance goes massively
> downhill (from being rubbish anyway).

Well, if you have 6 million objects in use, it means your system is
heavily loaded... how many requests does it serve?

> This morning I increased the max thread count to 120 and the min to
> 50, and the test run performs better, but it's still bad. I noticed
> in the Web Container / Threads counter that the thread count climbs,
> then at some point does an "upside-down bathtub": it holds at 50-55
> active threads for a while, then shoots back down to 0-10, then
> climbs back up again. Is this normal under continued load?
>
> How can I see if threads are backing up?
>
> Any help is appreciated.

Thread policy is totally managed by WAS; apart from a few settings,
there's not much you can do... Even if this behaviour were not normal
(I don't know whether it is or not, sorry), you can't do anything about
the threads...
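
If you just want a rough picture of whether threads are backing up,
one crude trick that needs nothing beyond plain JDK 1.3 (a throwaway
debugging servlet, so treat it as a sketch, and the class name is my
invention) is to walk the thread tree on demand:

import java.io.*;
import javax.servlet.http.*;

// Throwaway debugging servlet: lists every live thread in the JVM so
// you can see how many web container worker threads exist at once.
public class ThreadDumpServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        // Climb to the root thread group so all threads are visible.
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) {
            root = root.getParent();
        }
        Thread[] threads = new Thread[root.activeCount() * 2];
        int n = root.enumerate(threads, true); // true = recurse subgroups
        res.setContentType("text/plain");
        PrintWriter out = res.getWriter();
        out.println(n + " live threads:");
        for (int i = 0; i < n; i++) {
            out.println(threads[i].getName()
                    + " (prio " + threads[i].getPriority() + ")");
        }
    }
}

Hitting it repeatedly during a load run shows whether the worker thread
count keeps growing. On the iSeries side you can also look at the
server job's threads from WRKJOB, which gives much the same picture.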

Do you use EJBs? If so, those threads may be in a "ready" state,
waiting to be assigned to requests...

I'm afraid I haven't been a great help...
Scorpio
 
