Out of memory problem

John Smith

We have some java applications deployed on tomcat.
From time to time the whole system freezes up.

It seems that there is not enough memory for the JVM. The maximum is
set to 896MB. Instead of just increasing the size, I'd like to know
what's causing the problem.

I'm trying to use the "-XX:printClassHistogram" parameter, but the log
file is not generated.

I put this in the tomcat5 config file:

JAVA_OPTS="-Xms512m -Xmx896m -XX:MaxPermSize=256m
-XX:HeapDumpPath=/tmp/java_pid<pid>.hprof
-XX:-HeapDumpOnOutOfMemoryError -XX:printClassHistogram"

I tried stopping tomcat afterwards, and also causing the OutOfMemory
problem, but the file is not created.

Am I doing something wrong?
Any suggestions?
 
Marcin Konopka

On 2010-09-02 John Smith said:
JAVA_OPTS="-Xms512m -Xmx896m -XX:MaxPermSize=256m
-XX:HeapDumpPath=/tmp/java_pid<pid>.hprof
-XX:-HeapDumpOnOutOfMemoryError -XX:printClassHistogram"

Two fixes: point HeapDumpPath at a directory (the JVM names the file
java_pid<pid>.hprof itself), and prefix both flags with "+" to enable
them:

-XX:HeapDumpPath=/tmp
-XX:+HeapDumpOnOutOfMemoryError -XX:+PrintClassHistogram

You can also set these options on a running jvm with jinfo;
similarly, you can view a class histogram or generate a heap dump
with jmap.
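
For example (a sketch; 1234 is a hypothetical pid, and these tools
ship with the JDK, not the JRE):

# enable the dump-on-OOM flag on a running VM
jinfo -flag +HeapDumpOnOutOfMemoryError 1234
# print a class histogram of live objects (triggers a full GC)
jmap -histo:live 1234
# write a binary heap dump to a file
jmap -dump:format=b,file=/tmp/heap.hprof 1234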

Best regards
MK
 
John Smith

-XX:+HeapDumpOnOutOfMemoryError -XX:+PrintClassHistogram

Thank you very much. I still have a couple of questions.
When does the above generate a file? Upon OutOfMemory error or on
process termination?
You can also set these options on a running jvm with jinfo;
similarly, you can view a class histogram or generate a heap dump
with jmap.

I used jmap, like jmap -histo -F pid, and got something like this:

num   #instances  #bytes     Class description
--------------------------------------------------------------
1:    183266      22430576   * ConstMethodKlass
2:    183266      14666160   * MethodKlass
3:    115336      10325256   char[]
4:    19461       10191176   * ConstantPoolKlass
5:    19461       8299040    * InstanceKlassKlass


This is just after a server restart, so I suppose I should just wait
some time, and if there is a leak it will show up at the top?

Any more recommendations?
 
Lew

John said:
Thank you very much. I still have a couple of questions.
When does the above generate a file? Upon OutOfMemory error [sic] or on
process termination?

It's called "HeapDumpOn..."?

The process does terminate, of course.
 
John Smith

Lew said:
It's called "HeapDumpOn..."?

The process does terminate, of course.

OK, but I'm interested in PrintClassHistogram.
I'm not sure if it works with the same trigger (OnOutOfMemoryError)
and if the destination file/directory is the same.
 
Lew

John said:
OK, but I'm interested in PrintClassHistogram.
I'm not sure if it works with the same trigger (OnOutOfMemoryError)
and if the destination file/directory is the same.

<http://blogs.sun.com/watt/resource/jvm-options-list.html>
"-XX:+PrintClassHistogram ... Prints the all the java [sic] heap objects,
their instance count [sic] and total space they occupy in the heap. The only
downside is that you need to issue a SIGQUIT (see -Xsqnopause) [sic] which
will leave the app running but will dump all of this data to stdout. Very
useful to assist in identifying memory problems [sic] for example on a
production platform where an [sic] CPU intensive [sic] profiler cannot be used."
 
John Smith

Lew quoted:
"-XX:+PrintClassHistogram ... The only downside is that you need to
issue a SIGQUIT ... which will leave the app running but will dump all
of this data to stdout."

I tried kill -QUIT pid before, but it did not generate anything.
 
Marcin Konopka

John said:
I tried kill -QUIT pid before, but it did not generate anything.

Stack trace dumps (and class histograms, in your case) are printed to
stdout, so with the default tomcat configuration you'll find them in
catalina.out.
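
For example, with a stock tomcat5 layout (the path is an assumption;
adjust to your install):

# watch the histogram arrive after a kill -QUIT
tail -f /var/log/tomcat5/catalina.out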

MK
 
Marcin Konopka

John said:
I used jmap, like jmap -histo -F pid, and got something like this:
[snip histogram]
This is just after a server restart, so I suppose I should just wait
some time, and if there is a leak it will show up at the top?

These are internal jvm objects, used for things like class bytecode,
string values, etc. Yes, you can wait some time and see which classes
grow the most. However, you should focus on your application classes
(if you suspect that it leaks memory), rather than on the jvm's
internal objects.

John said:
Any more recommendations?

You wrote that you're experiencing freezes, but did you get an
OutOfMemoryError? If you have freezes but no OOM, then you can try
other garbage collectors (e.g. the Throughput Collector:
-XX:+UseParallelGC).

It may also be the case that your problem lies in PermGen, not the
heap. To check this you can use jstat. Start the jvm with
-XX:MaxPermSize the same as -XX:PermSize, and -Xms the same as -Xmx,
and run jstat -gcutil at some interval. Start using the application,
and when the freeze occurs, take a look at the jstat output. If the
"P" column is at about 100%, then PermGen is your bottleneck. If it's
the "O" column, then it's the heap.
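
A minimal sketch (1234 is a hypothetical pid; the interval is in
milliseconds):

# print heap/PermGen utilization percentages every 5 seconds
jstat -gcutil 1234 5000

The columns are S0 S1 E O P YGC YGCT FGC FGCT GCT; "O" is the old
generation and "P" is PermGen, each as a percentage of its current
capacity.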

regards
MK
 
John Smith

Marcin said:
It may also be the case that your problem lies in PermGen, not the
heap. [...] If the "P" column is at about 100%, then PermGen is your
bottleneck. If it's the "O" column, then it's the heap.

I think PermGen is not the problem, because there is no error in the
Catalina.out. The funny thing is that there is no OutOfMemory error at
all. I figured it out by checking the system and process memory.
 
John Smith

Marcin said:
Stack trace dumps (and class histograms, in your case) are printed to
stdout, so with the default tomcat configuration you'll find them in
catalina.out.

So I should look only at Catalina.out? Is there no easy way to
redirect these two to some other output?
 
Lew

John said:
So I should look only at Catalina.out?

No, you should look at catalina.out. You might find the log files (pattern
like "catalina.2010-08-01.log") helpful.

John said:
There is no easy way to redirect these two to some other output?

I don't think there is, but catalina.out is an easy way itself.
 
Luuk

On 07-09-10 11:30, John Smith wrote:
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported
versions are 1.6.0-b09. Target VM is 17.0-b16

wrong version......?
 
John Smith

Luuk said:
wrong version......?

It seems strange.
What version is supported then?

Machine which works:

java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Server VM (build 16.3-b01, mixed mode)


Machine which doesn't work:

java -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
Java HotSpot(TM) Server VM (build 17.0-b16, mixed mode)


The version before that was 1.6.0_18, I think.
 
John Smith


Sorry for the last post. It seems that my jre version is different
from the jdk which has the jmap utility.
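
One way to avoid the mismatch (a sketch; the JDK path is an
assumption, use whichever JDK actually launched tomcat):

# run jmap from the same JDK version as the target VM
JDK_HOME=/usr/java/jdk1.6.0_21
$JDK_HOME/bin/jmap -histo 1234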
 
John Smith

Marcin said:
You wrote that you're experiencing freezes, but did you get an
OutOfMemoryError? If you have freezes but no OOM, then you can try
other garbage collectors (e.g. the Throughput Collector:
-XX:+UseParallelGC).

I experienced the freeze again, and I captured the logs manually with
jmap. It seems the memory allocation is OK. Java uses around 500MB.
The minimum is set to 512MB, the maximum to 1024MB, and MaxPermSize is
set to 512MB.

These options (-XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintClassHistogram) did not produce anything, probably because
OutOfMemory did not happen.

Anyway, I changed the minimum memory to 1024MB as well, and set
-XX:+UseParallelGC.

I read that with older java versions the garbage collector can freeze
if there is a big difference between the minimum and maximum amount of
memory set.
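
With those changes the line looks something like this (a sketch
matching the sizes above):

JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=512m -XX:+UseParallelGC"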


Are there any other ideas?
 
Kevin McMurtrie

John Smith said:
I experienced the freeze again, and I captured the logs manually with
jmap. It seems the memory allocation is OK. [...] Are there any other
ideas?

Try different versions of the JVM. As far as I know, all versions of
Sun Java 1.5 have GC bugs that may cause long stalls, JVM crashes, and
logically impossible NPEs. (I've seen 1.5 HotSpot unroll loops
incorrectly too.) Recent builds of Sun Java 1.6 seem to work much more
reliably.

Also check that you don't have any kernel bugs related to the high
resolution system clock. If GC timestamps go to 0.00, you have the bug
and GC will auto-tune itself to oblivion.
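
Once a timestamped GC log is in place (as set up later in this
thread), a quick check for that clock problem (a sketch; the log path
is an assumption):

# with -XX:+PrintGCTimeStamps each line starts with seconds since
# startup; values should be non-decreasing, and runs of 0.00 indicate
# the clock bug
grep -oE '^[0-9]+\.[0-9]+' /var/log/tomcat5/gc_logs/gc.log | sort -cn \
  && echo "timestamps OK"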
 
John Smith

Kevin said:
Try different versions of the JVM. [...] Also check that you don't
have any kernel bugs related to the high resolution system clock. If
GC timestamps go to 0.00, you have the bug and GC will auto-tune
itself to oblivion.

I tried the latest version of java, 1.6.0_21, but it's still the same.

Can you give me more information on what exactly to do with the GC?
 
John Smith

I asked:
Can you give me more information on what exactly to do with the GC?

Now I have set JAVA_OPTS to this:

JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -verbose:gc
-XX:+PrintGCTimeStamps -XX:+PrintGCDetails
-Xloggc:/var/log/tomcat5/gc_logs/gc.log"

I suppose this is enough?
 
Kevin McMurtrie

John Smith said:
Now I have set JAVA_OPTS to this:

JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -verbose:gc
-XX:+PrintGCTimeStamps -XX:+PrintGCDetails
-Xloggc:/var/log/tomcat5/gc_logs/gc.log"

I suppose this is enough?

Look up the options for verbose GC modes. You'll see exactly which
phase is stalling.
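
Beyond what is already in that JAVA_OPTS line, a couple of flags that
help pinpoint stalls (a sketch; both are standard HotSpot 1.6
options):

# how long application threads were actually stopped at each pause
-XX:+PrintGCApplicationStoppedTime
# how long they ran between pauses
-XX:+PrintGCApplicationConcurrentTime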
 
