max heap size on 32-bit Linux?


iksrazal

Hi all,

We've recently acquired a new dual-processor 3.0 GHz Xeon running SuSE
9.2. It has 4 gigs of RAM. However, the JVM is crashing daily. Our app
has never shown this behavior before, though it has never run on a
machine with more than 1 gig of RAM. It's a Tomcat webapp running
Hibernate. Here are the JVM args we pass to Tomcat:

CATALINA_OPTS="-server -Xms512m -Xmx2048m -Xincgc -XX:PermSize=128m
-XX:MaxPermSize=512m"

We're seeing errors like:

Exception in thread "CompilerThread0" java.lang.OutOfMemoryError:
requested 1801688 bytes for Chunk::new. Out of swap space?

and:

#
# An unexpected error has been detected by HotSpot Virtual Machine:
#
# SIGSEGV (0xb) at pc=0xb2b5027f, pid=3630, tid=38177712
#
# Java VM: Java HotSpot(TM) Server VM (1.5.0_06-b05 mixed mode)
# Problematic frame:
# J java.lang.StringCoding.encode(Ljava/lang/String;[CII)[B
#

This happens even though the machine's load average (from top) is under
1. The machine will see only light load until it hits production.

I did set the ulimit:

ulimit -s 3072

I've also tried 2048.

So my question is: can I safely allocate 2048 megs of heap on 32-bit
Intel running Linux, with 4 gigs total and nothing else heavy running?
What's the max I can allocate?
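One quick sanity check is to ask the JVM itself what it actually received. The sketch below (HeapCheck is an illustrative class name, not from this thread) just prints the Runtime memory figures; run it with the same flags as Tomcat, e.g. java -Xmx2048m HeapCheck. If the JVM refuses to start, the -Xmx value is already past what the OS will map.

```java
// Prints the heap sizes the running JVM actually obtained.
// Run with the same flags as the server, e.g.: java -Xmx2048m HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");
    }
}
```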

iksrazal
http://www.braziloutsource.com/
 

Roedy Green

Exception in thread "CompilerThread0" java.lang.OutOfMemoryError:

I saw that message for the first time a couple of days ago. I had
simply been running applets in Opera with JRE 1.5.0_06. I have not
yet tracked down what is going on.

There may be some app going nuts allocating stuff. Or it could be a GC
bug not collecting trash, or the JVM could be packratting. You need a
profiler to tell you what all the junk is that is accumulating.
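Before reaching for a full profiler, a crude heap-growth logger can already show whether used heap climbs steadily between collections. This is only a minimal sketch (HeapLogger is an invented name); a real profiler remains the right tool for finding what the junk actually is.

```java
// Minimal heap-growth logger: prints used heap every few seconds
// from a daemon thread, so a steady climb stands out in the logs.
public class HeapLogger implements Runnable {
    public void run() {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("used heap: " + usedMb + " MB");
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                return; // stop logging when interrupted
            }
        }
    }

    public static void main(String[] args) {
        Thread t = new Thread(new HeapLogger());
        t.setDaemon(true); // don't keep the JVM alive just for logging
        t.start();
        // ... application work would go here ...
    }
}
```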

There is one other horrible possibility: that a hardware error in RAM
gets reported as an out-of-memory error. I hope that is not so.

See http://mindprod.com/jgloss/profiler.html
and http://mindprod.com/jgloss/packratting.html
 

Douwe

The question is: why did you buy a big machine? Probably because
Tomcat was crashing all the time. Now you've got a bigger machine and
it still crashes. So are you running this all inside a cluster? If not,
then install something like VMware or Xen, install Linux twice on the
same machine, and then run a Tomcat on both Linux systems and run them
in cluster mode.

If this is absolutely not what you want to do, then you should A: check
whether you have enough swap space, since the error report says "Out of
swap space?", and/or B: report a bug at java.sun.com.
 

iksrazal

Douwe wrote:
The question is: why did you buy a big machine ? probably since the
tomcat was crashing all the time. Now you've got bigger machine and
still it crashes.

The JVM never crashed in our test environments, spanning several
different machines over a year's time, nor on any developer's machine.
We simply bought a bigger machine because the app is going into
production soon, and so far there's only been light testing.

iksrazal
http://www.braziloutsource.com/
 

masseyke

I saw strange error messages like this running my Java application on
Linux. The problem turned out to be a memory leak in some native code
I was using -- the Oracle JDBC drivers. The JVM was not running out of
memory -- the operating system was. Java was throwing an
OutOfMemoryError and other errors not because it had hit its maximum
heap size, but because the operating system would not allocate it any
more memory. Fortunately for me, there was a pure Java implementation,
and after I switched to that the problem went away.

I don't know if this helps you at all, but you might check into which,
if any, native code your application is using. It doesn't explain why
the problems would have started happening just recently, though.
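One way to tell the two cases apart, sketched below with the java.lang.management API (available since Java 5): if OutOfMemoryErrors appear while heap "used" is well below heap "max", the pressure is likely coming from native or non-heap allocations, as described above. MemKinds is an illustrative class name.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Prints heap vs. non-heap usage; an OutOfMemoryError with plenty of
// heap headroom points at native code or the OS, not the Java heap.
public class MemKinds {
    public static void main(String[] args) {
        MemoryUsage heap =
            ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        MemoryUsage nonHeap =
            ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        System.out.println("heap used/max:     "
            + heap.getUsed() + " / " + heap.getMax());
        System.out.println("non-heap used/max: "
            + nonHeap.getUsed() + " / " + nonHeap.getMax());
    }
}
```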

Keith
 

Luc The Perverse

iksrazal wrote:


The jvm never crashed in our test environments, spanning several
different machines over a years time. Nor on any developers machine. We
simply bought a bigger machine because the app is going into production
soon. And so far there's only been light testing.

iksrazal
http://www.braziloutsource.com/

It's possible the computer itself is unstable.

Test the RAM with a Knoppix or Linux install disk.

If it passes, run a burn-in test.
 

hussainak

Well, I would suggest installing the latest Red Hat and testing... try
with both gcj and the Sun JVM.
 

tom fredriksen

You could try using vmstat and similar tools to track system memory and
see which process is using what. That way you can at least verify where
the memory problem is.

There is a tool called memtest86 which performs everything from simple
to complex memory testing, which you could use to check for memory
problems. The complex tests can take a while though, perhaps a day or
two (it should run a couple of iterations).

There is also a Linux hardware diagnostics program (I don't remember
its name) which will test the entire system to see whether any devices
or components have problems.

If none of these turn up anything, then your best bet is probably to
profile or diagnose the program itself.

/tom
 

ossie.moore

This has to do with the maximum memory a single process can occupy. I
believe the *default* value most vendors use in workstation editions of
Linux is 2GB of RAM per process (minus some overhead). You can
reconfigure your kernel manually to allow more memory, or, if you are
not comfortable doing that, purchase an enterprise edition support
package.

Here's an old post I found by googling for "linux max memory per
process":
http://lists.samba.org/archive/linux/2005-November/014552.html
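The per-process limit matters because the heap is only one of several things that must fit in the 32-bit address space. Here is a back-of-the-envelope budget, assuming a 3GB-user/1GB-kernel split and made-up thread and stack numbers (none of these figures come from the original poster's machine):

```java
// Illustrative address-space budget for a 32-bit JVM; every number
// below is an assumption for the sake of the arithmetic.
public class AddressBudget {
    public static void main(String[] args) {
        long userSpaceMb = 3 * 1024; // user address space (3/1 split)
        long heapMb      = 2048;     // -Xmx2048m
        long permGenMb   = 512;      // -XX:MaxPermSize=512m
        long threads     = 200;      // assumed thread count
        long stackMb     = 1;        // assumed per-thread stack size
        long nativeMb    = 256;      // rough JVM code + native libraries

        long neededMb = heapMb + permGenMb + threads * stackMb + nativeMb;
        System.out.println("needed ~" + neededMb
            + " MB of " + userSpaceMb + " MB user space");
        System.out.println("headroom: " + (userSpaceMb - neededMb) + " MB");
    }
}
```

With these assumed numbers the total is about 3016 MB, leaving almost no headroom, and the heap itself must be mapped contiguously, which is why 2GB heaps are often the practical ceiling on stock 32-bit kernels.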
 