Out Of Memory Error and Stack Size

karthikeyan.jambulingam

Dear all,

I got an out-of-memory error while trying to run my application with a
heap size of 600 MB (-Xmx600m) on a Linux (RH 9.0) box with 1 GB of RAM.
The error obtained is:

"Fatal: Stack size too small. Use java -Xss to increase default stack
size."

What does this error denote? Has anyone faced this?

The same application worked fine when the heap size was reduced to
300 MB (-Xmx300m). The application creates 150 native threads and uses
JDK 1.4.2.

However, the application worked fine on another Linux box with the same
configuration and a 600 MB heap. I was not able to find any reason for
this.

It would be really helpful if you could throw some insight on this
problem. Thanks in advance for the responses.

Regards
JK
 

Green

You are confusing heap size and stack size. The heap is where objects
are created. Each thread gets its own stack, which holds its method
calls and local variables. The default stack size of 512k may not be
sufficient for your 150 threads. To set the thread stack size use
-Xss<size>, for example -Xss512k.
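
To put that in a concrete launch line (MyApp below is just a placeholder
for your actual main class), the heap and per-thread stack sizes are set
independently:

java -Xmx600m -Xss512k MyApp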

Thanks.
 

karthikeyan.jambulingam

Hi,

If this is due to recursion, then how come it did not occur when the
heap size was reduced? It would be really helpful if I could get a clear
picture of what is what.

Thanks
JK
 

James McGill

What does this error denote? Has anyone faced this?

Are you legitimately going extremely deep in your nested method calls,
or do you have an error like an endless loop or recursion without a
termination condition?

There are situations where increased stack space is genuinely necessary,
such as on web services where many threads are created. But unless
you're doing something extraordinary, you probably have a recursive call
that's not terminating.
 

James McGill

If this is due to recursion, then how come it did not occur when the
heap size was reduced?

The heap is for live objects; the stack holds each thread's method
frames, local variables and object references. Thread stack size used
to be fixed at 2MB, which meant there was a hard limit to the possible
number of threads. Now it's tunable (-Xss), but I think it still
defaults to 2MB.
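
As a rough back-of-the-envelope sketch (the 2MB figure above may not be
exact for your VM): 150 threads x 2MB of stack is about 300MB, on top
of a 600MB heap plus the JVM's own overhead, which is already pushing
the 1GB of physical RAM on the box. With a 300MB heap the total is far
lower, which could explain why shrinking the heap makes the error go
away.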

It would be really helpful if I could get a clear picture of what is
what.

If you want a clear picture of what's going on in your JVM, especially
in areas like Memory Management and Threads, you might want to look at
JProfiler.
 

karthikeyan.jambulingam

Hi,

Thanks for all your replies.

I also know that heap size and stack size are different. But the
problem here is that the same application works fine on Windows and on
another Linux box irrespective of the heap size. If this is a recursion
problem, or the allowed stack size has been exceeded, then should it
not occur on all the machines? Am I sounding absurd?

Is there anything like a maximum stack size allowed for an application
in Linux? If so, how can we change it? Will stack size be affected by
any other factors?

Regards
JK
 
Roedy Green

If this is a recursion problem, or the allowed stack size has been
exceeded, then should it not occur on all the machines?

Different JVMs could implement the stack differently. Some may allocate
a fixed block. Some may allocate several chunks. Some may behave like
ArrayLists, growing as needed but maintaining a contiguous chunk.

There are all kinds of possible internal differences that could make a
program behave differently in the same amount of physical or virtual
RAM.

For a start, RISC code to do the exact same thing tends to be more
bulky than CISC code.

It is like trying to grab a balloon full of water. JVMs are permitted
to adapt and squirm.

One implementation may fill its RAM with versioned, highly optimised
code, another with disk cache, another ...
 

tom fredriksen

Hi,

Thanks for all your replies.

I also know that heap size and stack size are different. But the
problem here is that the same application works fine on Windows and on
another Linux box irrespective of the heap size. If this is a recursion
problem, or the allowed stack size has been exceeded, then should it
not occur on all the machines? Am I sounding absurd?

This is all normal. For example, different Linux distributions have
different settings for system resources, e.g. open files, max shared
memory, max threads/processes, temporary space, etc. Red Hat should be
tuned for enterprise use, so the fact that you experience this on Red
Hat is a bit surprising. In any case, you need to figure out which
system limit is hitting the roof, and then reconfigure that limit.

So when using a heap size of 300MB you are within the system's limits,
but when setting it to 600MB the limit is reached. Possible culprits
could be the size of the swap space or the /tmp partition. Try asking a
Linux group for further hints and tools.
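
A quick way to check those two (standard Linux commands; the exact
output format varies by distribution): 'free -m' shows how much RAM and
swap are in use, and 'df -h /tmp' shows how full the partition holding
/tmp is.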

/tom
 

Nigel Wade

Hi,

Thanks for all your replies.

I also know that heap size and stack size are different. But the
problem here is that the same application works fine on Windows and on
another Linux box irrespective of the heap size. If this is a recursion
problem, or the allowed stack size has been exceeded, then should it
not occur on all the machines? Am I sounding absurd?

Maybe yes, and maybe no. You may be flying very close to the wind on the other
Linux box, and just by chance (or some other local factor) you remain just
below the stack size limit.

Is there anything like a maximum stack size allowed for an application
in Linux? If so, how can we change it? Will stack size be affected by
any other factors?

On Linux the command 'ulimit -s' will tell you your per-process stack size
limit. But I believe this is different from the Java stack limit as Java
handles its own stack internally rather than on the process stack (is this
right?). Of course it's possible that a process stack overflow is caught by
Java and reported as above.
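
For example (a sketch; the numbers are illustrative, not taken from
your boxes):

ulimit -s          # print the current per-process stack limit in KB, e.g. 8192
ulimit -s 10240    # raise the soft limit for this shell before launching the JVM

Since ulimit is a shell builtin, the change only applies to processes
started from that shell.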

Have a look on both Linux systems to see what the stack size limit is. If there
is a difference see if you can get the limit increased on the box where it
fails.
 

Green

You may be running into a problem with the default stack size for
threads. With the 1.2 system the default is 128k, but for HotSpot on
Sparc it is 512k, and for HotSpot on Solaris Intel it is 256k (with
Linux Intel and Windows it is whatever the default stack size is when
creating a thread in the OS).

Reduce your stack size by running with the -Xss option.
For example: java -server -Xss64k

64k is the least amount of stack space allowed per thread.
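
If changing the VM-wide default is not an option, another route (a
minimal sketch, not from this thread; the class and names are made up)
is the Thread constructor added in JDK 1.4 that takes a per-thread
stackSize hint, which the VM may ignore on some platforms:

public class SmallStackWorkers {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder workload standing in for whatever the 150 threads do.
        Runnable work = new Runnable() {
            public void run() {
                // ... real work goes here ...
            }
        };

        long stackSize = 256 * 1024; // request 256k of stack per thread
        Thread[] workers = new Thread[150];
        for (int i = 0; i < workers.length; i++) {
            // Thread(ThreadGroup, Runnable, String, long stackSize), since JDK 1.4
            workers[i] = new Thread(null, work, "worker-" + i, stackSize);
            workers[i].start();
        }
        for (int i = 0; i < workers.length; i++) {
            workers[i].join();
        }
    }
}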
 
