How to enlarge Java heap size?

Marteno Rodia

Hello,
I've written an application which consumes a lot of memory. Along with
optimization, I hope I can increase the amount of memory allocated to the
JVM. I use the -Xmx and -Xms options, but it seems that a heap size
greater than 1 GB is not allowed. Is that true? Can I work around it
somehow?
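
For reference, the heap bounds are set on the java/javaw command line, and the JVM will normally refuse to start if it cannot reserve the requested -Xmx. A minimal sketch for checking what a given JVM actually accepted (the class name HeapCheck is just for illustration):

// Launch with explicit heap bounds, for example:
//   java -Xms256m -Xmx1024m HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reports the ceiling the JVM accepted (roughly -Xmx), in bytes
        System.out.println("Max heap:  " + rt.maxMemory() / mb + " MB");
        // totalMemory() is the heap actually committed so far
        System.out.println("Committed: " + rt.totalMemory() / mb + " MB");
        System.out.println("Free:      " + rt.freeMemory() / mb + " MB");
    }
}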
 
Marcin Rodzik

On 32-bit systems it is limited to 2 GB (or just under).

I'm not glad about that, but thanks! ;)
MR
 
Lew

Peter said:
I guess that depends on how much of an abstraction one believes Java
should provide. Java _could_ provide larger heaps by not depending
directly on the OS to allocate the space.

But it's not Java that makes such a thing the wrong thing to do.
It's my opinion that the current implementation is the correct one;
almost everyone who might take advantage of a larger Java heap on a
32-bit OS would quickly find that for performance reasons, they really
don't want to do that on a 32-bit OS.

Right. It's not Java's job to run with whatever metasetting (-Xmx greater than
2 GB) you throw at it; only to do so within the limitations of the host
environment. The JVM cannot magically buy you more RAM nor alter the OS's
memory-allocation system calls. The portability promise only extends to the
language, not the platform. That Java chooses not to work around (I shall not
say "solve") a limitation imposed by the 32-bitness of the host system is not
a flaw in Java but a refusal to do the unnecessary. They offer a 64-bit
version precisely to address that limitation.

I note that the same exact bytecode runs with identical functionality in the
64-bit version as in the 32-bit version, keeping Java's portability promise.
But I think it's safe to say that it's at least a _little_ bit "Java's
fault". It's not like it would have been _impossible_ for them to
support larger heaps on 32-bit OSs. :)

I understand that you are exercising some rhetorical license here, and I grant
you full marks for it.

You are absolutely correct in your analysis; however, it requires a somewhat
idiosyncratic, albeit somewhat defensible, definition of "fault". I propose we
give this one to Java's designers and not be too upset that they didn't try to
implement 64-on-32. If it's a sin to be pragmatic and reasonable, then yes,
Java is at fault.
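
As an aside, the portability point above is easy to observe: the same class file runs unchanged under a 32-bit and a 64-bit JVM and simply reports different ceilings. A rough sketch (sun.arch.data.model is a Sun/Oracle-specific property and may be absent on other VMs):

public class BitnessReport {
    public static void main(String[] args) {
        // Standard property: the architecture the JVM was built for
        System.out.println("os.arch:       " + System.getProperty("os.arch"));
        // Sun/Oracle VMs expose the data model (32 or 64); may be null elsewhere
        System.out.println("data model:    " + System.getProperty("sun.arch.data.model"));
        // The heap ceiling this particular VM accepted
        System.out.println("max heap (MB): " + Runtime.getRuntime().maxMemory() / (1024 * 1024));
    }
}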
 
Arne Vajhøj

Peter said:
I guess that depends on how much of an abstraction one believes Java
should provide. Java _could_ provide larger heaps by not depending
directly on the OS to allocate the space.
But I think it's safe to say that it's at least a _little_ bit "Java's
fault". It's not like it would have been _impossible_ for them to
support larger heaps on 32-bit OSs. :)

For all practical purposes impossible.

It is not just that Java would not be able to use the OS to
allocate memory. Java would not be able to allocate the memory
in any way.

So Java would not be able to use memory as backing for its variables.

In theory Java could use an SQL database as backing for variables,
but it would be a factor of a million slower.

Arne
 
Arne Vajhøj

Peter said:
Not really. There are lots of programs out there that deal with > 2GB
of data on a 32-bit OS. Video and audio editing software, for example.

Sure.

And they could also be written in Java.

But having a program able to handle data >2 GB through explicit
coding and having the Java compiler & JVM allow data
structures >2 GB transparently are two completely different
things.

So that proves nothing.
Java has complete control over the native code executed and how a data
structure is accessed. There's no reason at all it couldn't partition
an overly-large object and simply "window" access into it as needed.
This is, in fact, exactly what other applications that need to support >
2 GB data structures on 32-bit OSes do. Sometimes they work directly
with the file system, in other cases they let Windows do a lot of the
heavy-lifting by using memory-mapping and the virtual-alloc functions.
But they do it.

Which of those applications you mention does that transparently for
the code?
And in fact, this is basically what a virtual memory system does anyway,
when the available RAM is smaller than the object. It simply provides a
mapping between what the executing code wants to see at a particular
moment, and the underlying storage.

Virtual memory is implemented in the CPU itself.

It is a lot faster than executing code.
Yes, the native code generated (when native code is generated) would be
a lot more complicated. And it would be potentially slow, especially if
access to the large object was very random. But it's certainly doable.

Lots of things are doable. That does not make them practical or relevant.
The fact that _one_ possible implementation is impractical doesn't mean
that _all_ possible implementations would be impractical.

No. But any possible implementation would replace the CPU's address
translation with executing code and be a couple of orders of magnitude
slower.

If that were not the case, then I think somebody would have
implemented what you describe.

Arne
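
For what it's worth, the "windowing" Peter describes can already be done by hand in Java for file-backed data using java.nio memory mapping; it just is not transparent to the code, which is Arne's point. A rough sketch (the file name huge.dat and the 64 MB window size are arbitrary):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class WindowedAccess {
    private static final long WINDOW = 64L * 1024 * 1024; // 64 MB window, arbitrary

    public static void main(String[] args) throws Exception {
        // "huge.dat" is a placeholder; the file may be far larger than the heap
        RandomAccessFile raf = new RandomAccessFile("huge.dat", "r");
        FileChannel ch = raf.getChannel();
        long size = ch.size();

        long sum = 0;
        // Map one window at a time instead of holding the whole file on the heap
        for (long pos = 0; pos < size; pos += WINDOW) {
            long len = Math.min(WINDOW, size - pos);
            MappedByteBuffer window = ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
            for (int i = 0; i < len; i++) {
                sum += window.get(i) & 0xFF;
            }
        }
        System.out.println("byte sum = " + sum);
        raf.close();
    }
}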
 
Tom Anderson

Usually. Sometimes not.

[snip]

Good stuff, Thomas. This is an area I've always been rather interested in:
what would a machine with a VM at the very bottom of the stack look
like? I've long suspected it would be quite a bit faster, not to mention
more stable, and so this anecdote:
A friend of mine performs long running computations which involve many
random accesses in a large area (sieving for discrete logarithm). He
told me that by running his code (written in C) with a custom loader in
lieu of an OS, he increased speed by a factor of three

Was good to hear!
To sum up, things are not so easy, and this is (or used to be) an active
research area.

If people are interested in history, two significant attempts at building
OSs with 'soft' memory protection, which did not need virtual memory, were
Genera, written in LISP, and SPIN, written in Modula-3. I get the
impression that the silicon is now sufficiently fast that in most cases
it's not a big win, and so research on this has died out.

tom
 
Dan Polansky

Following up on the subject title "How to enlarge Java heap size?"
rather than on the recent discussion:

Is there a way to tell Java on the command line to take as much heap
size as the physical memory or the virtual memory of the computer
allows?

I have a launcher written in C for Windows that runs a Java
application. I can tell the launcher to pass "-Xmx1024M" to "javaw",
but then the launcher won't work on computers that have less memory,
which is why I use "-Xmx256M" to be on the safe side. What I would
really like to tell the JRE is: "Look, take as much memory for the heap
as the physical memory or the virtual memory of the computer allows." I
do not know in advance how much physical memory the user's computer
will have.

--Dan
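
One way to approach this (a sketch, not a drop-in answer): have the launcher ask the machine how much RAM it has and build the -Xmx value from that before starting javaw. The same idea expressed in Java rather than C, using the com.sun.management extension of OperatingSystemMXBean (Sun/Oracle JVMs only) and ProcessBuilder; MainApp and the half-of-RAM policy are placeholders:

import java.lang.management.ManagementFactory;

public class Launcher {
    public static void main(String[] args) throws Exception {
        // com.sun.management is a Sun/Oracle-specific extension; not on every JVM
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();

        long physicalMb = os.getTotalPhysicalMemorySize() / (1024 * 1024);
        // Placeholder policy: give the app half of physical RAM, but at least 256 MB
        long heapMb = Math.max(256, physicalMb / 2);

        // "MainApp" stands in for the real application's entry point
        new ProcessBuilder("javaw", "-Xmx" + heapMb + "m", "MainApp")
                .inheritIO()
                .start();
    }
}

On recent JVMs (8u191 and later) the -XX:MaxRAMPercentage flag does essentially this without any launcher logic, sizing the heap as a percentage of the machine's RAM.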
 
