Actually, that's *including* references, whose value is a pointer.
This is a question I got wrong the first time I took a practice Java
test. All parameter passing in Java is pass-by-value. Period. Passing a
reference by value is not the same thing as passing by reference.
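A minimal Java illustration of the distinction: the reference itself is copied, so reassigning the parameter is invisible to the caller, while mutating the object it points to is visible.

```java
// Java passes the reference itself by value.
// Reassigning the parameter changes only the local copy of the reference;
// mutating the object it points to affects the caller's object.
public class PassByValue {
    static class Box { int value; }

    static void reassign(Box b) {
        b = new Box();      // only the local copy of the reference changes
        b.value = 99;
    }

    static void mutate(Box b) {
        b.value = 42;       // follows the (copied) reference to the shared object
    }

    public static void main(String[] args) {
        Box box = new Box();
        reassign(box);
        System.out.println(box.value); // prints 0 -- caller's reference untouched
        mutate(box);
        System.out.println(box.value); // prints 42 -- same object was mutated
    }
}
```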
fair enough...
Right now, likely on your very computer, the (commercial) JVM (at least
in "-server" mode) will inline some things, enregister others, and
convert references into stack-based primitive values. It is already
applying the implementation strategies that have been mentioned here.
...
ok, the question then is the best way to determine this statically.
an issue at the moment is that my OO system and JVM-related components
are in different libraries/subsystems, so the level of inference is a
little more limited here.
as a general rule, the OO facilities don't even assume that object
layout is frozen (the ability to change object layout at run-time is
supported, with each object identifying the particular version of the
class which created it).
an example would be adding some new fields to a base-class, which
involves reorganizing all derived classes, which can cause all
subclasses to take on new versions (new objects then have the new
layout, but old objects keep the old layout).
a little performance is paid for this though (freezing layout for
certain classes would allow further optimizing things like field access
and method dispatch).
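the versioning idea above can be sketched roughly like this (a toy in Java, with invented names: each object records the class-version that laid it out, so old objects keep resolving fields through their old layout while new objects use the reorganized one):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch (invented names): per-object class-version, so layout can
// change at run-time without breaking already-allocated objects.
class ClassVersion {
    final Map<String, Integer> fieldSlots = new HashMap<>(); // field name -> slot index

    ClassVersion(String... fields) {
        for (int i = 0; i < fields.length; i++)
            fieldSlots.put(fields[i], i);
    }
}

class Obj {
    final ClassVersion version;   // which layout this object was created with
    final Object[] slots;

    Obj(ClassVersion v) { version = v; slots = new Object[v.fieldSlots.size()]; }

    Object getField(String name)           { return slots[version.fieldSlots.get(name)]; }
    void   setField(String name, Object v) { slots[version.fieldSlots.get(name)] = v; }
}

class LayoutDemo {
    public static void main(String[] args) {
        ClassVersion v1 = new ClassVersion("x", "y");
        Obj old = new Obj(v1);
        old.setField("x", 1);

        // base class gains a field "z"; layout reorganized into a new version
        ClassVersion v2 = new ClassVersion("z", "x", "y");
        Obj fresh = new Obj(v2);
        fresh.setField("x", 1);

        // both still resolve "x" correctly through their own version
        System.out.println(old.getField("x") + " " + fresh.getField("x")); // prints "1 1"
    }
}
```

the per-version indirection on every field access is exactly the performance cost mentioned above; freezing a class's layout would let those lookups become fixed offsets.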
in my VM at least, garbage objects can easily end up sitting around for
many minutes or more with the GC never realizing that they have become
garbage (my GC basically just uses concurrent mark/sweep, and garbage
will usually sit around until whenever is the next GC pass).
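a toy (stop-the-world, not concurrent) rendering of why that happens under mark/sweep: nothing is reclaimed between passes, so an unreachable object just sits on the heap until the next gc() call sweeps it.

```java
import java.util.ArrayList;
import java.util.List;

// Toy mark/sweep: garbage is only discovered during a pass, so
// unreachable objects linger on the heap until gc() runs.
class Node {
    boolean marked;
    final List<Node> refs = new ArrayList<>();
}

class ToyHeap {
    final List<Node> heap  = new ArrayList<>();
    final List<Node> roots = new ArrayList<>();

    Node alloc() { Node n = new Node(); heap.add(n); return n; }

    void mark(Node n) {
        if (n.marked) return;
        n.marked = true;
        for (Node r : n.refs) mark(r);
    }

    void gc() {
        for (Node n : heap) n.marked = false;   // clear marks
        for (Node r : roots) mark(r);           // mark everything reachable
        heap.removeIf(n -> !n.marked);          // sweep the rest
    }
}
```

until gc() is called, an object that has dropped out of the root set still occupies its heap slot, which is the "sitting around for many minutes" effect described above.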
Right now, likely on your very computer, the (commercial) JVM supports
a number of GC strategies that address the issues you raise, all
important issues in the GC world.
There are a number of white papers on the subject available on the
Oracle (formerly Sun) site.
The generational strategy deployed by default doesn't require the GC to
"realize that [objects] have become garbage". True, dead objects may
never be finalized or released, but only if they needn't be.
Otherwise, GC only cares about live objects for the most part, and for
the most part in an idiomatic Java program those constitute somewhere
around one in twenty.
Java's GC is triggered by a need for memory rather than an observation
of end of life for some objects. Also, Java offers a smorgasbord of GC
implementations to leverage, say, multi-core platforms.
I use my own implementation so that I have complete control over it
(and freedom to tinker with the internals, try out new ideas, ...),
never mind if it is slow or otherwise crap.
I use the Sun/Oracle JVM for other things though.
but, yeah, mine may be triggered either by running out of memory or by
exceeding a threshold (around 70% of heap used), at which point it will
try to run the GC and hope the app doesn't notice. sadly, there is no
good way to handle allocations mid-GC, since the GC is busy doing its
mark/sweep thing; handling them would require either "emergency back-up
memory" which is safe to allocate from during the GC, or interrupting
the GC in order to service allocation requests.
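the trigger policy just described (allocation failure, or usage crossing ~70% of the heap) can be sketched like this (names invented, numbers from the post):

```java
// Sketch of the trigger policy described above (invented names):
// collect when an allocation would not fit, or when usage would
// cross roughly 70% of the heap.
class GcTrigger {
    static final double THRESHOLD = 0.70;

    final long heapSize;
    long used;

    GcTrigger(long heapSize) { this.heapSize = heapSize; }

    boolean shouldCollect(long requestSize) {
        long after = used + requestSize;
        boolean outOfMemory   = after > heapSize;
        boolean overThreshold = (double) after / heapSize > THRESHOLD;
        return outOfMemory || overThreshold;
    }

    void allocate(long size) {
        if (shouldCollect(size)) {
            runGc();   // run the pass here and hope the app doesn't notice
        }
        used += size;
    }

    void runGc() { /* mark/sweep pass would go here */ }
}
```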
but, it is not a copying or generational GC, as these tend to conflict
with using the same GC for C code (the main use of the GC is actually
for stuff in C land). as a result, it is a conservative non-moving GC.
internally though, it does have some support for precise marking, ...
just no memory compaction or similar.
otherwise, I would have to use a different GC for HLL's than for C,
which has been done in the past, but this style fell into disuse mostly
because it tends to make things more awkward (and a concurrent precise
GC means much pain in registering and unregistering roots, which is
unreasonably awkward without explicit compiler support).
similarly, my current GC also allows optionally using reference counting
(in addition to mark/sweep), but reference counts bring the added
complexity of actually having to update them correctly everywhere they
are used, which is unworkable in "C in general", limiting their use
mostly to controlled areas.
ref-counting is done per-object (actually, it is technically used on all
objects, except that the default allocation mode has the ref count set
to 'many', effectively disabling the ref-count).
in general, the main optimization is to use it more like a malloc/free
allocator, where objects can be freed as soon as they are known to no
longer be needed.
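the "ref count set to 'many'" trick might look something like this (a hedged sketch with invented names: the count saturates at a sticky MANY value, which opts the object out of counting and leaves it for mark/sweep):

```java
// Sketch (invented names): every object carries a count, but the default
// allocation mode sets it to a sticky MANY, effectively disabling
// ref-counting for that object and deferring it to mark/sweep.
class Counted {
    static final int MANY = Integer.MAX_VALUE;  // saturated: never counts down

    int refCount;

    Counted(boolean counted) { refCount = counted ? 0 : MANY; }

    void retain() {
        if (refCount != MANY) refCount++;
    }

    // true when the object can be freed immediately, malloc/free style
    boolean release() {
        if (refCount == MANY) return false;     // opted out: wait for mark/sweep
        return --refCount <= 0;
    }
}
```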
otherwise, the garbage sits around until the next time the GC runs, thus
preventing the same memory from being used for other stuff.
some other pieces of code optimize things by running their own local
MM/GC systems, usually operating under the "fill memory and discard
everything when done" model, which is faster, but any output has to be
manually copied out. this strategy is thus far mostly limited to
compiler-related components, as these tend to produce a lot of garbage
fairly quickly (and don't mix well with the main GC).
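the "fill memory and discard everything when done" model is essentially an arena; a minimal rendering over a byte array (bump-pointer allocation, O(1) reset):

```java
// Minimal arena allocator: bump-pointer allocation out of one block,
// then reset() discards everything at once. Anything that must outlive
// the arena has to be copied out first, as noted above.
class Arena {
    final byte[] memory;
    int top;                        // bump pointer

    Arena(int size) { memory = new byte[size]; }

    // returns the offset of the new block, or -1 if the arena is full
    int alloc(int size) {
        if (top + size > memory.length) return -1;
        int offset = top;
        top += size;
        return offset;
    }

    void reset() { top = 0; }       // "discard everything when done", O(1)
}
```

this is why it suits compiler-style workloads: lots of short-lived allocations, one bulk discard, no per-object bookkeeping for the main GC to trace.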
or such...