Chris Uppal wrote (in two different posts):
There is certainly a similar concept (in fact I suspect that auto-boxing got
added to Java in part as a "me too" response to .NET). I don't know enough
about .NET (in particular its notion of value objects) to be sure that the
two concepts are actually the same.
Just to fill in a little, they are a little different, in that with
.NET, there are value types which can be user-defined much like
reference types can. Each value type has a corresponding object type
to/from which it is boxed or unboxed. The behavior is not tied to a
fixed list of primitive types.
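
For contrast, here's a minimal sketch of the Java side (ordinary
autoboxing, nothing hypothetical): the conversion applies only to the
eight built-in primitives and their wrapper classes, and there is no
way for a user-defined class to participate.

public class AutoboxDemo {
    public static void main(String[] args) {
        Integer boxed = 42;    // boxing: compiler inserts Integer.valueOf(42)
        int unboxed = boxed;   // unboxing: compiler inserts boxed.intValue()
        // The conversion exists only for boolean, byte, char, short,
        // int, long, float, and double; a user-defined class gets no
        // equivalent treatment.
        System.out.println(boxed + unboxed);
    }
}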
There are fairly well-known techniques for avoiding that overhead. The most
widely used is to encode "some" values directly in what would otherwise be
pointers (pointers to objects in almost any sane implementation will have some
spare low-bits since the objects won't be aligned on arbitrary
byte-boundaries).
This is certainly not a horrible idea, but it is possible to do
better. Runtime tagging carries some performance cost, and it also
sacrifices a few bits of range in the tagged types (admittedly less
important).
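
To illustrate the technique from the quote above, here's a rough
sketch. It's purely illustrative (the class name TaggedWord and the
tag convention are my invention; a real VM does this on raw pointers
below the language level), with a Java long standing in for the
machine word:

// Low bit 1 = a small integer stored immediately in the word;
// low bit 0 = a genuine pointer, which always has its low bit clear
// because heap objects are aligned.
public final class TaggedWord {
    private static final long INT_TAG = 1L;

    public static long fromInt(long value) {
        return (value << 1) | INT_TAG;   // the shift steals one bit of range
    }

    public static boolean isInt(long word) {
        return (word & INT_TAG) != 0;
    }

    public static long toInt(long word) {
        return word >> 1;   // arithmetic shift restores the sign
    }
}

Note that the tagged integer has 63 usable bits rather than 64, which
is exactly the range cost mentioned above.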
Alternatively, I suspect that the state of the art in dynamic optimisation has
advanced far enough that nearly all (performance critical) use of boxed
integers could be optimised away.
Yes, with qualifications. In the general case, this is a global
analysis, which makes it expensive even if possible. With a few simple
language changes, it's actually trivial. If I got to wave my magic wand
and change Java, here's what I'd do.
1. Everything is an object.
2. Variables (which are now always references) are not nullable by
default, and a special syntax exists to make them nullable.
3. Reference comparison is exposed ONLY through the default
implementation of Object.equals. a==b becomes shorthand for
(a is null ? b is null : a.equals(b)); a sketch of that desugaring
follows this list. If a class overrides equals (and doesn't expose
the original in some way), then reference comparison is not possible.
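
Written in today's Java, the desugaring for point 3 would read as the
helper below. This is a sketch of the proposed semantics, not current
behavior (where == on references still means identity):

public final class ProposedEquals {
    // What "a == b" would mean under change #3, written as ordinary
    // Java. (java.util.Objects.equals already implements exactly this
    // null-safe dispatch to a.equals(b).)
    static boolean eq(Object a, Object b) {
        return (a == null) ? (b == null) : a.equals(b);
    }

    public static void main(String[] args) {
        String x = new String("ab");
        String y = new String("ab");
        System.out.println(x == y);    // false today: identity comparison
        System.out.println(eq(x, y));  // true: the proposed meaning of ==
    }
}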
The result is a language in which it's easily type-verifiable that what
is logically a reference to some object can always be replaced by
storing a value directly in a variable under a set of conditions that
are trivially definable (namely: the class is immutable, final, and
completely hides reference comparison). Furthermore, the JIT compiler
can make judgement calls about when to do one versus the other,
depending on the size of the data in the object, how often the variable
is used in a context that requires a true object, and so on. Those
judgements are even amenable to statistical tuning from runtime
profiling data.
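
As a concrete, hypothetical example, a class like the following meets
all three conditions: final (no subclass can add identity-dependent
behavior), immutable (all fields final, no mutators), and reference
comparison hidden (equals is overridden and the identity version is
never re-exposed). The name Complex is just for illustration.

public final class Complex {
    private final double re;
    private final double im;

    public Complex(double re, double im) { this.re = re; this.im = im; }

    public Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Complex)) return false;
        Complex c = (Complex) o;
        return re == c.re && im == c.im;
    }

    @Override public int hashCode() {
        return Double.hashCode(re) * 31 + Double.hashCode(im);
    }
}

Since nothing observable depends on a Complex's identity, a JIT would
be free to represent one as two unboxed doubles in registers or on the
stack, boxing only where a true Object is demanded.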
But, of course, the ship has sailed for fundamental changes to
language design. Gotta save those changes for the next big language,
then.