Why, in class Boolean, is the hashCode() of "true" 1231 and of "false" 1237?
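For the record, these two values are fixed by the Boolean.hashCode() Javadoc: 1231 for true and 1237 for false, reportedly chosen as two convenient distinct primes. A quick check:

```java
public class BooleanHash {
    public static void main(String[] args) {
        // Values specified by the Boolean.hashCode() contract.
        System.out.println(Boolean.TRUE.hashCode());  // 1231
        System.out.println(Boolean.FALSE.hashCode()); // 1237
    }
}
```

Any two distinct values would tell true from false apart; primes merely behave a little better when the result is folded into larger composite hash codes.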


Lasse Reichstein Nielsen

Mark Space said:
Choose 10 random numbers between 1 and 16. How many will be duplicates?

On average it's approx. 2.4 collisions. Here we get three, which has a ~30%
chance of happening.
I.e., this really is nothing out of the ordinary.
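The 2.4 figure follows from the standard occupancy estimate: with n keys thrown uniformly into m buckets, the expected number of occupied buckets is m(1 - (1 - 1/m)^n), and every key beyond the first in a bucket is a collision. A sketch:

```java
public class CollisionEstimate {
    // Expected collisions when n keys land uniformly in m buckets:
    // n - m * (1 - (1 - 1/m)^n).
    static double expectedCollisions(int n, int m) {
        double pEmpty = Math.pow(1.0 - 1.0 / m, n);  // P(a given bucket stays empty)
        double occupied = m * (1.0 - pEmpty);        // expected occupied buckets
        return n - occupied;                         // surplus keys = collisions
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", expectedCollisions(10, 16)); // ~2.39
    }
}
```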

/L
 

Mark Space

Lasse said:
On average it's approx. 2.4 collisions. Here we get three, which has a ~30%
chance of happening.
I.e., this really is nothing out of the ordinary.

That's exactly what I was thinking, just too lazy to actually math it out.
 

public boolean

Lew said:
How do you figure? I see an exact 50%-50% distribution.

In actual practice rather than in theory. (Surely you don't believe that
in actual production code all of the integers from -2147483648 to
2147483647, inclusive, occur exactly as frequently as one another?)

If you look at the integers that pop up in actual usage, you'll find
that smaller integers are more common than larger ones; even ones more
common than odd; generally, ones with many factors are more common; ones
with larger powers of two as factors are more common; primes other than
2, 3, and 5 are relatively uncommon; and large primes are rare.

Loops from 0 to N-1 will be partly responsible for the small-integer
bias but partly masking the factor-based biases, since they'll hit
everything from 0 to N-1; in particular, 0 to 2N-1 has equal numbers of
odd and even integers and 0 to 2N has one extra even integer so odd/even
will be almost in balance in such loops.

Since such loops don't contribute to the possible-hash-key population
(when you're looping over the whole range of indices, you use an array;
when you use a sparse array, you use map.entrySet()) the bias in keys
will be greater than the bias in used integers in general, save for the
bias towards smaller integers, and that bias will nonetheless still be
present.
 

public boolean

Mark said:
I just want to point out that Java's hash map doesn't use a straight
mapping between object.hashCode() and the hash index values.

Interesting. That improves the situation somewhat for bit-correlated
input hashes, at a performance cost whenever a hash is added or looked
up instead of only when the hash is generated. (The hash of a particular
Object can be, and often is, calculated once and then stored. The
identity hash definitely is, as is the String hash. Therefore the hash
may be generated less often than it is used. The hash may also be used
other than in a hash table -- Object's toString seems to use it in
combination with the class name, for instance. Therefore the hash may be
generated more often as well. In practice, however, it is probably
generated less often on average.)
 

public boolean

Lasse said:
On average it's approx. 2.4 collisions. Here we get three, which has a ~30%
chance of happening.
I.e., this really is nothing out of the ordinary.

The example might have been better. The basic point remains -- the less
correlated, bit-correlated, or clustered the hash values, the better, as
a general rule.
 

public boolean

Lew said:
public said:
Eric said:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post baffles me.

Oh, Christ, it *is* Twisted again. I was afraid of that.

Quit hiding who you are, or are you embarrassed to admit it?

Re-plonk.

???

Can anyone else make any sense of this post?
 

public boolean

Eric said:
public said:
Eric said:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post baffles me.

I apologize for the tone of my response, and wish I had
written less antagonistically. However, I stand by the
content.

Including your insulting assertion that something I'd written was
"nonsense"? Or just the more technical and meaningful bits?

If the latter, apology accepted.

FWIW, you're not the only one that's written antagonistically. Lew has
written several more antagonistic posts here than you have, and one
completely baffling one that seems to have nothing whatsoever to do with
the post it replies to, or with Java, or with the price of tea in China
for that matter.
"Even integers are more common than odd" is a claim that
fails to convince me.

Fine. I stand by it nonetheless.
But even if it were true, I do not see how it could matter.
Have you studied the implementation of java.util.HashMap?

No. It might even be a Sun trade secret, for all that I know about it.
static int hash(int h) {
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

Watch it -- Sun might sue. :)

This might help, or just waste time, depending. If there are bit-wise
correlations between two hash values, they will remain in the output of
this in some transformed and subtler form. It looks like each output bit
depends on nine of the input bits, including at least three of the
highest several bits. Unfortunately that won't wipe out all correlation.
A low bit usually being clear (for example, in a population of mostly
even numbers) could affect the probabilities of several bits having a
particular value. Use of xor means one completely uncorrelated bit in
the input is enough to make the output bits it affects 50/50, at least,
but not enough to assure that they're uncorrelated with one another in
some way induced by the input's bit-biases.

In the case of Integer keys, most of the high bits will often be zero,
due to the small-number bias, which reduces the effectiveness of the
above further, since xor by zero is a no-op.

Multiplying by a prime number in the range suggested would work better
and might not be any slower on modern hardware.
That is, bit 0 of the effective hash value depends on all of
bits 0, 4, 7, 12, 16, 19, 20, 24, and 27 of the hashCode().
The choice of even-numbered or odd-numbered bucket depends
on nine bits of the hashCode(), not on bit 0 alone.

An improvement in that one particular case, yes. How does having most of
the higher bits be zero affect that? In the case of keys from 0 to 999,
that means bits 10 and up, leaving of the above only 0, 4, and 7.
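Eric's nine-bit claim is mechanical to verify: the supplemental hash is linear over GF(2), so flipping a single input bit flips output bit 0 exactly when that bit feeds into it. A sketch:

```java
public class SupplementalHashBits {
    // The JDK 6 HashMap supplemental hash quoted above.
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    public static void main(String[] args) {
        // hash(0) == 0, so hash(1 << i) isolates the effect of input bit i.
        for (int i = 0; i < 32; i++) {
            if ((hash(1 << i) & 1) != 0) {
                System.out.print(i + " ");
            }
        }
        System.out.println(); // prints: 0 4 7 12 16 19 20 24 27
    }
}
```

It also confirms the complaint above: for keys 0 to 999 only input bits 0 through 9 are ever set, so of the nine contributing bits just 0, 4, and 7 remain in play.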
Again, I apologize for the tone of my earlier message.
But I still think your arguments are wide of the mark.

I saw what could be a weakness in a particular hash function (one that
doesn't exist if you assume all Integer values will occur equally often,
mind you). I pointed it out. It may be somewhat mitigated by what you've
revealed here. It could be eliminated entirely, particularly the
small-number bias, as indicated previously. (The hash method you posted
above could do it, instead of the Integer hash, affecting every hash
used in HashMap. Though my preference would be for HashMap to be fast
and individual objects' hash calculations do the shuffling. Actually,
since the identities of objects used as keys should not change, there
really should have been some notion of immutable objects in Java from
the beginning, and equals and hashCode only applicable to those. Then
hashCode could have been coded to call a protected method to calculate
the actual hash when it's first used, mangle the result, and cache it.
The same bits discussed recently here as storing the identity hash could
have been used to store the object hash in general -- assuming identity
versus equals never matters for immutable objects, anyway -- and other
objects would use the identity hash and equals always. Indeed equals
could go away then and == be applied differently to immutable objects in
that case -- if identity-equals, true, otherwise equals() rather than if
identity-equals, true, otherwise false. But I suppose it's way, way too
late to make changes that drastic to Java!)
 

Arne Vajhøj

Lew said:
public said:
Eric said:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post baffles me.

Oh, Christ, it *is* Twisted again. I was afraid of that.

Quit hiding who you are, or are you embarrassed to admit it?

Re-plonk.

You.

You will need those GB for the kill file - he creates a new
identity almost every week.

Arne
 

Arne Vajhøj

public said:
Lew said:
public said:
Eric Sosman wrote:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post baffles
me.

Oh, Christ, it *is* Twisted again. I was afraid of that.

Quit hiding who you are, or are you embarrassed to admit it?

Re-plonk.

???

Can anyone else make any sense of this post?

Paul: Anyone that has read this group for a few weeks can.

Arne
 

Eric Sosman

public said:
Eric said:
public said:
Eric Sosman wrote:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post baffles
me.

I apologize for the tone of my response, and wish I had
written less antagonistically. However, I stand by the
content.

Including your insulting assertion that something I'd written was
"nonsense"? Or just the more technical and meaningful bits?

I apologized, and still apologize, for the tone, as I said.
And I stand by the content: Your arguments are nonsense, founded
on the numerological superstitions that (for reasons that have
never been clear to me) surround hash structures.
Even integers are more common
than odd, [...]

"Even integers are more common than odd" is a claim that
fails to convince me.

Fine. I stand by it nonetheless.

Evidence? We don' need no steenkin' evidence!
No. It might even be a Sun trade secret, for all that I know about it.

You clearly know very little (not an insult, but a fact
subject to objective verification). Have you ever installed
Sun's JDK? Have you ever looked at the ZIP of Java source files
provided as part of that JDK?
Watch it -- Sun might sue. :)

What's funny? Either Sun will sue, in which case there's
nothing funny about it, or you're just being silly, in which
case ditto. (Yes, the tone of this paragraph is antagonistic,
but IMHO you are begging for it.)
This might help, or just waste time, depending. If there are bit-wise
correlations between two hash values, they will remain in the output of
this in some transformed and subtler form. It looks like each output bit
depends on nine of the input bits, including at least three of the
highest several bits. Unfortunately that won't wipe out all correlation.
A low bit usually being clear (for example, in a population of mostly
even numbers) could affect the probabilities of several bits having a
particular value. Use of xor means one completely uncorrelated bit in
the input is enough to make the output bits it affects 50/50, at least,
but not enough to assure that they're uncorrelated with one another in
some way induced by the input's bit-biases.

*No* function of non-random data can eliminate the non-
randomness. No, not even the multiplication by phi that catches
your fancy. If the input values are non-random, any function of
those values will also be non-random, no matter how you choose to
scramble them. This is not news.
In the case of Integer keys, most of the high bits will often be zero,
due to the small-number bias, which reduces the effectiveness of the
above further, since xor by zero is a no-op.

Multiplying by a prime number in the range suggested would work better
and might not be any slower on modern hardware.

"Would work better?" Sort of depends on (1) the internals
of the hash implementation, of which you profess ignorance, and
(2) the distribution of the inputs, for which you offer only
unsupported claims that you "stand by, nonetheless." Evidence?
We don' need no steenkin' evidence!
An improvement in that one particular case, yes. How does having most of
the higher bits be zero affect that? In the case of keys from 0 to 999,
that means bits 10 and up, leaving of the above only 0, 4, and 7.

So your fears of overloading the even-numbered buckets (if
we're to believe your no-steenkin'-evidence claim about the
prevalence of even numbers) are diminished by a factor of four
at the least, right? And what's this "one particular case" you
refer to? Which integer value is the one you worry about? 42?
I saw what could be a weakness in a particular hash function (one that
doesn't exist if you assume all Integer values will occur equally often,
mind you). I pointed it out. It may be somewhat mitigated by what you've
revealed here. It could be eliminated entirely, particularly the
small-number bias, as indicated previously. (The hash method you posted
above could do it, instead of the Integer hash, affecting every hash
used in HashMap. Though my preference would be for HashMap to be fast
and individual objects' hash calculations do the shuffling. Actually,
since the identities of objects used as keys should not change, there
really should have been some notion of immutable objects in Java from
the beginning, and equals and hashCode only applicable to those. Then
hashCode could have been coded to call a protected method to calculate
the actual hash when it's first used, mangle the result, and cache it.
The same bits discussed recently here as storing the identity hash could
have been used to store the object hash in general -- assuming identity
versus equals never matters for immutable objects, anyway -- and other
objects would use the identity hash and equals always. Indeed equals
could go away then and == be applied differently to immutable objects in
that case -- if identity-equals, true, otherwise equals() rather than if
identity-equals, true, otherwise false. But I suppose it's way, way too
late to make changes that drastic to Java!)

I'm sorry, but I'm unable to extract sense from this
paragraph. You seem to be saying that List and StringBuilder
and Dimension should not have .equals() methods -- and although
I disagree with that assertion, it's a debate for some other
thread and has no obvious bearing on the matter of Integer's
hashCode(). Perhaps when I'm older and wiser I'll understand
you, but for the moment the logic of your contention eludes me.

And yes, I'm now being antagonistic. And I'm no longer
sorry for it.
 

Joshua Cranmer

public said:
No. It might even be a Sun trade secret, for all that I know about it.


Watch it -- Sun might sue. :)

Sun open-sourced their implementation of Java to create OpenJDK which,
starting with version 7, will be the official Java. Even before, the
source code was made public through the (I believe) Sun Research License.

I have referred numerous times to the OpenJDK codebase, even providing
links to a few files.
 

Joshua Cranmer

Eric said:
"Would work better?" Sort of depends on (1) the internals
of the hash implementation, of which you profess ignorance, and
(2) the distribution of the inputs, for which you offer only
unsupported claims that you "stand by, nonetheless." Evidence?
We don' need no steenkin' evidence!

"Premature optimization is the root of all evil."

The best hash functions require careful tuning for the distribution of
inputs. If another input distribution actually happens, the same hash
functions might present horrendous results. My set of integers probably
has a vastly different distribution from your set of integers.

If it ever turns out that the hash function is particularly bad for your
distribution, you could always write a wrapper class that picks a
different hash code.
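Such a wrapper might look like this; the class name and the mixing constant are my own illustrative choices, not anything from a library:

```java
// Hypothetical wrapper key that substitutes its own spreading for the
// wrapped value's hashCode(), per the suggestion above.
final class RehashedKey {
    private final int value;

    RehashedKey(int value) { this.value = value; }

    @Override
    public boolean equals(Object o) {
        return o instanceof RehashedKey && ((RehashedKey) o).value == value;
    }

    @Override
    public int hashCode() {
        int h = value * 0x9E3779B9; // multiplicative mix of our choosing
        return h ^ (h >>> 16);      // fold high bits down for small tables
    }
}
```

Used as map.put(new RehashedKey(k), v), it leaves equality semantics alone while replacing the hash distribution.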
 

Arne Vajhøj

Joshua said:
Sun open-sourced their implementation of Java to create OpenJDK which,
starting with version 7, will be the official Java. Even before, the
source code was made public through the (I believe) Sun Research License.

I have referred numerous times to the OpenJDK codebase, even providing
links to a few files.

Don't expect "public boolean" to know anything about Java.

Arne
 

public boolean

Arne said:
public said:
Lew said:
public boolean wrote:
Eric Sosman wrote:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post
baffles me.

Oh, Christ, it *is* Twisted again. I was afraid of that.

Quit hiding who you are, or are you embarrassed to admit it?

Re-plonk.

???

Can anyone else make any sense of this post?

Paul: Anyone that has read this group for a few weeks can.

Who is Paul? I don't see that name anywhere in this thread. Closest
matches are a Peter and a Patricia. My guess would be the former, due to
the common phrase "robbing Peter to pay Paul" that might lead them to be
more readily mixed up than the other possible pair, Paul and Patricia.

Not that it really matters, since it has nothing to do with hashes or
even with Java at all.

Anyway, I was obliquely asking for an explanation, if any rational
explanation existed, for the seemingly non-sequitur response to my post.
Since I have *not* read this group for a few weeks, I genuinely have no
idea what this is all about. But my guess at this point is that a) it
has nothing to do with Java, b) it has nothing to do with anything that
I wrote in the post to which Lew replied, and c) it has nothing to do
with rationality.

Therefore, it is probably best not to pursue the matter here, as this is
not the appropriate newsgroup. Lew's non-sequitur followup, in fact,
should probably have been posted elsewhere, or emailed to someone,
rather than posted here, since it appears to be completely irrelevant to
hashing integers and completely irrelevant to anything else in Java.

As for what is the most appropriate newsgroup, the first that occurs to
me (purely by guess) is alt.conspiracy, followed by its several close
relatives, then sci.psychology.psychotherapy, because the best place for
Lew to disclose paranoid fantasies (or whatever that stuff is) would
likely be to a qualified therapist, in my (uninformed by any medical
expertise) opinion.
 

public boolean

Arne said:
Lew said:
public said:
Eric Sosman wrote:
Nonsense. If the input consists of 100 zeroes, 10 ones,
and one each of 2..9, the hashes will consist of 100 h(0)s,
10 h(1)s, and one each of h(2..9). Nothing has changed,
except possibly for the worse if h(0)==h(6), say.

[... more pseudo-numerological nonsense ...]

Your insulting attitude in response to a perfectly civil post baffles
me.

Oh, Christ, it *is* Twisted again. I was afraid of that.

Quit hiding who you are, or are you embarrassed to admit it?

Re-plonk.

You.

You will need those GB for the kill file - he creates a new
identity almost every week.

Who does? I just did a quick search of this newsgroup for "author:
Twisted" and don't see anything after about 2006 or so.

Regardless, this appears to have nothing whatsoever to do with Java.
 

public boolean

Eric said:
I apologized, and still apologize, for the tone, as I said.
And I stand by the content: Your arguments are nonsense

Apologies are not likely to be taken seriously when you follow them up
by immediately re-committing the same act for which you were asked to
apologize.

When my kids apologize after I catch them with their hands in the cookie
jar, then ten minutes later I catch them with their hands in the same
jar again, I ground the little rotters for a week!
founded on the numerological superstitions

Your unwanted, off-topic, and hostile speculations about me are 100%
wrong. If you'd bothered to actually get to know me before passing
judgment you'd have discovered that I'm one of the least superstitious
people in existence.

My remarks about some numbers occurring, in practice, more commonly than
others are based on statistics and experience, not on Kabbalah or the
Revelation of John* or whatever other nonsense you seem to incorrectly
think to have been the basis.

* This is apparently its correct name, not "Revelations".
Evidence? We don' need no steenkin' evidence!

Experience. Take a look at the integer constants defined in your own
code sometime. You'll find some sequential runs, but anywhere you have
bit fields, initial sizes/capacities, buffer lengths, and whatnot you'll
find a preponderance of even numbers, and often larger powers of two,
numbers divisible by large powers of two (1920, in a HD display pixel
width; 2560 in some other setting; 256 in a file-header size) or by
powers of ten (so, two and five) (private static final int INITIAL_SIZE
= 100, and so forth).

The numbers that come up in actual practice are not uniformly
distributed; they statistically clump near some values, especially zero,
and prefer to have many factors and especially powers of two.

This in turn affects the distribution of any naive hash function of
same, making that clumpy.

A clumpy hash function remains somewhat clumpy under simpler
bit-twiddling massaging and may result in an elevated rate of collisions
when placed in any array of hash buckets smaller than 2147483647 or so.

There is nothing whatsoever "numerological" about this. It is
statistics, experience, and common bleeding sense.
You clearly know very little (not an insult, but a fact
subject to objective verification).

About Sun's licensing? My concern was more towards learning the language
itself. I do recall some big scary license document about not sharing
copies of the JDK's contents willy-nilly, which seemed silly since Sun
apparently lets any Tom, Dick, and Harry download it from their servers,
but legalities are legalities, however silly they might seem to a
non-lawyer such as myself.
Have you ever installed Sun's JDK?

Of course. Does anyone post here that hasn't? Aside from the damn
spammers, of course.
What's funny? Either Sun will sue, in which case there's
nothing funny about it, or you're just being silly,

which is funny.
Yes, the tone of this paragraph is antagonistic, but IMHO you
are begging for it.

Wrong again. I have learned something useful, however: you have no sense
of humor. So I won't bother trying to be funny here again, lest I be
insulted by you again for my efforts. Even though there remains at least
the theoretical possibility that one or more other people here might
actually appreciate my humor. I guess because of you they get to lose
out. Well, at least now they know who to blame for that.
*No* function of non-random data can eliminate the non-
randomness.

That's not the point. It's the non-uniformity, or clumpiness, that bears
smoothing out, since clumps are likely to be assigned to a relatively
small subset of hash buckets and thus have relatively many bucket
collisions internally. (Remember that even non-colliding hashCode values
may result in bucket collisions when placed in an actual hash table of
any size smaller than 2^32 -- so, probably, any size at all, since it's
likely the limit is half that. The risk grows the smaller the actual
hash table. The risk of avoidable collisions in clumps grows likewise.)
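The effect being described can be shown with a toy power-of-two index (a sketch using plain h & (n-1) with no supplemental mixing, to expose the raw clumping; Integer.hashCode() is the identity):

```java
import java.util.HashSet;
import java.util.Set;

public class ClumpyBuckets {
    // How many of `buckets` slots are touched by keys 0, step, 2*step, ...
    static int bucketsTouched(int keyCount, int step, int buckets) {
        Set<Integer> used = new HashSet<>();
        for (int i = 0; i < keyCount; i++) {
            int key = i * step;
            used.add(key & (buckets - 1)); // naive power-of-two indexing
        }
        return used.size();
    }

    public static void main(String[] args) {
        // Keys that are all multiples of 4 reach only a quarter of the table.
        System.out.println(bucketsTouched(16, 4, 16) + " of 16 buckets"); // 4 of 16
    }
}
```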
No, not even the multiplication by phi that catches your fancy.

That multiplication reduces clumpiness in the output to a minimum, given
a certain clumpiness of the input, short of using a cryptographically
secure hash that would probably be significantly slower to compute. I've
already given a sketch of the mathematical explanation for why.
"Would work better?" Sort of depends on (1) the internals
of the hash implementation, of which you profess ignorance,

Unless it's being massaged by a similar multiplication, or fed through a
cryptographic routine, it's doubtful that clumpiness of the input
doesn't elevate the rate of bucket collisions.
and (2) the distribution of the inputs, for which you offer only
unsupported claims that you "stand by, nonetheless." Evidence?
We don' need no steenkin' evidence!

They are not unsupported. They should honestly be self-evident to anyone
with much experience working with numbers in any kind of actual applied
context. The numbers we actually use are not uniformly distributed.
Randomly pick an integer from those you've used today and odds are good
it will be much smaller than 2147483647, fairly good that it will be
positive, fairly good that it will be even, and fairly good that it will
be either single-digit or have relatively many factors.

You seem to be suggesting that you take seriously the notion that an
integer that actually arises in practice is equally likely to be any of
the 4294967296 possible int values in Java.

Since your hostility towards my claims does not make any sense otherwise.

Essentially, you say that my claim that an integer that actually arises
in practice is *not* equally likely to be any of the 4294967296 possible
int values in Java is "unsupported" and therefore we should not believe it.

I say that you are being ridiculously anal to demand some high standard
of evidence for what, begging your pardon, is the bleeding obvious!
So your fears of overloading the even-numbered buckets (if
we're to believe your no-steenkin'-evidence claim about the
prevalence of even numbers) are diminished by a factor of four
at the least, right?

That bit-twiddling will make the clumpy distribution different and
somewhat less clumpy, but it will not eliminate the clumpiness, or even
the tendency for many of the bits to be zeros (many more than half of
them). These will have their (statistical) effects. Those effects won't
be as bad as in the worst-case, but won't be zero either.
I'm sorry, but I'm unable to extract sense from this
paragraph.

I'm sorry, but you're unable to extract sense period, as near as I have
been able to determine.

Seriously. If you honestly think there's a snowball's chance in hell
that all 4294967296 int values actually occur with equal frequency in
typical production software, then you have no business posting to this
newsgroup and probably have no business being put in charge of designing
any tool more important than a Nerf-branded one.
You seem to be saying that List and StringBuilder and Dimension
should not have .equals() methods

More that Dimension should be immutable, mutable-List and StringBuilder
equals should be ==, and there should be an immutable List with the
present List.equals for its equals.

Equality with X, and hash code, should really be lifetime-constant
properties of a thing, after all, for mathematical reasons and to get
rid of the mutable-keys-in-a-map-can-screw-it-up problem.
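The mutable-keys hazard mentioned here is easy to reproduce; MutableKey below is a deliberately bad, hypothetical class whose equals and hashCode depend on mutable state:

```java
import java.util.HashMap;
import java.util.Map;

// Deliberately mutable key (hypothetical) to demonstrate the hazard.
final class MutableKey {
    int x;

    MutableKey(int x) { this.x = x; }

    @Override
    public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).x == x;
    }

    @Override
    public int hashCode() { return x; }
}

public class MutableKeyDemo {
    static String lookupAfterMutation() {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "here");
        key.x = 99; // hashCode changes; the entry stays filed under the old one
        return map.get(key); // the wrong bucket is searched
    }
}
```

With the default 16-bucket table, hashes 1 and 99 land in different buckets, so the lookup returns null even though the very same key object went in.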
Perhaps when I'm older and wiser I'll understand you, but for the
moment the logic of your contention eludes me.

That's okay. I am routinely in the position of not being understood by
those younger and less wise than I -- I have kids. I've had worse lip
from them, too, than I've so far received from you. :)
 

public boolean

Joshua said:
"Premature optimization is the root of all evil."

The best hash functions require careful tuning for the distribution of
inputs. If another input distribution actually happens, the same hash
functions might present horrendous results. My set of integers probably
has a vastly different distribution from your set of integers.

It seems unlikely that multiplying by a (negation of a) prime close to
(1.61803398*2147483648)-4294967295 is going to ever make anything worse.

Since the factor is, by hypothesis, prime to the modulus of int
arithmetic, it will map unique inputs to unique outputs, i.e. induces a
one-to-one mapping of the int values. And it will make your typical
clumpy input distribution into a fairly uniform output one, cheaply
though not cryptographically-strongly*.

On the flip side, about the only distribution that will be made worse by
it (clumpier and with more bit-correlations, though no narrower) will be
the one you get by multiplying the usual distribution by the prime's
inverse modulo 4294967296.

That is quite probably the least likely input distribution of all that
might occur in practice.

And if you ever do run into it, you can always use a wrapper class. ;)

* For secure hashing, Java provides specialized classes already. It can
compute md5 and sha hashes of binary data, at minimum, and these can be
reduced to 32 bits. It just can't do it as quickly as desired for
general hash-table-use purposes, where security of the hash function
against reversibility and deliberate collision-engineering is typically
not a concern, only minimizing the frequency of accidental collisions.
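What is being described here is essentially Fibonacci (multiplicative) hashing. A sketch using the familiar constant 0x9E3779B9, about 2^32/phi (the exact negated prime proposed above is a nearby variant):

```java
public class FibonacciHash {
    private static final int PHI = 0x9E3779B9; // ~2^32 / golden ratio, odd

    // Odd multiplier => invertible mod 2^32, so distinct ints stay distinct.
    static int hash(int x) {
        return x * PHI;
    }

    // Take the HIGH bits, which is where the multiply does its mixing.
    static int bucket(int x, int tableBits) {
        return hash(x) >>> (32 - tableBits);
    }

    public static void main(String[] args) {
        // Small, all-even keys still spread across a 16-bucket table.
        for (int k = 0; k <= 8; k += 2) {
            System.out.println(k + " -> bucket " + bucket(k, 4));
        }
        // buckets: 0, 3, 7, 11, 15
    }
}
```

The one distribution it worsens, as noted above, is the preimage of a clump under the multiplier's modular inverse, which is unlikely to occur naturally.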
 

public boolean

Joshua said:
Sun open-sourced their implementation of Java to create OpenJDK which,
starting with version 7, will be the official Java.

But it is not yet.

The Java I'm aware of and that I'm discussing is Java 5, actually. I
probably should get Java 6. I heard it had problems with RMI and one or
two other things though.
I have referred numerous times to the OpenJDK codebase, even providing
links to a few files.

It looked like you were referring to some third-party Java
implementation. It has been my experience that usually when large
business X has a non-open-source product Foo and somewhere there's an
OpenFoo, the latter is a community-created at-least-somewhat-compatible
clone not endorsed or supported by X. Put Foo = JDK5 (or 6) and you'll
see where I'm coming from.

I guess, though, at least starting with version 7, OpenJDK won't be
third-party, so code in it is going to be core Java code in the
relatively near future, making it an exception to the rule.
 

public boolean

Arne said:
Don't expect "public boolean" to know anything about Java.

I know quite a lot about Java, thank you very much.

*google google*

But I'm less sure about you. After this gratuitous and uninformed
insult, following two completely off-topic posts by you, I decided to
have a look at your posting history here, and it seems that you prefer
to spend your time insulting and browbeating people rather than actually
discussing Java very much at all.

Now I wonder what the purpose of your participation in this newsgroup is
-- and what you think the purpose of this newsgroup is!

But I guess that is off-topic here. Follow-ups set appropriately.
 

Joshua Cranmer

public said:
I guess, though, at least starting with version 7, OpenJDK won't be
third-party, so code in it is going to be core Java code in the
relatively near future, making it an exception to the rule.

OpenJDK never has been, is not, and never will be third-party software.
Period. Full stop. If you actually read the links I posted, you'll note
that it /is/ the Sun Java source code (with a few modifications to
eradicate closed-source libraries), and that Sun has committed it to
being the basis for future Java releases.

The only reason that Java 6 is not based off of OpenJDK is that OpenJDK
was created too late in the Java 6 cycle.
 
