Liz said:
Theoretically, if the JVM can tell that the array index cannot go
out of range, it can skip the check, e.g.
for (int i = 0; i < array.length; i++) {
    // do it
}
Maybe someone can check if it does.
The JVM bytecodes for array access do not have an option to turn off
range checking, so I don't see how this could be done.
Personally, I'd be very surprised if the few extra machine cycles
needed to check the array bounds are an issue. Here are some other
possibilities, depending on the type of the array:
For arrays of reference (non-primitive) types: the aastore instruction has to
verify that the type of the object being stored is assignment-compatible
with the component type of the array (ArrayStoreException checking).
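For example, here is a minimal sketch of the aastore check at work; the
variable names are just for illustration:
Object[] arr = new String[2];    // legal: arrays are covariant in Java
arr[0] = "ok";                   // aastore type check passes
arr[1] = Integer.valueOf(42);    // aastore type check fails: ArrayStoreException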
For double[]: value-set conversion may be needed. On the Intel (x87)
architecture, double-precision floating-point computations are performed
in 80-bit extended precision, which has to be converted to the 64-bit
IEEE 754 format when stored in a double variable or an element of a double[].
Similarly, loading a double or an element of a double[] may require
converting back to the 80-bit value set. (This discussion ignores
"strictfp", which can be very hard on performance on Intel hardware.)
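If anyone wants to measure that strictfp cost, this is roughly how it would
be applied; the class and method names here are just placeholders:
strictfp class Sum {
    // Every intermediate result must stay in the 64-bit IEEE 754 value set,
    // which on x87 hardware means extra store/reload work.
    double accumulate(double[] values) {
        double total = 0.0;
        for (int i = 0; i < values.length; i++) {
            total += values[i];
        }
        return total;
    }
}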
For byte[] or boolean[] (a theoretical issue): IF the JVM implements
boolean[] as a packed bit-array, the baload and bastore instructions
have to examine the type of the array to determine how to load or store
the data, and access to the elements of a boolean[] requires masking and
shifting. I say this is "theoretical" because it is NOT an issue
with Sun JVMs; they store booleans and bytes exactly the same way.
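For the curious, this is roughly the masking and shifting a packed
implementation would need; it's a hypothetical sketch, not what Sun's JVM
actually does:
// Hypothetical packed boolean[] backed by an int[]; 32 elements per word.
static boolean getBit(int[] words, int index) {
    return (words[index >> 5] & (1 << (index & 31))) != 0;  // shift, then mask
}

static void setBit(int[] words, int index, boolean value) {
    if (value) {
        words[index >> 5] |= (1 << (index & 31));            // set the bit
    } else {
        words[index >> 5] &= ~(1 << (index & 31));           // clear the bit
    }
}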
Again, I seriously doubt that the index range-checking is responsible
for the performance issues. Once the performance problem has been
isolated to specific code sections, an investigation of the bytecodes
being generated for those sections will probably reveal what's going
on.
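For example, javap will dump the generated bytecodes; the class name here is
just a placeholder:
javap -c -p SuspectClass
Reading that output against the JVM instruction set usually makes it clear
whether type checks, value-set conversions, or something else entirely is
dominating the inner loop.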