Odd performance difference between -client and -server

Remon van Vliet

Hello,

I've run into an odd performance difference between the client and the
server VM. I made a few classes for 3D math and such, and here are two
versions of a scale method:

public final static RTVector scale(RTVector v, double s) {

/* Scale vector and return result */
return new RTVector(v.x * s, v.y * s, v.z * s);
}

public final static RTVector scale(RTVector r, RTVector v, double s) {

/* Scale vector */
r.set(v.x * s, v.y * s, v.z * s);

/* Return result vector */
return r;
}

As you can see, one creates a new vector and returns the result, the other
sets the result in a third vector that's passed to the method. The latter
version should be faster since it doesn't create a new object (note that I
made sure the test isn't creating a new object each iteration either). Now,
for the server VM (-server) all works as expected, for 10000000 runs:

option1 : 0.188s
option2 : 0.032s

The client VM however :

option1 : 0.579s
option2 : 9.547s

As you can see the server VM is way faster for this, which is expected
behavior. What is odd to me is that the option where no new objects are
created is actually a factor of 20 slower on the client VM. Does anyone have an
explanation for this? Note that the only difference for these tests is the
VM command line argument -client/-server.

Hope someone can shed some light on this,

Remon van Vliet
 

Skip

Remon van Vliet said:
[quoted message snipped]

Welcome to the world of micro-benchmarking.

The JVM is smarter than you think. It might notice that the new object in
option 1 is never used and eliminate that part of the code entirely,
resulting in doing 'nothing', which is - well - extremely fast. In option 2
the JVM has no opportunity to optimize away (say: completely remove) that
code, as it is not "never used" the way it is in option 1.

So basically:
option 1 does nothing
option 2 does what it is supposed to do

Further, the server VM is really, really smart, and might notice that your
micro-benchmark produces no result in either case, and thus do nothing at all.
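To make the comparison meaningful, every result has to be consumed so the
JIT cannot discard the work. A minimal sketch of such a harness (RTVector's
layout and the constants here are my assumptions, since the actual test code
isn't shown):

```java
// Sketch of a benchmark that defeats dead-code elimination: every
// call's result is folded into 'sink', which is printed at the end,
// so the JIT cannot prove the loop bodies unused.
public class ScaleBench {

    static final class RTVector {
        double x, y, z;
        RTVector(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        void set(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    static RTVector scale(RTVector v, double s) {
        return new RTVector(v.x * s, v.y * s, v.z * s);
    }

    static RTVector scale(RTVector r, RTVector v, double s) {
        r.set(v.x * s, v.y * s, v.z * s);
        return r;
    }

    public static void main(String[] args) {
        final int RUNS = 10_000_000;
        RTVector v = new RTVector(1.0, 2.0, 3.0);
        RTVector r = new RTVector(0.0, 0.0, 0.0); // allocated once, reused

        double sink = 0.0;
        long t0 = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            sink += scale(v, 2.0).x;        // option 1: result is used
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            sink += scale(r, v, 2.0).x;     // option 2: result is used
        }
        long t2 = System.nanoTime();

        System.out.println("option1 : " + (t1 - t0) / 1e9 + "s");
        System.out.println("option2 : " + (t2 - t1) / 1e9 + "s");
        System.out.println("checksum: " + sink); // keeps 'sink' observably live
    }
}
```

With the results consumed like this, the two options should time much closer
to the real cost of the work rather than the cost of eliminated code.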


Could you please show the micro-benchmark code?
 
