Inline compilation / HotSpot

Marc Dzaebel

given:

int f1(int i) { return add1(i); }
final int add1(int i) { return i+1; }

Is the above f1() as efficient as:

int f1(int i) { return i+1;}

Does this depend on the compiler?

In http://www.javaperformancetuning.com/tips/final.shtml it seems that add1
should always be inlined after the methods are fully loaded into the JVM.

Thanks, Marc
 
Chris Uppal

Marc said:
In http://www.javaperformancetuning.com/tips/final.shtml it seems that add1
should always be inlined after the methods are fully loaded into the JVM.

Rather a dubious page of rather dubious tips. I should ignore it if I were
you.

Marc said:
int f1(int i) { return add1(i); }
final int add1(int i) { return i+1; }

Is the above f1() as efficient as:

int f1(int i) { return i+1;}

Does this depend on the compiler?

It depends on the JVM (not on javac). If the JVM has a JIT (or similar) which
does inlining, and if the JIT chooses to inline that particular method call,
then they will have comparable efficiency.

In practice the current line of JVMs from Sun is pretty aggressive about
optimisation in general (especially the server JVM) and I'd imagine that there
is a good chance that it will inline -- assuming that f1() is invoked often
enough for the JIT to consider it worthwhile.

Looking at it even more practically: it's unlikely to make much difference (you
are talking about a very fast operation vs. an extremely fast operation -- the
absolute difference is tiny). If it /does/ make a significant difference
then -- by definition -- it will be easy to detect.
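
For what it's worth, here is a rough way to look for a difference: a hand-rolled
timing sketch (not a rigorous benchmark -- warm-up, loop optimisations and
dead-code elimination can all skew the numbers, and the class name and static
helpers are just made up for illustration):

public class InlineTiming {

    static int add1(int i) { return i + 1; }   // small helper, an inlining candidate
    static int f1(int i)   { return add1(i); } // calls through the helper
    static int f2(int i)   { return i + 1; }   // hand-inlined version

    public static void main(String[] args) {
        final int n = 100000000;
        int sum = 0;

        // Warm-up so the JIT has a chance to compile (and possibly inline) both paths.
        for (int i = 0; i < n; i++) { sum += f1(i); sum += f2(i); }

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) { sum += f1(i); }
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) { sum += f2(i); }
        long t2 = System.nanoTime();

        // Print the sum so the work cannot be optimised away entirely.
        System.out.println("f1 (via add1): " + (t1 - t0) / 1e6 + " ms");
        System.out.println("f2 (inlined) : " + (t2 - t1) / 1e6 + " ms");
        System.out.println("checksum: " + sum);
    }
}

If the two timings come out essentially the same, either the call was inlined or
the difference is simply too small to matter.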

-- chris
 
Oliver Wong

Chris Uppal said:
It depends on the JVM (not on javac).

Is there something in the JLS (or elsewhere) that forbids the compiler from
performing these kinds of optimizations?

- Oliver
 
Chris Uppal

Oliver said:
Is there something in the JLS (or elsewhere) that forbids the compiler
from performing these kinds of optimizations?

I suspect that the binary compatibility stuff might make it effectively
impossible for most cases. The only opportunities I think would survive would
be calls to private, static, or final methods in the same class. By "the same
class" I mean the /same/ class ;-) Nested classes, etc, don't count.

The "binary compatibility stuff" is basically how the dynamic nature of the JVM
is reflected in the Java language spec. I haven't checked how the spec works
for this question, but since the entire section pretty much amounts to
"remember that classfiles can change between compile time and runtime", I would
guess that other inlining optimisations would be ruled out.

The JVM can optimise a lot harder, because it has /all/ the relevant
information available at any given moment.
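
To make that concrete, here is a hypothetical pair of classes (the names are
invented for illustration). If javac copied the body of Util.tax() into Shop at
compile time, replacing Util.class later with a different tax rate would no
longer affect Shop -- exactly the situation the binary compatibility rules are
meant to prevent, and exactly what a JVM that inlines at runtime can handle by
throwing the compiled code away:

// Util.java -- compiled and shipped separately from Shop.java
public class Util {
    public static int tax(int price) { return price / 10; }  // 10% today, maybe 20% tomorrow
}

// Shop.java
public class Shop {
    public int total(int price) {
        // If javac had inlined Util.tax() here, a newer Util.class at runtime
        // would silently be ignored by this method.
        return price + Util.tax(price);
    }
}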

I suppose that a Java-like architecture could be defined where the compiler
emitted two sets of bytecode, one generated on the assumption that nothing
significant would change (and the classfile would include a list of what
the exact assumptions were), and the other very conservative in its
assumptions. The JVM could then select which bytecode to use depending on
whether the assumptions turned out to be valid at runtime. Static compilers
like Excelsior JET could use similar techniques, and -- for all I know -- maybe
that's what they do...

-- chris
 
Thomas Hawtin

Chris said:
In practice the current line of JVMs from Sun is pretty aggressive about
optimisation in general (especially the server JVM) and I'd imagine that there
is a good chance that it will inline -- assuming that f1() is invoked often
enough for the JIT to consider it worthwhile.

The one big catch is that if the calling method is very large, it won't
have trivial methods inlined into it. If you write your code sensibly
and don't optimise prematurely, you should be okay.

Which implementation you use does matter. AIUI, the current Sun client
VM is not very good with methods that are overridden in some loaded
classes but not in the objects in use.
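
A made-up illustration of that last point (all names are invented):

class Base {
    int get() { return 1; }
}

class Sub extends Base {
    @Override int get() { return 2; }
}

class CallSiteDemo {
    // Only Base instances are ever passed in here, but because Sub (which
    // overrides get()) has been loaded, class hierarchy analysis alone can no
    // longer prove the call below is monomorphic. A VM that also uses call-site
    // profiling can still inline Base.get(); a simpler one may leave it as a
    // plain virtual call.
    static int hotLoop(Base b) {
        int sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += b.get();
        }
        return sum;
    }

    public static void main(String[] args) {
        Sub loadedButNotUsedInTheLoop = new Sub();  // forces Sub to be loaded
        System.out.println(loadedButNotUsedInTheLoop.get());
        for (int k = 0; k < 100; k++) {
            System.out.println(hotLoop(new Base()));
        }
    }
}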

Tom Hawtin
 
ldv

Chris said:
I suspect that the binary compatibility stuff might make it effectively
impossible for most cases. The only opportunities I think would survive would
be calls to private, static, or final methods in the same class. By "the same
class" I mean the /same/ class ;-) Nested classes, etc, don't count.

In fact, HotSpot does a lot of so-called speculative optimizations, and
so do some other JIT compilers. Suppose the HotSpot engine decides to
JIT compile a method BigBar() and detects that a certain method
SmallFoo() is worth inlining into BigBar(), but SmallFoo() is not
declared as final. If SmallFoo() is not overridden in any of the
classes loaded so far, HotSpot will inline it into BigBar() and compile
BigBar() to native code. If at some later point a class overriding
SmallFoo() gets loaded, HotSpot will simply discard the results of
BigBar() compilation and BigBar() will again run on the interpreter.
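
A contrived example of that scenario (all names are invented):

class Widget {
    int smallFoo(int i) { return i + 1; }   // not final, but no loaded subclass overrides it yet
}

class SpeculativeDemo {
    static int bigBar(Widget w, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += w.smallFoo(i);  // inlinable speculatively while Widget has no loaded subclasses
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        Widget w = new Widget();
        for (int k = 0; k < 1000; k++) {
            bigBar(w, 100000);  // gets hot, may get compiled with smallFoo() inlined
        }
        // Loading a class that overrides smallFoo() breaks the assumption;
        // the VM must then discard (deoptimise) the compiled version of bigBar().
        Class.forName("SpecialWidget");
        System.out.println(bigBar(w, 100000));
    }
}

// Not referenced statically above, so it is only loaded by the Class.forName() call.
class SpecialWidget extends Widget {
    @Override int smallFoo(int i) { return i + 2; }
}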
The "binary compatibility stuff" is basically how the dynamic nature of the JVM
is reflected in the Java language spec. I haven't checked how the spec works
for this question, but since the entire section pretty much amounts to
"remember that classfiles can change between compile time and runtime", I would
guess that other inlining optimisations would be ruled out.

The JVM can optimise a lot harder, because it has /all/ the relevant
information available at any given moment.

Basically, at any moment in time the JVM can make whatever assumptions
and do whatever optimizations are correct for the set of classes
loaded at that moment, provided it is able to undo any such
optimization should loading a new class break the respective
assumptions.

Chris said:
I suppose that a Java-like architecture could be defined where the compiler
emitted two sets of bytecode, one generated on the assumption that nothing
significant would change (and the classfile would include a list of what
the exact assumptions were), and the other very conservative in its
assumptions. The JVM could then select which bytecode to use depending on
whether the assumptions turned out to be valid at runtime.

I'd say there is no need for such an architecture; the JVMs are already
smart enough.

Chris said:
Static compilers
like Excelsior JET could use similar techniques, and -- for all I know -- maybe
that's what they do...

Excelsior JET uses a somewhat different technique for the above
scenario. Being a static compiler, it cannot undo optimizations, so it
implements a very fast runtime check for inlined non-final methods.
That is, if the instance method supposed to be called at the given
point is indeed the method that was inlined during static compilation,
the inlined copy will be executed, otherwise a virtual call will occur.
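
Roughly, the generated code then behaves like the following hand-written Java
(purely a conceptual sketch of such a guarded inline, not what JET actually
emits; the names are invented):

class Holder {
    int smallFoo(int i) { return i + 1; }   // non-final method whose body was inlined
}

class GuardSketch {
    // Conceptual equivalent of the statically compiled call site "h.smallFoo(i)":
    // a cheap guard chooses between the inlined copy and a normal virtual call.
    static int callSmallFoo(Holder h, int i) {
        if (h.getClass() == Holder.class) {
            return i + 1;          // inlined copy of Holder.smallFoo()
        }
        return h.smallFoo(i);      // an overriding subclass: fall back to a virtual call
    }
}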

LDV

http://www.excelsior-usa.com/jet.html
 
Thomas Hawtin

ldv said:
In fact, HotSpot does a lot of so-called speculative optimizations, and
so do some other JIT compilers. Suppose the HotSpot engine decides to
JIT compile a method BigBar() and detects that a certain method
SmallFoo() is worth inlining into BigBar(), but SmallFoo() is not
declared as final. If SmallFoo() is not overridden in any of the
classes loaded so far, HotSpot will inline it into BigBar() and compile
BigBar() to native code. If at some later point a class overriding
SmallFoo() gets loaded, HotSpot will simply discard the results of
BigBar() compilation and BigBar() will again run on the interpreter.

That's certainly not how the server version of HotSpot (C2) works.

Tom Hawtin
 
Marc Dzaebel

Hi Thomas,

I'd be interested in the different behaviors of the server and client VMs
with respect to inlining. As of today, I assume that both the client and
server VMs handle inlining reasonably. That means that my framework design
could break methods into pieces without suffering performance
degradation.

Thanks, Marc
 
Chris Uppal

ldv wrote:

[me:]
I suspect that the binary compatibility stuff might make it effectively
impossible for most cases. [...]

In fact, HotSpot does a lot of so called speculative optimizations, and
so do some other JIT compilers.

By "the compiler" I meant javac, and I assumed (perhaps wrongly) that Oliver
did too. Looking back, I can see that I didn't make that clear. Apologies for
the confusion.

I'd say there is no need for such architecture, the JVMs are already
smart enough.

Agreed. Still, not all VM implementations are, or can be, that smart.

Excelsior JET uses a somewhat different technique for the above
scenario. Being a static compiler, it cannot undo optimizations, so it
implements a very fast runtime check for inlined non-final methods.
That is, if the instance method supposed to be called at the given
point is indeed the method that was inlined during static compilation,
the inlined copy will be executed, otherwise a virtual call will occur.

Interesting, thanks.

-- chris
 
Chris Uppal

Thomas said:
The one big catch is that if the calling method is very large, it won't
have trivial methods inlined into it.

Seems odd. I can see that there might be a statistical tendency for large
methods to be optimised less well than small ones -- the bigger they are, the
greater the chance they'll contain some construction, or combination of
constructions, that impedes optimisation. But I can't see a reason for not
inlining /only/ because the method is already large. Do you know what the
justification is? (Or can you point me to the place in the JVM source where
this check is implemented?)

Which implementation you use does matter. AIUI, the current Sun client
VM is not very good with methods that are overridden in some loaded
classes but not in the objects in use.

Again, do you have a reference (in the source or elsewhere) for that? I'm not
doubting you, I just want to follow it up.

-- chris
 
Marc Dzaebel

dimitar

does not say "HotSpot Client", but just "HotSpot". The article is
rather old, though.

From what I know, the only difference between the Client and Server
versions of HotSpot is the actual thresholds used for different
optimizations.
 
Oliver Wong

Chris Uppal said:
ldv wrote:

[me:]
I suspect that the binary compatibility stuff might make it effectively
impossible for most cases. [...]

In fact, HotSpot does a lot of so called speculative optimizations, and
so do some other JIT compilers.

By "the compiler" I meant javac, and I assumed (perhaps wrongly) that
Oliver
did too. Looking back, I can see that I didn't make that clear.
Apologies for
the confusion.

Yeah, I was talking about javac too. I know HotSpot can do all sorts of
crazy stuff. I was just surprised to find out that javac can't do some of
that stuff ahead of HotSpot.

- Oliver
 
ldv

Oliver said:
Yeah, I was talking about javac too. I know HotSpot can do all sorts of
crazy stuff. I was just surprised to find out that javac can't do some of
that stuff ahead of HotSpot.

Optimizing bytecode ahead of time can negatively affect the JIT
compiler's optimization abilities. javac used to have a -O option before
HotSpot was introduced in J2SE 1.3. To be more precise, the option was
left in place to ensure backward compatibility of the various tools that
invoke javac, but the emitted code was the same regardless of whether -O
was specified. :)

LDV
 
