Cost of creating objects?

Arved Sandstrom

Send him back to the 1970's or 1980's - there was great use for
developers like him back then.

Today that type of stuff typically makes large chunks of code
unreadable and saves little on overall execution time.

But you may find it difficult to convince him. Micro-optimizers
believe very strongly in their religion.

You can suggest that, every time he wants to change something,
you measure the real application before and after the change.

My guess is that he will find an excuse for not doing that and
refer to his own micro-benchmark.

Arne
The one optimization that makes sense at initial coding time, before
profiling (and which improves execution time as a by-product), relates
to how many objects you create in one implementation approach versus
another.

What I'm really getting at here is things like thinking about the
scope of objects, how Strings are built, what's happening in loops,
etc. This is not micro-optimization, like the silly stuff raised by
the OP, but just good coding. Not observing certain "optimizations"
(read: "being sloppy with object creation") can bring a JVM to its
knees if it's running an app server with several apps and only 4 or
8 GB of RAM available.
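
To make this concrete, here's the classic loop case (my own toy
example, not anything from the OP's code):

import java.util.Arrays;
import java.util.List;

public class StringLoopDemo {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList("alpha", "beta", "gamma");

        // Sloppy: each += compiles to a new StringBuilder plus a new
        // String per iteration, so a big loop churns out thousands of
        // short-lived objects for the GC to chew through.
        String slow = "";
        for (String line : lines) {
            slow += line + "\n";
        }

        // Better: one StringBuilder for the whole loop, one String at
        // the end. Same result, a handful of allocations total.
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(line).append('\n');
        }
        String fast = sb.toString();

        System.out.println(slow.equals(fast)); // prints true
    }
}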

Invariably, when I've cleared up object creation problems, I've sorted
execution speed problems too.

All of the stuff I suggest is the type of optimization that is discussed
in good programming books as good programming, and it certainly does not
make the program less clear or less maintainable. It's also not Knuthian
premature optimization.

AHS

Arved Sandstrom

Most automatic Java profilers are a waste of effort. There are two
methods supported by the JVM:

1) Profiler instrumentation. This rewrites methods to contain timing
calls. The rewriting and data collection break the optimizations that
are critical to Java performing well. Most can only collect data into
a single thread, so all concurrency is gone too. These only work when
manually configured to target very specific points of code.
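
To see why, this is roughly what the rewriting does to a method
(hand-drawn sketch; Profiler is a made-up API, not any real
product's):

// Before instrumentation:
int compute(int x) {
    return x * x;
}

// After instrumentation (roughly):
int compute(int x) {
    Profiler.enter("compute");     // timing call added by the profiler
    try {
        return x * x;              // original body
    } finally {
        Profiler.exit("compute");  // fires on every exit path
    }
}

A trivial method like that would normally be inlined away entirely;
with timing calls wrapped around it, it can't be.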


2) Sampling. This takes rapid stack snapshots of each thread and
collects statistics. It's simple and you can even build a JSP to do it.
This also doesn't work for performance benchmarking because snapshots of
native code require threads to stop at a safepoint. When HotSpot is
doing a good job, safepoints come at regular intervals in the optimized
native code, not your source code. When I use a sampling profiler on
a project at work, Integer.hashCode() sometimes leaps to the #1 spot.
There's not actually any code in that method and it's not called very
frequently, but often a safepoint's native address maps to that source
in the debug symbol table. Sampling is best for finding code that
pauses (I/O, semaphore, waiting for resource, etc.) for unexpectedly
long times.
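
A bare-bones sampler, for the curious (my sketch; a real one would
run the loop in its own daemon thread instead of main):

import java.util.HashMap;
import java.util.Map;

public class MiniSampler {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> hits = new HashMap<>();
        for (int i = 0; i < 500; i++) {
            // getAllStackTraces() itself brings threads to a safepoint,
            // which is exactly the distortion described above.
            for (StackTraceElement[] stack
                    : Thread.getAllStackTraces().values()) {
                if (stack.length > 0) {
                    String top = stack[0].getClassName() + "."
                            + stack[0].getMethodName();
                    hits.merge(top, 1, Integer::sum);
                }
            }
            Thread.sleep(10); // roughly 100 samples per second
        }
        hits.entrySet().stream()
            .sorted((a, b) -> b.getValue() - a.getValue())
            .limit(10)
            .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}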

As for the original question, variable declarations mean nothing in
compiled code. They're just for humans. At times when AttrValue is
known to have only one possible implementation, HotSpot may even inline
the methods and use direct field access. Later, when AttrValue may
have more than one implementation, HotSpot can remove that
optimization.
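
For illustration (AttrValue here is just a stand-in for the
interface in question):

interface AttrValue {
    String get();
}

class SimpleAttrValue implements AttrValue {
    private final String value;
    SimpleAttrValue(String value) { this.value = value; }
    public String get() { return value; }
}

public class InlineDemo {
    public static void main(String[] args) {
        // Declaring v as AttrValue rather than SimpleAttrValue changes
        // nothing in the compiled code. While SimpleAttrValue is the
        // only loaded implementation, HotSpot can devirtualize v.get()
        // and inline it down to a plain field read; if a second
        // implementation is loaded later, the call site is deoptimized
        // and recompiled.
        AttrValue v = new SimpleAttrValue("x");
        System.out.println(v.get());
    }
}
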
A good alternative to profiling is a related approach: code coverage.
Just exercise your app thoroughly with code coverage instrumentation;
you don't care about timing or performance statistics at all. You
probably already know when your app is slow, so you can exercise just
those features that are slow.
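
For example, with JaCoCo (any coverage tool will do; the paths here
are made up):

# Run the app with the coverage agent while exercising the slow features:
java -javaagent:jacocoagent.jar=destfile=coverage.exec -jar myapp.jar

# Then render a report showing exactly which code was executed:
java -jar jacococli.jar report coverage.exec \
    --classfiles build/classes --sourcefiles src/main/java \
    --html coverage-report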

Once you've done that and have info on what code is getting hammered the
most, it's visual inspection time. Just look at what gets hit most and
what you are doing there.

AHS