Alternative to System.runFinalizersOnExit()?

Twisted

As I'm sure you're all aware, System.runFinalizersOnExit() is
deprecated as it can cause erratic behavior or deadlocks.

This trick might have an equivalent effect without the danger:

* When your application is exiting, send clean shutdown messages to all
the other threads.
* Close and dispose any GUI components.
* Wait for all but the main thread and now-vacuous event dispatch
thread to disappear, and perhaps throw InterruptedExceptions into any
that linger too long.
* Then clear all local and global (i.e. static) reference variables and
invoke System.gc().
* Wait a short while, and then actually exit.

The idea is to make just about nothing strongly-reachable anymore and
let the gc loose so that objects are deallocated, finalizers get run,
and data structures get rolled up and put away in a "natural" and
orderly fashion.
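That sequence might be sketched like this. Everything here is illustrative — the `workers` list, the timeouts, and the worker body are made up, and cooperative interruption stands in for "clean shutdown messages":

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the shutdown sequence described above.
public class OrderlyExit {
    static final List<Thread> workers = new ArrayList<Thread>();

    static void shutdown() throws InterruptedException {
        // 1. Ask every worker thread to stop.
        for (Thread t : workers) {
            t.interrupt();
        }
        // 2. Wait a bounded time for each to disappear.
        for (Thread t : workers) {
            t.join(2000);
        }
        // 3. Clear our references and invite the collector in.
        workers.clear();
        System.gc();
        System.runFinalization();
        // 4. Wait a short while, then actually exit.
        Thread.sleep(200);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(60000);
                } catch (InterruptedException e) {
                    // shutdown message received; fall off the end
                }
            }
        });
        workers.add(worker);
        worker.start();
        shutdown();
        System.out.println("workers tracked: " + workers.size());
    }
}
```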

Any comments?
 
Thomas Hawtin

Twisted said:
As I'm sure you're all aware, System.runFinalizersOnExit() is
deprecated as it can cause erratic behavior or deadlocks.

Use only libraries that don't need finalisers to run.

Always close your resources (acquire; try { ... } finally { release; } -
be careful you don't do anything extra after acquiring, such as wrapping
in a buffer decorator).

If you need to do anything on exit, use a shutdown hook.
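A minimal shutdown-hook sketch (the hook body is just a stand-in for real cleanup; hooks run on normal exit and on SIGINT/SIGTERM, but not on a hard kill or JVM crash):

```java
public class HookDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                // Keep hooks quick and self-contained: other hooks run
                // concurrently and the JVM is already on its way out.
                System.out.println("flushing state before exit");
            }
        });
        System.out.println("doing work");
    }
}
```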

Tom Hawtin
 
Thomas Hawtin

Twisted said:

// WRONG:
final OutputStream out = new BufferedOutputStream(
        new FileOutputStream(file)
);
try {
    ...
    out.flush();
} finally {
    out.close();
}

// STILL WRONG (and daft):
OutputStream out = null;
try {
    out = new BufferedOutputStream(
            new FileOutputStream(file)
    );
    ...
    out.flush();
} finally {
    if (out == null) {
        out.close();
    }
}

What if construction of the BufferedOutputStream failed due to lack of
memory, or some other reason? Leak. Therefore:

final OutputStream rawOut = new FileOutputStream(file);
try {
    OutputStream out = new BufferedOutputStream(rawOut);
    ...
    out.flush(); // Also important, buffered or not.
} finally {
    rawOut.close();
}

It's a narrow window, but it could mean the difference between dying
under load or just suffering.

Tom Hawtin
 
Twisted

As a rule, if construction of a BufferedOutputStream fails, you're
already sunk. Either something has gone seriously wrong or you're flat
out of memory and the VM is probably going to die in a moment anyway.
(Most of the time, OutOfMemoryError is very shortly followed by the VM
self-destructing, even if you catch it and try to recover somehow, such
as by flushing a load of caches and telling half your threads to die.)

There's an unfortunate tendency of Java not to run stably for a long
time -- it slowly exhausts its heap as any nontrivial app on average
creates new objects faster than the gc can clear old ones, even given
that the net amount of space tied up in live objects stabilizes. Once
that's happened, the VM tends to implode no matter what you do, so I've
taken to detecting low memory and gracefully exiting.

There seem to be some exceptions -- Eclipse itself is stable for days
and only slowly bloats and becomes less responsive, for example -- but
I don't know how they manage it. Even with the incremental garbage
collection in the newer VMs, the constant churn of small objects like
Strings and boxed primitives eventually overwhelms the gc's ability to
cope and kills the system.

It is likely that Eclipse gets away with days of uptime because a) when
it's unattended it's actually outright idle and b) when it's in use
most user activity results in modifying a variety of StringBuffers at
the backend and some disk files, rather than producing small object
churn.

Any network application, on the other hand, is constantly churning
Strings, URLs, various FooStreams and BarConnections, and all the lower
level stuff like Sockets and RequestHeaders that the programmer never
even sees touch their own code directly. Not to mention the assorted
Map.Entries and so forth involved in the data structures that
invariably hold the business logic's more persistent state, which for
these application types changes rapidly and continually even without
explicit user intervention...

A Web browser gives a simple example. A new, huge String is probably
involved in every page load, and some ad-encrufted sites will spawn as
many as 50 assorted Image instances *per pageview*. Multiply this for
any use of frames, throw in a URL and an assortment of I/O objects for
every remote object (such as an HTML file or image) loaded, and the
churn of List entries in the history, plus cookies and Christ alone
knows what else, and you're quickly looking at the browser blowing up
on the 1000th page load or so and getting noticeably slower by the
500th due to the gc.

Of course, you can internally reuse non-immutable objects (and that
includes strings and primitive types, if you use things like int[1]s
and StringBuffers in place of ints or Integers and Strings,
respectively) from a pool, but then you're simply copying gc
functionality and cluttering up your business logic with memory
management logic. Now you've got the explicit memory management and
accompanying logic clutter of C/C++ *and* the gc overhead to contend
with -- also known as "the worst of both worlds".
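For concreteness, the sort of hand-rolled pooling being criticized might look like this (a deliberately simplistic sketch; BufferPool is a made-up name):

```java
import java.util.LinkedList;

// A simplistic reuse pool for StringBuffers -- the kind of manual
// memory management the post argues merely duplicates the gc's job.
public class BufferPool {
    private final LinkedList<StringBuffer> free = new LinkedList<StringBuffer>();

    /** Hand out a recycled buffer, or a fresh one if the pool is empty. */
    public synchronized StringBuffer acquire() {
        return free.isEmpty() ? new StringBuffer() : free.removeFirst();
    }

    /** Return a buffer; the caller must not touch it afterwards. */
    public synchronized void release(StringBuffer buf) {
        buf.setLength(0);   // the manual "free" -- exactly the clutter noted above
        free.addLast(buf);
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool();
        StringBuffer a = pool.acquire();
        a.append("hello");
        pool.release(a);
        StringBuffer b = pool.acquire(); // the same object comes back
        System.out.println(a == b);      // prints "true"
    }
}
```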

I just hope that the evolution of the 1.6 beta is going to launch a
revolution in garbage collection that rescues Java from the mess that
every application creates of its heap in a relatively short amount of
time. Ultimately, I think these problems plague non-Java software too
-- particularly a problem I haven't even mentioned until now, heap
fragmentation -- and are responsible for a lot of flaky behavior and
crashes. Internal leaks of handles and memory are par for the course in
any Windows app, Java or otherwise, for instance. Fragmentation from
churn in small allocations and deallocations also isn't unique to Java.
And of course Java saves us from the worst memory management related
bugs, such as uninitialized data being used, or deallocated space being
referenced.

Ultimately, I'm not sure what the fix is. Even cleverer treatment of
small objects, perhaps, such as to give them their own heap in two
chunks, and when one is full start filling the other and defragmenting
and garbage collecting in the first. Move out and consolidate objects
that have persisted, to make room ... only I think the incremental gc
is already supposed to be doing something like this, and still my Java
apps tend to gradually grind to a halt...
 
Red Orchid

Thomas Hawtin said:
// STILL WRONG (and daft):
OutputStream out = null
try {
out = new BufferedOutputStream(
new FileOutputStream(file)
);
...
out.flush();
} finally {
if (out == null) {
out.close();
}
}

What if construction of the BufferedOutputStream failed due to lack of
memory, or some other reason? Leak.


(I assume that the above "if (out == null)" is a mis-typing of "if (out != null)".)


Why is it still wrong?

Even though an exception is thrown for some reason, the 'finally'
block will be executed.

At worst, even if memory runs out, the 'file' resource will be released.

Execute the following code.
Even if an OOME is thrown, the lock on "a.txt" is released.



<code>
void test() throws Exception {
    OutputStream out = null;
    try {
        File f = new File("a.txt");
        RandomAccessFile lock = new RandomAccessFile(f, "rw");

        if (lock.getChannel().tryLock() != null) {
            System.out.println("Lock OK");
        }

        f = new File("b.txt");
        out = new FileOutputStream(f);   // raw stream assigned first
        out = new TBufOutputStream(out); // throws OOME in its constructor

        out.flush();
    } catch (Exception e) {
        // note: OOME is an Error, not an Exception, so it propagates
        e.printStackTrace();
    } finally {
        if (out != null) {
            out.close();                 // still closes the FileOutputStream
        }
    }
}

class TBufOutputStream extends BufferedOutputStream {
    public TBufOutputStream(OutputStream out) {
        super(out);
        long[] arr = new long[Integer.MAX_VALUE]; // force an OutOfMemoryError
    }
}
</code>
 
Thomas Hawtin

Red said:
(I assume that the above "if (out == null)" is a mis-typing of "if (out != null)".)

Yes, that is a mistake!
Why is it still wrong?

Even though an exception is thrown for some reason, the 'finally'
block will be executed.

At worst, even if memory runs out, the 'file' resource will be released.

'file' is not a resource. new FileOutputStream(file) is. If the
BufferedOutputStream throws an OOME, then it will leak. There is no
local variable referring to the resource.

As a point of extreme obscurity, if the *allocation* of the
BufferedOutputStream object itself fails, then the FileOutputStream will
not be created. If the byte[] buffer allocation fails, then it will leak
resources.

Tom Hawtin
 
Thomas Hawtin

Twisted said:
As a rule, if construction of a BufferedOutputStream fails, you're
already sunk. Either something has gone seriously wrong or you're flat

Running out of memory is not a crime. NetBeans likes to do it
frequently. If I do run out of memory, I don't want my work to be
trashed because the programmer was clueless. If I'm in a hurry, I might
even want to carry on. Particularly if it only OOMEd because I tried to
load a core into a text editor. There is no excuse for not behaving
gracefully.
There's an unfortunate tendency of Java not to run stably for a long
time -- it slowly exhausts its heap as any nontrivial app on average

Of Java programs, rather than Java itself. Often down to not taking care
over things like exception-safety.
even see touch their own code directly. Not to mention the assorted
Map.Entries and so forth involved in the data structures that

Even IdentityHashMap, which doesn't use Entry in its implementation
(it's a probing hash map, using pairs of array slots for key and value),
doesn't churn Entries.
A Web browser gives a simple example. A new, huge String is probably
involved in every page load, and some ad-encrufted sites will spawn as

I hope not.
is already supposed to be doing something like this, and still my Java
apps tend to gradually grind to a halt...

And so you should be more careful about behaving gracefully.

Tom Hawtin
 
Twisted

Thomas said:
'file' is not a resource. new FileOutputStream(file) is. If the
BufferedOutputStream throws an OOME, then it will leak. There is no
local variable referring to the resource.

If the BufferedOutputStream throws an OOME, then it will leak a little
dribble after it's evidently already lost an ocean. Put it in
perspective will you? :)
 
Twisted

Thomas said:
Running out of memory is not a crime. NetBeans likes to do it
frequently. If I do run out of memory, I don't want my work to be
trashed because the programmer was clueless. If I'm in a hurry, I might
even want to carry on. Particularly if it only OOMEd because I tried to
load a core into a text editor. There is no excuse for not behaving
gracefully.

Even if you tried to load a core into a text editor it wouldn't die in
the BufferedInputStream constructor from having tried to allocate a
buffer there the size of the whole file. It would die growing the
StringBuffer underlying the edit box or it would read the file size and
die allocating a huge char array up-front.

But in my experience, OOMEs are almost always thrown while allocating
the straw that breaks the camel's back, rather than something with the
mass of a small moon. Certainly, failure to allocate a 2KB buffer in
the BufferedInputStream constructor is going to be an example of the
former, and if that's the case, it was already in a "destined to blow
up in your face in the next five minutes" state before it reached that
line of code.
Of Java programs, rather than Java itself. Often down to not taking care
over things like exception-safety.

Not my code, which is chock full of try ... finally.
Even IdentityHashMap, which doesn't use Entry in its implementation
(it's a probing hash map, using pairs of array slots for key and value),
doesn't churn Entries.

Still churns keys and values.
I hope not.

Well, you can do three things with a Web page. Parse it on the fly as
it streams in, outputting a baroque data structure to render. Result: a
baroque data structure with enough overhead to be twice the size of the
String. Or, read it into a huge String (by way of a huge StringBuffer).
Or, read it, parse it on the fly, and discard the bulk of what you get.
Possibly by building it into a bitmap of some sort, which will be
bigger than the String and a lot less useful, or into a vector graphics
representation, in which case see the first option (baroque data
structure) only without the neat separation of content from
presentation.

Only the String or data structure enables the result to be
text-searched. Only those are likely to be sensible unless you render a
thumbnail of the page for some kind of archival purpose or are
stripping it for a single important element (the favicon.ico link, say)
or mining it for a particular thing (a price in a predictable place on
the page -- you can parse on the fly and discard nearly everything for
that kind of use, but then Incredibill will come after you with
Crawlwall, handcuffs, and a rubber phallus, slam you face-first into
the former, and then <censored>).

Which you use depends on what you're doing. If you're just downloading
the page or precaching it or something you may as well use a compact
String until it's needed. If you're rendering it, you may want the
whole javax.swing.text.html nine yards, or perhaps <insert the name of
the XML Package of the Week(tm) here>.
And so you should be more careful about behaving gracefully.

Such as? One particular app I'm thinking of does relatively little disk
I/O and doesn't use the construct that bothered you so much for any of
that. It has cleanup finally clauses that may actually constitute the
majority of the LOC, after of course ignoring blank lines, comments,
and javadoc. These do, in fact, clean up any open FooStreams.

The persistent, growable data body involves mostly HashFoos with quite
a lot of entry churn but no long-term growth trend (most are programmed
to discard older entries when they exceed a certain size -- this
produces better behaved caches than SoftReference on current JVMs IME).
Debugging doesn't show any accumulation of deadwood either (in the form
of stale threads left in infinite loops or whatever other form it might
take). So, unless the JVM is doing something really dumb, like not
cleaning up non-strongly-reachable circular data structures ...
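The size-bounded, discard-the-oldest caches described above can be sketched with java.util.LinkedHashMap's removeEldestEntry hook (this is one possible implementation on Java 5, not necessarily what the app in question does):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A cache that discards its eldest entry once a fixed capacity is
// exceeded -- "better behaved than SoftReference" in the sense that
// eviction is deterministic rather than at the whim of the gc.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order: true gives LRU eviction
        this.maxEntries = maxEntries;
    }

    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedCache<String, Integer> cache =
                new BoundedCache<String, Integer>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3); // evicts "a", the least recently used
        System.out.println(cache.containsKey("a")); // prints "false"
        System.out.println(cache.size());           // prints "2"
    }
}
```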
 
Eric Sosman

Twisted wrote on 11/14/06 16:17:
If the BufferedOutputStream throws an OOME, then it will leak a little
dribble after it's evidently already lost an ocean. Put it in
perspective will you? :)

The important thing isn't the ocean that's already
been lost, but the tiny dribble that remains. It's that
little dribble that will -- or won't -- suffice to get
you out of Death Valley (a part of California that was
once completely underwater but is now arid and HOT).

You're probably not going to get very far after the
JVM throws an OOME at you, and most Java applications will
just die as you've described elsethread. A few, though,
will catch the OOME and attempt a clean shutdown rather
than an uncontrolled exit. That last-ditch effort will
operate under conditions of extreme memory scarcity, and
the little dribble you disparage might tip the balance.

It is frightening and unpleasant to make an emergency
landing on a foam-covered runway flanked by fire and rescue
vehicles (or so I'm told by an acquaintance who's done it,
not once but twice). Nonetheless, she'd rather do it a
third time than experience even one crash.
 
Twisted

Eric said:
It is frightening and unpleasant to make an emergency
landing on a foam-covered runway flanked by fire and rescue
vehicles (or so I'm told by an acquaintance who's done it,
not once but twice). Nonetheless, she'd rather do it a
third time than experience even one crash.
</hyperbole>

Whoa. That pegged the meter all the way to the right, 12 on the
Beaufort scale.

There's really no comparison. Unless it's in medical or nuclear
engineering, lives aren't at stake if a Java app shuts down badly or
crashes. In fact, I've grown so used to not being able to count on
successfully making a clean exit after an OOME that my apps a) try to
detect low memory a little sooner, by looking first for
runtime.totalMemory() closing in on runtime.maxMemory() (i.e. the -Xmx
value the app is run with), and then runtime.freeMemory() dropping
below a threshold, and b) periodically autosave important state so that
not much is lost when it eventually does die (notice "when", not "if").
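A periodic autosave along those lines might be sketched with java.util.Timer; saveState() here is a hypothetical stand-in for whatever actually persists the state, and the 100 ms period is just for the demo:

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch of a periodic autosave so that an eventual OOME-driven death
// loses as little state as possible.
public class AutoSaver {
    static volatile int savesDone = 0;

    static void saveState() {
        savesDone++; // hypothetical: write state to a versioned file here
    }

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer(true); // daemon: won't block JVM exit
        timer.schedule(new TimerTask() {
            public void run() {
                saveState();
            }
        }, 0, 100);                    // every 100 ms for the demo
        Thread.sleep(450);
        timer.cancel();
        System.out.println("autosaves so far: " + savesDone);
    }
}
```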

For user-edited data, a versioning autosave would be best, of course,
with perhaps a rotating deletion of older entries once there's a
certain number, to prevent one of the common causes of disk space
leakage (the others being DLL creep, malware if you're prone to it,
unavoidable software (such as Windows) that leaks temp files, and
unavoidable software with showstopper bugs (such as Windows) that leaks
core files or the equivalent). (No, that's not just a unix thing --
check your C: directory for "minidump" and similar filenames sometime.
Gasp in awe at some of their sizes. Note that the number of them
correlates with the number of blue screens or spontaneous reboots
you've had since the last time you cleaned them out...)
 
Eric Sosman

Twisted wrote on 11/14/06 17:13:
</hyperbole>

None. She landed on foam, twice. Not a big fan of
air travel any more ...
Whoa. That pegged the meter all the way to the right, 12 on the
Beaufort scale.

There's really no comparison. Unless it's in medical or nuclear
engineering, lives aren't at stake if a Java app shuts down badly or
crashes. [...]

Lives are the only thing worth protecting?

Twisted, you're trying to have both sides of the
argument at once. First you say it's pointless to take
the trouble to position try/finally correctly because
there'll be no harm done unless the JVM is already in
irredeemably dire straits. And then you describe the
considerable efforts you make to avoid just that sort
of harm. What's the message? That it's all right to
be careless if you're careful?
 
Twisted

Eric said:
What's the message? That it's all right to
be careless if you're careful?

No, the message is that I can't be arsed to try to argue properly with
a tosser like you, particularly not at this hour. Goodnight.
 
Chris Uppal

Thomas said:
'file' is not a resource. new FileOutputStream(file) is. If the
BufferedOutputStream throws an OOME, then it will leak. There is no
local variable referring to the resource.

Remember that FileOutputStreams are finalisable. As a practical matter if the
system is running low on memory then either the program will soon exit (whether
intentionally or a "controlled flight into terrain") and the resource will be
released by the OS, or the GC will manage to run the finaliser, and -- again --
the resource will be released.

I can't see a large enough gap between those two most-probable conditions to
justify using awkward code.

-- chris
 
Twisted

Chris said:
I can't see a large enough gap between those two most-probable conditions to
justify using awkward code.

THANK you.

While we're on the topic of memory management, I mentioned earlier that
I have a Java app that slowly degrades and eventually throws OOME and
dies (with some attempts at recovery), despite the persistent state not
having an upward size trend and despite avoiding orphaning threads
(i.e. threads get suspended, or winding up in try { while (true)
sleep(1000); } catch (InterruptedException e) until the cows come home,
or whatever).

Well, I added a low-memory check consisting of checking first if
runtime.totalMemory() is getting within 20M or so of the -Xmx ceiling
(runtime.maxMemory()) and, if so, whether runtime.freeMemory() is less
than 2M, attempting a graceful automatic exit and restart if it is.

It's now running stably, but it's chewing most of the CPU on the box it
runs on.

My guess is that just probing the memory state every few hundred ms is
causing it to gc more often, using more CPU but actually keeping up
with the object churn. Interesting. Especially as there are (still) no
explicit calls anywhere to System.gc().
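That check might be sketched as follows. Note that Runtime.maxMemory() is itself essentially the -Xmx value, so the growth comparison is against Runtime.totalMemory(); the 20M and 2M thresholds come from the post:

```java
// Sketch of the low-memory probe described above: trip when the heap
// has grown to near its -Xmx ceiling AND free space within it is small.
public class LowMemoryCheck {
    static final long NEAR_MAX = 20L * 1024 * 1024; // within 20M of -Xmx
    static final long MIN_FREE = 2L * 1024 * 1024;  // under 2M free

    static boolean lowMemory() {
        Runtime rt = Runtime.getRuntime();
        boolean heapNearCeiling = rt.maxMemory() - rt.totalMemory() < NEAR_MAX;
        boolean littleFree = rt.freeMemory() < MIN_FREE;
        return heapNearCeiling && littleFree;
    }

    public static void main(String[] args) {
        // On a freshly started VM this should normally print "false".
        System.out.println("low memory: " + lowMemory());
    }
}
```

A caller would poll lowMemory() every few hundred milliseconds and begin a graceful exit-and-restart when it returns true.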
 
Thomas Hawtin

Twisted said:
If the BufferedOutputStream throws an OOME, then it will leak a little
dribble after it's evidently already lost an ocean. Put it in
perspective will you? :)

I don't consider file handles as "dribbles".

Leaking a bit of memory is bad. Leaking actual resources is criminal.

If I've OOMEd I'm very low on memory. It now becomes much more important
that I don't leak memory, not less.

You should also note you are offloading IO work onto the (usually)
single finaliser thread. It doesn't matter what priority it runs at if
all finalisers are waiting on external events.

Tom Hawtin
 
Twisted

Thomas said:
You should also note you are offloading IO work onto the (usually)
single finaliser thread. It doesn't matter what priority it runs at if
all finalisers are waiting on external events.

Releasing a file handle can block?
 
Thomas Hawtin

Twisted said:
Releasing a file handle can block?

Presumably it depends upon implementation. For a FileOutputStream
presumably there can still be work there to do that may fail. I guess
it's a "quality of implementation" matter as to whether such failures
are reported. I know not which implementations fall into which camp.

There may also be other classes with finalisers.

Tom Hawtin
 
Twisted

Thomas said:
Presumably it depends upon implementation. For a FileOutputStream
presumably there can still be work there to do that may fail. I guess
it's a "quality of implementation" matter as to whether such failures
are reported. I know not which implementations fall into which camp.

For an output stream I can see it, if there's a writeback cache to
flush to disk as part of the close and you want to report any failure
there back to the caller (e.g. by throwing an IOException).

I don't see closing an input stream (as long as it was *purely* an
input stream) blocking though, in any sensible implementation. :)
 
