finalize() not guaranteed to be called -- ever

Paul J. Lucas

Dale King said:
Therefore for heap based objects there is little difference between C++ and
Java.

The thing you're missing is that in C++ you can use things like
auto_ptr<T> or create something more sophisticated. In Java,
you pretty much turn over everything to the GC and hope for the
best.
And when will [hypothetical Java destructors] be called? Is it OK if they are
never called until the program shuts down? If you want it to be guaranteed to
be some time sooner than that you'll have to explain how the JVM can
guarantee that.

Why can't a bit of reference-counting be used in addition to
the mark-and-sweep algorithm for implementing GC? When the
number of strong references decrements to zero, the destructor
is called.

- Paul
 
Chris Uppal

Paul said:
Why can't a bit of reference-counting be used in addition to
the mark-and-sweep algorithm for implementing GC? When the
number of strong references decrements to zero, the destructor
is called.

How would you handle reference cycles without either requiring a full GC on
every method exit, or making the application's behaviour dependent on obscure
and fragile properties of the shape of the object network?

-- chris
 
Chris Uppal

Dale said:
I don't think it is misleading at all. New objects are in fact allocated
contiguously in stack-like fashion in the object nursery. I fail to see
anything that is not factually true in that statement.

There's nothing "stack-like" about allocating objects linearly. All[*] methods
of placing allocations will, if the space they are looking at happens to be
empty, place them in order. You might just as well claim that it was
"first-fit like"....

It's the /deallocation/ that gives a stack its particular flavour, and using
the expression "stack like" when the deallocation does not follow LIFO
principles is just misleading.

(I would, of course, have no objection to a sentence like "the implementation
uses a nursery space <insert description> which provides an approximation to
the benefits of stack-based allocation".)

But, to quote Humpty Dumpty again:

"Impenetrability! That's what /I/ say!"

-- chris

([*] subject to the obvious caveats...)
 
Chris Uppal

Chris said:
Of course, there are two major problems here. First, no mention is made
of the complementary process of removing data from the structure, which
differs radically.

<grim smile>
Yes, exactly...
</grim smile>

-- chris
 
Raymond DeCampo

Dale said:
And when I said that you have to manually delete objects above you told
me I was wrong!

Because you did not qualify the statement, leading me to believe you
were referring to all cases. You have since explained to me what you
meant. I think this part of the conversation has passed its usefulness.

I'd like for this to be a productive discussion and not two people
arguing over semantics, definitions and who said what and what it meant.
And it is this feature you are asking for, not destructors. Destructors
are meaningless in Java without this feature.

This appears to be the root of the disagreement. I think we agree on
the rest, especially the factual things about Java and C++ we have been
shouting past each other, so to speak.

I do not agree that destructors are worthwhile only if one has
local variables. I would not even consider that the case in C++.

But I should focus on Java, since that is the main consideration here.
If Java had destructors that were guaranteed to run when the objects
were deallocated, you would have destructors that operate on heap
objects automatically. This would allow guaranteed clean up of external
resources.

The fact that some libraries do implement finalize() indicates this
would not be useless.

I notice that you have used the term "meaningless" where I have been
arguing to the "usefulness". Perhaps there is some subtlety here I am
overlooking?

And when will they be called? Is it OK if they are never called until
the program shuts down? If you want it to be guaranteed to be some time
sooner than that you'll have to explain how the JVM can guarantee that.

I never said anything to imply that the destructor must run in a timely
fashion (whatever that means). The destructor would run when an
object is garbage collected. I am beginning to realize that this
question of when is perhaps the reason you are focused on the local
variable vs heap objects point of view.

Finally, let me say that I am not advocating adding destructors to the
Java language. It seems clear that the designers of the language
considered them in the finalize() method and rejected them. I do not
have the hubris to think that I know better than they do, nor am I naive
enough to believe that adding something like this would be free of
potential gotchas. I am only arguing against the premise that
destructors would be useless (or meaningless) in Java.

Ray
 
Raymond DeCampo

Chris said:
Dale is referring to C++, and explaining very succinctly why "objects"
placed on the stack in C++ are NOT actually objects by the standards of
typical OO theory. It's a point that's lost on many C++ developers. It
is also the justification for not including this possibility in Java.

Let's start by agreeing on common definitions. A destructor is a block of
code that is called automatically when an object is deallocated. Since
there's never any way at all to guarantee that an object will be
deallocated in Java, destructors per se wouldn't do any good.

You suggest that the JVM can just guarantee that finalize() is called,
without changing the memory model. The question is: WHEN would the JVM
guarantee to do that?

Would it be called whenever ANY reference to an object goes out of
scope? That would be very annoying, and would entirely prevent such
basic and good design techniques as passing database connections to
other methods or storing them in data structures. They would be getting
inadvertently closed all the time, and oh what a pain it would be.

Would it be called whenever the last live reference to an object goes
out of scope? I hope not, because that would prevent the use of any
kind of deferred garbage collection algorithms for finalizable objects.
You'd be stuck with all JVMs doing weak/strong ref-counting for ANY
reference to ANY supertype of ANY type that defines a finalize() method.
Performance would tank, and we'd be in serious trouble.

Or perhaps the VM should read your mind and decide when to call the
destructor?

Dale said it doesn't buy you anything unless you also change the memory
model to allow locally-scoped "objects" (which, of course, are not true
objects).

I am not familiar with this idea that locally-scoped objects are not
true objects. Can you provide some references?
As it turns out, Dale hit it right on the head on this one.
I'm anxiously awaiting your answer to the magic question of "when", as
explained above.

I find it interesting that you are waiting so anxiously, when the answer
to your question was contained in the definition of destructor that you
gave.

As for objects that were not deallocated during the normal operation of
the program, the destructors may run upon JVM exit.

Finally, if you cannot conduct this discussion in a more respectful
tone, I will not engage you any longer.

Ray
 
Chris Uppal

Raymond said:
As for objects that were not deallocated during the normal operation of
the program, the destructors may run upon JVM exit.

I'm trying to imagine under what circumstances this would be a benefit.

I can see why Paul Lucas might want "destructors" that are tied in some
predictable way to leaving method scopes (regardless of whether that is
implementable), but I can't come up with a convincing usage scenario where it's
useful to guarantee that finalize() is called, without also guaranteeing (or
putting a worst-case limit on) /when/ it'll be called.

Consider the case of a file that is open for writing. In the absence of a
guarantee that the file will be close()-d (or at least flush()-ed) by
finalisation when the program exits, it is necessary for the programmer to call
close() (or at least flush()) explicitly. Fair enough as far as it goes. But
open files are not really suitable for management by finalisation since the
resource they represent is too scarce. While having a guarantee that /this/
file will be close()ed (or at least flush()ed) will give assurance that /this/
file will not be left in an inconsistent state when the program exits, it does
not guarantee that the open handle will not hang around long enough to cause
the application to malfunction before then.

Maybe your point is only that if this guarantee existed then finalisation would
be useful in a wider range of cases than it is at present (while not claiming
that it would be appropriate for all forms of cleanup). If so then I sort of
agree, but I still suspect that the target cases would be better (cleaner, more
reliable, just as convenient) handled by some sort of exit hook.
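For reference, Java already has a registration-based exit hook of this sort; a minimal sketch (the cleanup action here is just a stand-in):

```java
public class ExitHookSketch {
    public static void main(String[] args) {
        // Register a hook that the JVM runs on normal exit (or Ctrl-C),
        // playing the role of guaranteed-at-exit cleanup.
        Thread hook = new Thread(() -> System.out.println("cleanup ran"));
        Runtime.getRuntime().addShutdownHook(hook);
        System.out.println("doing work");
        // When main returns, the JVM begins shutdown and runs the hook.
    }
}
```

The drawback Raymond alludes to is visible here: the hook must be registered manually, rather than being tied to the object whose state needs cleaning up.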

-- chris
 
Chris Smith

Raymond DeCampo said:
I am not familiar with this idea that locally-scoped objects are not
true objects. Can you provide some references?

Objects in mainstream OO theory must have three characteristics:
encapsulation, inheritance, and polymorphism; and an object must own its
identity, behavior, and state. An object that is allocated into a stack
frame lacks the full sense of polymorphism, since it must be of a
specific concrete type. IMO, a clearer way of putting it is that it
fails to own its identity.

This is referring to objects that are immediately dependent on other
structures (such as fields, array members, or local variables). Java
doesn't do this at all, so Java's locally scoped REFERENCES don't have
these characteristics.
I find it interesting that you are waiting so anxiously, when the answer
to your question was contained in the definition of destructor that you
gave.

As for objects that were not deallocated during the normal operation of
the program, the destructors may run upon JVM exit.

So basically, what you want is System.runFinalizersOnExit(boolean)?
It's already there, although deprecated because of its serious
disadvantages. See the JavaDocs for a brief description of the problems
with that approach.
Finally, if you cannot conduct this discussion in a more respectful
tone, I will not engage you any longer.

I fail to see where I was disrespectful, and apologize if I come across
that way. I'm afraid you may be confusing disagreement with disrespect.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
Raymond DeCampo

Chris said:
Raymond DeCampo wrote:

I'm trying to imagine under what circumstances this would be a benefit.

I can see why Paul Lucas might want "destructors" that are tied in some
predictable way to leaving method scopes (regardless of whether that is
implementable), but I can't come up with a convincing usage scenario where it's
useful to guarantee that finalize() is called, without also guaranteeing (or
putting a worst-case limit on) /when/ it'll be called.

Consider the case of a file that is open for writing. In the absence of a
guarantee that the file will be close()-d (or at least flush()-ed) by
finalisation when the program exits, it is necessary for the programmer to call
close() (or at least flush()) explicitly. Fair enough as far as it goes. But
open files are not really suitable for management by finalisation since the
resource they represent is too scarce. While having a guarantee that /this/
file will be close()ed (or at least flush()ed) will give assurance that /this/
file will not be left in an inconsistent state when the program exits, it does
not guarantee that the open handle will not hang around long enough to cause
the application to malfunction before then.

Maybe your point is only that if this guarantee existed then finalisation would
be useful in a wider range of cases than it is at present (while not claiming
that it would be appropriate for all forms of cleanup). If so then I sort of
agree, but I still suspect that the target cases would be better (cleaner, more
reliable, just as convenient) handled by some sort of exit hook.

Your last paragraph pretty much sums it up. I would add that the
destructors would then become an exit hook (essentially), but would be
more object-oriented as they are tied to the object itself and one would
not have to manually register an exit hook. Try/finally clauses would
remain the best way to clean up after yourself.

Ray
 
Raymond DeCampo

Chris said:
Objects in mainstream OO theory must have three characteristics:
encapsulation, inheritance, and polymorphism; and an object must own its
identity, behavior, and state. An object that is allocated into a stack
frame lacks the full sense of polymorphism, since it must be of a
specific concrete type. IMO, a clearer way of putting it is that it
fails to own its identity.

That is an interesting point of view; I'm not sure I completely agree
with it. I suppose the issue is really with the specific
language's representation of the object rather than the object itself.
I'm afraid I am not up enough on my C++ to articulate it better.
This is referring to objects that are immediately dependent on other
structures (such as fields, array members, or local variables). Java
doesn't do this at all, so Java's locally scoped REFERENCES don't have
these characteristics.

So basically, what you want is System.runFinalizersOnExit(boolean)?
It's already there, although deprecated because of its serious
disadvantages. See the JavaDocs for a brief description of the problems
with that approach.

I've noted in another thread that there are non-obvious difficulties
with the approach and I am not necessarily advocating that Java be
changed to accommodate it.

The difference between System.runFinalizersOnExit() and a fully endorsed
and implemented destructor scheme is that many of the system classes do
not have finalize() defined.
I fail to see where I was disrespectful, and apologize if I come across
that way. I'm afraid you may be confusing disagreement with disrespect.

Since you did not intend to be disrespectful, I'll withdraw my comments
in that regard.

Ray
 
Paul J. Lucas

Chris Uppal said:
Consider the case of an file that is open for writing. In the absence of a
guarantee that the file will be close()-d (or at least flush()-ed) by
finalisation when the program exits, it is necessary for the programmer to
call close() (or at least flush()) explicitly.

In Java, yes. Actually, you have to go to more trouble. You
need to do something like:

try {
    FileInputStream in = new FileInputStream( inFile );
    FileOutputStream out = new FileOutputStream( outFile );
    // ... copy in to out
}
finally {
    try {
        out.close();
    }
    finally {
        in.close();
    }
}

You need the double try/finally because close() itself can throw
an exception. The above code is verbose, tedious to write,
easily forgotten, and easy to get wrong.

In C++, the code reduces to:

std::ifstream in( inFile );
std::ofstream out( outFile );
// ... copy in to out

because in and out are (1) allocated on the stack and (2) have
destructors. No matter how the function containing the above
code exits, be it by "falling out the bottom," an explicit
return, throwing an exception, or calling exit(), both files
will be closed properly.

- Paul
 
Paul J. Lucas

Chris Smith said:
An object that is allocated into a stack frame lacks the full sense of
polymorphism since it must be of a specific concrete type.

But day-to-day programming doesn't need "pure OO" so nobody
*cares* that the above is true in C++. Not every object needs
polymorphism. An excellent example is a File class. It
represents a concrete thing: a file on a disk. You don't need
to subclass it. You don't need polymorphism from it.

- Paul
 
Thomas Hawtin

Paul said:
But day-to-day programming doesn't need "pure OO" so nobody
*cares* that the above is true in C++. Not every object needs
polymorphism. An excellent example is a File class. It
represents a concrete thing: a file on a disk. You don't need
to subclass it. You don't need polymorphism from it.

Funnily enough File is subclassed even within the JRE. Perhaps the
design isn't tasteful to some, but it's there.

In C++ it appears that you are generally forced into 'compile-time
polymorphism' with the 'latent typing' of templates.

Tom Hawtin
 
Paul J. Lucas

Chris Uppal said:
How would you handle reference cycles without either requiring a full GC on
every method exit, or making the application's behaviour dependent on obscure
and fragile properties of the shape of the object network?

The problem with the "Java way" is that it tries too hard to
be some sort of "pure" GC'd language, trying to protect
programmers from themselves, when the real problems of real
programmers don't need such "pureness" or guaranteed safety
100% of the time.[1]

If Java had some sort of reference-counting feature combined
with GC, the documentation could clearly state something like:
"Don't create cycles. Just don't. Really. If you do, you'll
just have to wait for the regular mark-and-sweep GC to clean up
your mess," and have this in *addition* to the current GC,
perhaps by the introduction of a CountedReference class.
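For concreteness, a hypothetical CountedReference might look something like this. The class and its API are invented here for illustration; nothing like it exists in the JDK:

```java
import java.util.function.Consumer;

// Hypothetical explicit strong-count wrapper: runs a destructor-like
// action deterministically when the count reaches zero, unlike finalize().
final class CountedReference<T> {
    private T referent;
    private final Consumer<T> destructor;
    private int count = 1;   // the creator holds the first strong reference

    CountedReference(T referent, Consumer<T> destructor) {
        this.referent = referent;
        this.destructor = destructor;
    }

    synchronized T get() { return referent; }

    synchronized void retain() { ++count; }

    synchronized void release() {
        if (--count == 0) {
            destructor.accept(referent);  // runs immediately at zero
            referent = null;
        }
    }
}
```

A cycle of CountedReferences would never reach zero, which is exactly Chris Uppal's objection upthread; such garbage would indeed be left for the mark-and-sweep collector.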

The problem with the current GC is that it's "one size fits all"
and you're stuck with it. It's also very difficult to tweak.[2]

In C++, you can devise and implement any clever object management
mechanism you please: overloading operator new(), memory pools,
even GC if you want.

- Paul

[1] Java doesn't guarantee your safety in many other areas. For
example, try writing an equals() without a hashCode() (or vice
versa) and see what kind of trouble you get into. So why should
Java be so fascist about memory safety? It seems heavily
lopsided, as if memory problems were the only problems with
programs.
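To make the equals()-without-hashCode() trouble concrete, here is a small sketch. Forgetting to override hashCode() leaves you with the identity hash; the stub below simulates that deterministically with a per-instance serial number:

```java
import java.util.HashMap;
import java.util.Map;

final class Key {
    private static int serial = 0;
    private final int id;
    // Stand-in for Object.hashCode(): a unique per-instance value,
    // which is what you effectively get if you forget to override it.
    private final int stubHash = ++serial;

    Key(int id) { this.id = id; }

    @Override public boolean equals(Object o) {
        return o instanceof Key && ((Key) o).id == id;
    }
    @Override public int hashCode() { return stubHash; }  // WRONG: ignores id
}

public class HashTrap {
    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        Key a = new Key(42);
        Key b = new Key(42);
        map.put(a, "value");
        System.out.println(a.equals(b));  // true
        System.out.println(map.get(b));   // null: equal keys, different buckets
    }
}
```

The map silently loses the entry for any equal-but-distinct key, with no exception and no warning.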

[2] ReferenceQueue is, IMHO, totally broken since the reference
you get back has already been cleared by the time you get it.
What you really want is the reference to the object for doing
"one last thing" with it before it's reclaimed by the GC. But
the Java implementors were afraid you'd do something like keep
a strong reference around to the object thereby foiling the
reclamation. IMHO, you should be allowed to do that, but if
you do, the consequences are on your own head. The rest of us
who could use the feature correctly would benefit.
 
Chris Smith

Paul J. Lucas said:
In Java, yes. Actually, you have to go to more trouble. You
need to do something like:

try {
    FileInputStream in = new FileInputStream( inFile );
    FileOutputStream out = new FileOutputStream( outFile );
    // ... copy in to out
}
finally {
    try {
        out.close();
    }
    finally {
        in.close();
    }
}

Cleaner code looks like this:

FileInputStream in = new FileInputStream(inFile);

try
{
    FileOutputStream out = new FileOutputStream(outFile);

    try
    {
        // copy in to out
    }
    finally
    {
        out.close();
    }
}
finally
{
    in.close();
}

This provides a better separation of concerns than the original... and
more importantly, it compiles.

Point taken, though. The code is definitely much simpler in C++,
because of the existence of stack-allocated data structures (sometimes
called "objects") with destructors. The trade-off is a lack of memory
safety. See my next reply for some reasoning there.
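As it happens, later versions of Java addressed exactly this verbosity: Java 7's try-with-resources closes each resource automatically, in reverse order of acquisition, however the block exits. A sketch:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CopySketch {
    static void copy(String inFile, String outFile) throws IOException {
        // Each resource's close() runs automatically, in reverse order,
        // whether the block exits normally or via an exception.
        try (FileInputStream in = new FileInputStream(inFile);
             FileOutputStream out = new FileOutputStream(outFile)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
```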

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
Chris Smith

The problem with the "Java way" is that it tries too hard to
be some sort of "pure" GC'd language, trying to protect
programmers from themselves, when the real problems of real
programmers don't need such "pureness" or guaranteed safety
100% of the time.[1]

[1] Java doesn't guarantee your safety in many other areas. For
example, try writing an equals() without a hashCode() (or vice
versa) and see what kind of trouble you get into. So why should
Java be so fascist about memory safety? It seems heavily
lopsided, as if memory problems were the only problems with
programs.

Although programmer-safety is nice, it isn't the main goal behind Java's
memory model. The more important concern is security-safety. To
understand this, you have to see that Java tries something that is
somewhat unique for a mainstream programming language (but common for
scripting languages). It offers a security system above and beyond that
provided by the operating system. That security system guarantees that
it is impossible to perform certain actions such as reading and writing
local files, even when the OS would allow it.

To provide this guarantee, Java needs to GUARANTEE that there is NEVER
an instance of completely undefined behavior in any application. There
is partially undefined behavior, such as what System.getProperty returns
when passed "os.name"... but there remains a requirement that the return
value will either be null or point to a valid String. If it were ever
possible to get a pointer to some random non-allocated point in space,
then the security guarantee would be thwarted.

In other words, the concern isn't over whether a benevolent programmer
will suffer from accidental memory corruption... it's whether a
malicious programmer can intentionally modify memory to which they
haven't been intentionally given a pointer. If the latter were
possible, then it becomes a feasible jump from there to figuring out a
way to modify the state of an instance of SecurityManager and force it
to let you trash someone's hard drive.

That's the important difference between memory issues and failing to
override hashCode. The latter generates incorrect BUT DEFINED behavior.
It does not pose a security risk.
If Java had some sort of reference-counting feature combined
with GC, the documentation could clearly state something like:
"Don't create cycles. Just don't. Really. If you do, you'll
just have to wait for the regular mark-and-sweep GC to clean up
your mess," and have this in *addition* to the current GC,
perhaps by the introduction of a CountedReference class.

Here's the problem, though. First, what classes have destructors?
Let's say there are a hundred of them. Next, what kinds of references
can refer to them? Any reference to a superclass or superinterface
MIGHT refer to a class with a destructor. Because class loading in Java
is dynamic, really any reference to any non-final type MIGHT refer to a
class with a destructor. Remember that reference-counting has to happen
ALL the time or NONE of the time for any given object. Half-way
reference counting is otherwise known as memory corruption.

So where does reference-counting have to occur? Ultimately, in AT LEAST
80% of all objects in the VM... and probably a lot closer to 95 to 100%.
Furthermore, some of the most common reference types (such as Object)
are in the list that must be reference-counted. Now, garbage collection
can be made to perform acceptably, but it's a tough job. Now you are
introducing a SECOND redundant form of garbage collection into the
system, and it is one that exhibits horrible performance characteristics
compared to more advanced algorithms like copying collection. This is
looking like a pretty dismal future for Java performance.

There's a good reason why deferred GC is commonly used in the first
place, and it's not cyclical references (which, incidentally, is
basically a solved problem, albeit with some overhead for the solution).

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
Thomas Hawtin

Paul said:
In Java, yes. Actually, you have to go to more trouble. You
need to do something like:

try {
    FileInputStream in = new FileInputStream( inFile );
    FileOutputStream out = new FileOutputStream( outFile );
    // ... copy in to out
}
finally {
    try {
        out.close();
    }
    finally {
        in.close();
    }
}

That doesn't imply you must have destructors to write this unusual code.
For instance, using an adaptation of a C# idea:

using InputStream in = new FileInputStream(inFile);
using OutputStream out = new FileOutputStream(outFile);
You need the double try/finally because close() itself can throw
an exception. The above code is verbose, tedious to write,
easily forgotten, and easy to get wrong.

In C++, the code reduces to:

std::ifstream in( inFile );
std::ofstream out( outFile );
// ... copy in to out

Only, if there's an exception from closing the output stream, the
destructor will remain silent. To do otherwise would cause an abort.

Tom Hawtin
 
Raymond DeCampo

Paul said:
But day-to-day programming doesn't need "pure OO" so nobody
*cares* that the above is true in C++. Not every object needs
polymorphism. An excellent example is a File class. It
represents a concrete thing: a file on a disk. You don't need
to subclass it. You don't need polymorphism from it.

When you think about it, the file descriptor from C is actually one of
the earliest concepts to exhibit polymorphic behavior, in the sense
that a file descriptor could represent any number of underlying stream
types, from files to pipes to sockets, etc. The program would invoke
read and write functions and the underlying mechanisms would do the
right thing.

Just an ironic observation.

Ray
 
Paul J. Lucas

Thomas Hawtin said:
Paul J. Lucas wrote:

Only, if there's an exception from closing the output stream, the
destructor will remain silent. To do otherwise would cause an abort.

No, the function can return by *any* means and the destructors
*will* be called.

- Paul
 
Paul J. Lucas

Thomas Hawtin said:
Funnily enough File is subclassed even within the JRE. Perhaps the
design isn't tasteful to some, but it's there.

Streams are also subclassed in the C++ implementation, but the
implementation is irrelevant. You, as an end-user, don't
generally need to subclass File.

- Paul
 
