does python have useless destructors?

  • Thread starter Michael P. Soulier

Marcin 'Qrczak' Kowalczyk

So are we to take it that efficiency considerations are a serious
impediment to a potentially valuable safety feature?

Yes, if the efficiency difference is big enough. I don't know if it's big
enough; perhaps it is.

But there is another reason: adding refcounting would complicate
interfacing between Python and the host language, and would introduce
the possibility of bugs in refcounting.

The correct way is to provide a good syntax for explicit bracketing of the
lifetime of an object. try...finally works but is ugly.
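
For concreteness, here is a minimal sketch of the bracketing in question as it
has to be written today with try/finally (the file name is made up):

fh = open("settings.txt", "w")
try:
    fh.write("verbose = 1\n")
finally:
    fh.close()    # runs whether or not the write raised

The cleanup is guaranteed, but every caller has to spell the bracketing out by
hand, which is the ugliness being complained about.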
 

Donn Cave

Michael Hudson said:
OK. I claim you can't really have that, and that you don't really
need it anyway. The idea behind PEP 310 is to achieve the ends of
RAII in C++ by different means.

What else do you want to use __del__ methods for?

That would be my question. That's what __del__ methods _are_
used for. C Python users _do_ really have that, it just takes
more care than we like if it needs to be 100% reliable.

Maybe I don't need it, maybe 310 gives me RAII and that's what
I really need. I don't know, but again, 310 doesn't give me
anything of consequence in terms of software architecture.
Anything you can write with it, you can write without it by
simple substitution of try/finally. That's not true of the
finalization that I have now in C Python, there's no effective
substitute for that.

Donn Cave, (e-mail address removed)
 

Marcin 'Qrczak' Kowalczyk

Anything you can write with it, you can write without it by
simple substitution of try/finally.

This is OK, because try/finally does provide the necessary functionality.
It only has ugly syntax for this purpose.

Well, PEP 310 syntax is not that great either - it requires introducing an
indentation level - but it seems Python syntax is incompatible with other
choices.
That's not true of the finalization that I have now in C Python, there's
no effective substitute for that.

It doesn't matter because it's impossible to implement well on other
runtimes. It would require reimplementation of the whole GC functionality,
including a periodic GC which breaks cycles, ignoring the GC available
in the runtime.

It was simply a bad idea to rely on finalization to free precious external
resources in the first place.
 

Donn Cave

Marcin 'Qrczak' Kowalczyk said:
This is OK, because try/finally does provide the necessary functionality.
It only has ugly syntax for this purpose.

Well, PEP 310 syntax is not that great either - it requires introducing an
indentation level - but it seems Python syntax is incompatible with other
choices.


It doesn't matter because it's impossible to implement well on other
runtimes. It would require reimplementation of the whole GC functionality,
including a periodic GC which breaks cycles, ignoring the GC available
in the runtime.

It was simply a bad idea to rely on finalization to free precious external
resources in the first place.

After what must be at least your third repetition of this point,
you must be wondering if A) people aren't capable of understanding
it, or B) they don't accept some implicit assumption. Could it be
that it isn't as obvious to everyone that Python should limit itself
to some least common denominator of all languages' runtimes?

In the current situation, we are stuck with a feature that has some
reliability issues in C Python, and evidently doesn't work at all in
Java Python, but that _does work_ in C Python and is deeply useful,
not just a convenience or an aid to readability but a fundamental
difference in how programs can be structured. There are these two
halves to this issue, and you can't make that other half go away by
pretending that it was all just `simply a bad idea'.

Donn Cave, (e-mail address removed)
 

Aahz

[email protected] (Aahz) wrote in message news: said:
Not really. What you're doing is what I'd call "virtual stack" by
virtue of the fact that the heap objects are being managed by stack
objects.

Having read this through a second time, I'm not sure that you
understood the C++ code I posted. So here is an equivalent in Python:

class File:
    def __init__(self, name):
        self.fh = open(name, "r")
    def __del__(self):
        self.fh.close()

file_list = []
file_list.append(File(file_to_compile))
while len(file_list):
    f = file_list[len(file_list)-1]
    t = Token(f)
    if t == Token.EOF:
        file_list.pop()
    else:
        parse(t)


No stack objects in sight, yet this code is semantically equivalent to
the C++ code.

Not really. Problem is that there's nothing to prevent people from
passing File.fh outside the loop -- and that's standard Python coding
technique! For that matter, there's nothing preventing a File()
instance from being passed around. The fact that you've created an
idiom that you want to behave like a similar C idiom has nothing to do
with the way Python actually works.
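
A minimal sketch of the failure mode being described, reusing the File wrapper
from the quoted code (data.txt is created first purely so the snippet runs):

class File:
    def __init__(self, name):
        self.fh = open(name, "r")
    def __del__(self):
        self.fh.close()

out = open("data.txt", "w")
out.write("hello\n")
out.close()

stash = []

def remember(f):
    stash.append(f)        # the instance escapes into a long-lived list

f = File("data.txt")
remember(f)
del f                      # __del__ does not run: 'stash' still holds a reference
print(stash[0].fh.closed)  # False -- the handle has outlived the idiom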
 

David Turner

[email protected] (Aahz) wrote in message news: said:
In fact, the RAII idiom is quite commonly used with heap-allocated
objects. All that is required is a clear trail of ownership, which is
generally not that difficult to achieve.

Not really. What you're doing is what I'd call "virtual stack" by
virtue of the fact that the heap objects are being managed by stack
objects.

Having read this through a second time, I'm not sure that you
understood the C++ code I posted. So here is an equivalent in Python:
[snip]

No stack objects in sight, yet this code is semantically equivalent to
the C++ code.

Not really. Problem is that there's nothing to prevent people from
passing File.fh outside the loop -- and that's standard Python coding
technique! For that matter, there's nothing preventing a File()
instance from being passed around. The fact that you've created an
idiom that you want to behave like a similar C idiom has nothing to do
with the way Python actually works.

You can do exactly the same thing in the C++ version, and regularly
do. What's your point?

Regards
David Turner
 

David Turner

Isaac To said:
David> So are we to take it that efficiency considerations are a serious
David> impediment to a potentially valuable safety feature?
[snip]
that reference counting is used. But for any portable code, one must be
more careful---exactly our current situation. We need no more discussion
about how to collect garbage. On the other hand, PEP 310 discusses
something completely unrelated, which still should attract some eyeballs.

We have three issues here - garbage collection (undoubtedly more
efficient than reference counting), RAII (undoubtedly safer than
try/finally), and PEP 310 (the "with" form).

I've already pointed out the problems with "with" (it's the same ball
game as try/finally, and suffers from the same limitations). So my
opinion is that the RAII idiom should somehow be enabled in Python.
The crucial difference between "with" and RAII here is that "with"
requires intervention by the end-user of the object, whereas the RAII
idiom does not.
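
To make the distinction concrete (ManagedFile is made up, and relies on prompt
finalization, which CPython's reference counting happens to provide):

# With try/finally (or the proposed 'with'), every caller must remember
# the bracketing:
out = open("data.txt", "w")
try:
    out.write("hello\n")
finally:
    out.close()

# With RAII-style finalization, the cleanup is written once, in the class:
class ManagedFile:
    def __init__(self, name):
        self.fh = open(name, "r")
    def __del__(self):
        self.fh.close()

mf = ManagedFile("data.txt")
data = mf.fh.read()
del mf    # the class, not the caller, arranges for the file to be closed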

The most obvious way in which RAII can be enabled is by introducing
defined reference-counting semantics for objects with __del__. I
can't think of another way of doing it off-hand, but there must be
others (if reference counting has unacceptably high overhead).

One possibility is to introduce scoped objects, analogous to auto or
stack-allocated objects. It would be illegal to pass a reference to
these outside of a given scope, and they would have __del__ (or
another method) called when the scope ends. This is slightly cheaper
than reference counting, but of course it imposes very real
limitations on the usefulness of the object. However, this pattern
might still have its uses.
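
A rough sketch of how such a scoped object might behave, emulated here by
invalidating it explicitly when the scope ends (the names and the _end_scope
hook are invented, not proposed syntax):

class ScopedFile:
    def __init__(self, name):
        self.fh = open(name, "r")
    def _end_scope(self):
        self.fh.close()
        self.fh = None     # any reference that escaped now fails fast on use

out = open("data.txt", "w")
out.write("hello\n")
out.close()

sf = ScopedFile("data.txt")
try:
    line = sf.fh.readline()
finally:
    sf._end_scope()        # what the hypothetical end of scope would do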

Any other suggestions would be most welcome...

Regards
David Turner
 

Marcin 'Qrczak' Kowalczyk

The most obvious way in which RAII can be enabled is by introducing
defined reference-counting semantics for objects with __del__. I
can't think of another way of doing it off-hand, but there must be
others (if reference counting has unacceptably high overhead).

The variables which hold these objects must be distinguished.
Not just the objects themselves.
One possibility is to introduce scoped objects, analogous to auto or
stack-allocated objects. It would be illegal to pass a reference to
these outside of a given scope, and they would have __del__ (or
another method) called when the scope ends.

What would happen when you store a reference to such an object in another
object? Depending on the answer, it's either severely limited (you can
introduce no abstractions which deal with files somewhere inside), or as
inefficient as reference counting.

The only way is to somehow distinguish variables whose objects should be
finalized from plain variables. Not objects but variables. try...finally
and with are examples of that.
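
A small sketch of what "the variable, not the object" means in practice with
try/finally (data.txt is created first so the snippet runs):

out = open("data.txt", "w")
out.write("hello\n")
out.close()

f = open("data.txt", "r")
try:
    alias = f          # a second reference to the very same object
finally:
    f.close()          # the bracketed *variable* drives the cleanup

print(alias.closed)    # True -- the object was finalized even though
                       # another reference to it still exists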
 

Aahz

[email protected] (Aahz) wrote in message news: said:
(e-mail address removed) (Aahz) wrote in message In article <[email protected]>,

In fact, the RAII idiom is quite commonly used with heap-allocated
objects. All that is required is a clear trail of ownership, which is
generally not that difficult to achieve.

Not really. What you're doing is what I'd call "virtual stack" by
virtue of the fact that the heap objects are being managed by stack
objects.

Having read this through a second time, I'm not sure that you
understood the C++ code I posted. So here is an equivalent in Python:
[snip]

No stack objects in sight, yet this code is semantically equivalent to
the C++ code.

Not really. Problem is that there's nothing to prevent people from
passing File.fh outside the loop -- and that's standard Python coding
technique! For that matter, there's nothing preventing a File()
instance from being passed around. The fact that you've created an
idiom that you want to behave like a similar C idiom has nothing to do
with the way Python actually works.

You can do exactly the same thing in the C++ version, and regularly
do. What's your point?

And how does your destructor work when you do that?
 

Michael Hudson

Donn Cave said:
That would be my question. That's what __del__ methods _are_
used for. C Python users _do_ really have that, it just takes
more care than we like if it needs to be 100% reliable.

Now you've lost me.

What did the last __del__ method you wrote do?
Maybe I don't need it, maybe 310 gives me RAII and that's what
I really need. I don't know, but again, 310 doesn't give me
anything of consequence in terms of software architecture.

Indeed. It's sole aim is to make doing the right thing a bit less
painful. One of the nice things about e.g. Python's exception
handling is that it makes doing the right thing *easier* than doing
the wrong thing -- no forgetting to check return codes and so on. I
don't see a way of making using a 'with:' block *easier* than not, but
I contend it's an improvement.

Cheers,
mwh
 

Michael Hudson

(e-mail address removed) (David Turner) writes:

[snippety]
We have three issues here - garbage collection (undoubtedly more
efficient than reference counting), RAII (undoubtedly safer than
try/finally), and PEP 310 (the "with" form).

I fail to buy either of your 'undoubtedly's. You may be correct in
both of them, but they both require justification...

(Besides, saying garbage collection is more efficient than reference
counting is like saying fruit is more tasty than apples).
I've already pointed out the problems with "with" (it's the same ball
game as try/finally, and suffers from the same limitations).

I must have missed that. Message-ID?
So my opinion is that the RAII idiom should somehow be enabled in
Python. The crucial difference between "with" and RAII here is that
"with" requires intervention by the end-user of the object, whereas
the RAII idiom does not.

Well, in C++ the distinction the creator of the object makes is that
the RAIIed thing is an automatic variable, isn't it?
One possibility is to introduce scoped objects, analogous to auto or
stack-allocated objects. It would be illegal

How do you tell that this has/is going to happen? If you can't pass
one of these objects to a subroutine, the idea is nearly useless, I'd
have thought.
to pass a reference to these outside of a given scope, and they
would have __del__ (or another method) called when the scope ends.
This is slightly cheaper than reference counting, but of course it
imposes very real limitations on the usefulness of the object.

Ah :)

Cheers,
mwh

--
> So what does "abc" / "ab" equal?
cheese
-- Steve Holden defends obscure semantics on comp.lang.python
 

Isaac To

Marcin> What would happen when you store a reference to such an object in
Marcin> another object? Depending on the answer, it's either severely
Marcin> limited (you can introduce no abstractions which deal with files
Marcin> somewhere inside), or as inefficient as reference counting.

Marcin> The only way is to somehow distinguish variables whose objects
Marcin> should be finalized from plain variables. Not objects but
Marcin> variables. try...finally and with are examples of that.

If I understand it correctly, PEP 310 doesn't deal with that use case at
all. In my opinion the language shouldn't try to. Just let the program
fail. Treat programmers as grown men who will not try to break things on
purpose. You can always close a file and then continue to read it; all the
system does is give you an exception. If we can accept that, why worry so
much about the construct closing the file too early when a reference still
remains at the point the function returns? An "auto" or "function scoped"
object is simply a means to simplify functions, i.e., it lets you write

auto myvar = new file
line = myvar.readline()
...
# ignore the difficulty of closing it

rather than

myvar = None
try:
    myvar = new file
    line = myvar.readline()
    ...
finally:
    if myvar:
        myvar.close()

The only purpose is to simplify the program. We shouldn't (and actually
won't be able to) add extra responsibility to it, e.g., making sure it is
reference counted so that the file is closed only when no references
remain. If the second program above doesn't do that, we shouldn't expect
it of the first program either. Just throw an exception whenever somebody
tries to exceed the capability of the construct.
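
For example, exceeding the construct's capability already just raises an
exception today (file and names made up):

out = open("data.txt", "w")
out.write("hello\n")
out.close()

f = open("data.txt", "r")
f.close()                  # the construct has already closed the file
try:
    f.read()               # continuing to use it simply fails loudly
except ValueError as exc:
    print(exc)             # e.g. "I/O operation on closed file"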

Regards,
Isaac.
 

Roy Smith

I just read PEP-310, and there's something I don't understand. It says:
Note that this makes using an object that does not have an
__exit__() method a fail-fast error.

What does "fail-fast" mean?
 

Michael Hudson

Roy Smith said:
I just read PEP-310, and there's something I don't understand. It says:


What does "fail-fast" mean?

Hmm, that should be clarified. Basically the idea is that you'll get
an attribute error if you try to use an object in a with: clause that
isn't meant to be used there (as opposed to your program doing
something other than what you expect).
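
In other words, the mistake is detected at the point of use (a made-up class,
to illustrate the general fail-fast idea rather than the exact PEP 310
machinery):

class Plain:
    pass

obj = Plain()
try:
    obj.__exit__           # the attribute a with: block would need
except AttributeError as exc:
    print(exc)             # fails immediately, instead of silently
                           # doing something other than what you expect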

Cheers,
mwh
 

Donn Cave

Michael Hudson said:
Now you've lost me.

What did the last __del__ method you wrote do?

I can't specifically remember the last one, but scrounging
around a bit I see an application where I have a __del__
method that releases a kernel semaphore, and a dealloc function
that quits its kernel thread and adds a pending call to delete
its Python thread state.

I wonder if I have created confusion by not getting the
antecedents right, or something like that, when responding
to your question. All I'm saying is that we do have finalization
in C Python, and that's what __del__ is about as far as I can
tell. I don't know what RAII means (or doesn't mean) well
enough to make any difference (now I know what the acronym is,
but what do automatic variables have to do with Python?)

Donn Cave, (e-mail address removed)
 

Roger Binns

David said:
The most obvious way in which RAII can be enabled is by introducing
defined reference-counting semantics for objects with __del__. I
can't think of another way of doing it off-hand, but there must be
others (if reference counting has unacceptably high overhead).

I would be quite happy to get an exception or equivalent when
a situation arises that the GC can't handle. I would consider
it a bug in my code or design and something I should fix, and
consequently would want to know about it. Other people's opinion
may differ (tm).

Roger
 

David Turner

And how does your destructor work when you do that?

Just fine, thank you. It does exactly what I expect it to - runs when
the last reference disappears.

What did you expect?
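
A minimal sketch of that behaviour under CPython's reference counting (Probe
is a made-up class):

class Probe:
    def __del__(self):
        print("finalized")

p = Probe()
q = p      # a second reference
del p      # nothing printed yet
del q      # in CPython, "finalized" appears here, when the last
           # reference disappears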

Regards
David Turner
 

David Turner

Michael Hudson said:
(e-mail address removed) (David Turner) writes:

[snippety]
We have three issues here - garbage collection (undoubtedly more
efficient than reference counting), RAII (undoubtedly safer than
try/finally), and PEP 310 (the "with" form).

I fail to buy either of your 'undoubtedly's. You may be correct in
both of them, but they both require justification...

(Besides, saying garbage collection is more efficient than reference
counting is like saying fruit is more tasty than apples).
I've already pointed out the problems with "with" (it's the same ball
game as try/finally, and suffers from the same limitations).

I must have missed that. Message-ID?

For your convenience, I've summarized my arguments (briefly) in a
posting here:

http://dkturner.blogspot.com/2004/06/garbage-collection-raii-and-using.html

(Something I didn't mention: reference counting involves atomic
increments and decrements which can be expensive).

Well, in C++ the distinction the creator of the object makes is that
the RAIIed thing is an automatic variable, isn't it?

Not necessarily. You can usefully have a shared (reference-counted)
pointer to an RAII object. In fact, this is more generally useful
than an automatic variable, if more expensive (it's a cover for the
auto variable concept). But I do agree that C++'s auto variables led
directly to the development of RAII, which might otherwise have been
missed.

Regards
David Turner
 

Christos TZOTZIOY Georgiou

On 14 Jun 2004 00:00:39 -0700, rumours say that (e-mail address removed)
(David Turner) might have written:

[snip text by Isaac To]
You don't have to specify the finalization time in order to make the
destructors work. Destruction and finalization are *different
things*.
The D programming language somehow contrives to have both garbage
collection and working destructors. So why can't Python?

CPython for the time being reliably calls __del__ methods upon the
object's end of life, *unless* the object is part of a reference cycle.
There are ways to break these cycles.

In reply to your question, "why can't Python?", I say: "There are no
stack-allocated objects in Python". Counter questions: does D have
heap-allocated objects? If yes, how are they finalised before being
destructed? And what happens to references to these objects after their
destruction?
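
One common way to break such a cycle is to hold the back link through a weak
reference, so __del__ keeps firing promptly; a minimal sketch with made-up
classes:

import weakref

class Parent:
    def __init__(self):
        self.child = Child(self)
    def __del__(self):
        print("Parent finalized")

class Child:
    def __init__(self, parent):
        self.parent = weakref.ref(parent)   # weak back link: no cycle
    def __del__(self):
        print("Child finalized")

p = Parent()
del p    # both finalizers run promptly; a strong back-reference would
         # have formed a cycle and delayed or prevented them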
 
