Python's "only one way to do it" philosophy isn't good?


Andy Freeman

Something like the current situation with Python web frameworks ;)

Actually, no. For python, the most reasonable macro scope would be
the file, so different files in the same application could easily use
conflicting macros without any problems.
 

Andy Freeman

Right, more scattered special purpose kludges instead of a powerful
uniform interface.

Huh? The interface could continue to be (map ...).

Python's for statement relies on the fact that python is mostly object
oriented and many of the predefined types have an iterator interface.
Lisp lists and vectors currently aren't objects and very few of the
predefined types have an iterator interface.

It's easy enough to get around the lack of objectness and add the
equivalent of an iterator interface, in either language. The fact that
lisp folks haven't bothered suggests that this isn't a big enough
issue.

The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.

Syntax matters.
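To illustrate the point about Python's for statement: anything exposing the iterator protocol (an `__iter__` that returns an object with `__next__`) plugs straight into for. A minimal sketch (`Countdown` is a made-up example class):

```python
# Countdown is a made-up class: anything with __iter__/__next__
# plugs straight into Python's for statement.
class Countdown:
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

result = []
for i in Countdown(3):        # for drives the iterator protocol
    result.append(i)
assert result == [3, 2, 1]
```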
 

Chris Mellon

Huh? The interface could continue to be (map ...).

Python's for statement relies on the fact that python is mostly object
oriented and many of the predefined types have an iterator interface.
Lisp lists and vectors currently aren't objects and very few of the
predefined types have an iterator interface.

It's easy enough to get around the lack of objectness and add the
equivalent of an iterator interface, in either language. The fact that
lisp folks haven't bothered suggests that this isn't a big enough
issue.

Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?
The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.

Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.
 

Chris Mellon

Gee, that's back to the future with 1975 Lisp technology. Destructors
are a much better model for dealing with such things (see not *all*
good ideas come from Lisp -- a few come from C++) and I am dismayed
that Python is deprecating their use in favor of explicit resource
management. Explicit resource management means needlessly verbose
code and more opportunity for resource leaks.

The C++ folks feel so strongly about this, that they refuse to provide
"finally", and insist instead that you use destructors and RAII to do
resource deallocation. Personally, I think that's taking things a bit
too far, but I'd rather it be that way than lose the usefulness of
destructors and have to use "when" or "finally" to explicitly
deallocate resources.

This totally misrepresents the case. The with statement and the
context manager are a superset of the RAII functionality. It doesn't
overload object lifetimes, rather it makes the intent (code execution
upon entrance and exit of a block) explicit. You use it in almost
exactly the same way you use RAII in C++ (creating new blocks as you
need new scopes), and it performs exactly the same function.

Nobody in their right mind has ever tried to get rid of explicit
resource management - explicit resource management is exactly what you
do every time you create an object, or you use RAII, or you open a
file. *Manual* memory management, where the tracking of references and
scopes is placed upon the programmer, is what people are trying to get
rid of and the with statement contributes to that goal, it doesn't
detract from it. Before the with statement, you could do the same
thing but you needed nested try/finally blocks and you had to
carefully keep track of the scopes, order of object creation, which
objects were created, all that. The with statement removes the manual,
error prone work from that and lets you more easily write your intent
- which is *precisely* explicit resource management.

RAII is a good technique, but don't get caught up on the
implementation details. The fact that it's implemented via stack
objects with ctors and dtors is a red herring. The significant feature
is that you've got explicit, predictable resource management with
(and this is the important bit) a guarantee that code will be called
in all cases of scope exit.

The with statement does exactly the same thing, but is actually
superior because

a) It doesn't tie the resource management to object creation. This
means you can use, for example, with lock: instead of the C++ style
Locker(lock)

and

b) You can tell whether you exited with an exception, and what that
exception is, so you can take different actions based on error
conditions vs expected exit. This is a significant benefit, it allows
the application of context managers to cases where RAII is weak. For
example, controlling transactions.
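A minimal sketch of point (b): `__exit__` receives the exception (if any), so a context manager can commit on clean exit and roll back on error, the transaction case mentioned above. `Transaction` and `FakeConn` are hypothetical names for illustration:

```python
# Transaction and FakeConn are made-up names for illustration.
class Transaction:
    def __init__(self, conn):
        self.conn = conn

    def __enter__(self):
        self.conn.begin()
        return self.conn

    def __exit__(self, exc_type, exc_val, exc_tb):
        # __exit__ sees the exception, so it can branch on it.
        if exc_type is None:
            self.conn.commit()
        else:
            self.conn.rollback()
        return False          # let any exception propagate

class FakeConn:
    def __init__(self):
        self.log = []
    def begin(self):
        self.log.append("begin")
    def commit(self):
        self.log.append("commit")
    def rollback(self):
        self.log.append("rollback")

conn = FakeConn()
with Transaction(conn):
    pass                      # clean exit -> commit
try:
    with Transaction(conn):
        raise ValueError("oops")
except ValueError:
    pass                      # error exit -> rollback
assert conn.log == ["begin", "commit", "begin", "rollback"]
```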
Right, but that doesn't mean that 99.9% of the time, the programmer
can't immediately tell that cycles aren't going to be an issue.

I love having a *real* garbage collector, but I've also dealt with C++
programs that are 100,000+ lines long and I wrote plenty of Python
code before it had a real garbage collector, and I never had any
problem with cyclic data structures causing leaks. Cycles are really
not all that common, and when they do occur, it's usually not very
difficult to figure out where to add a few lines to a destructor to
break the cycle.

They can occur in the most bizarre and unexpected places. To the point
where I suspect that the reality is simply that you never noticed your
cycles, not that they didn't exist.
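A minimal sketch of the kind of cycle under discussion, assuming CPython's refcounting plus cycle collector:

```python
import gc
import weakref

class Node:
    pass

def make_cycle():
    a, b = Node(), Node()
    a.other, b.other = b, a   # an innocuous-looking mutual reference
    return weakref.ref(a)

ref = make_cycle()
# Pure refcounting never frees the pair: each object keeps the
# other's count above zero.
assert ref() is not None
gc.collect()                  # CPython's cycle detector reclaims them
assert ref() is None
```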
I'm willing to pay the performance penalty to have the advantage of
not having to use constructs like "when".

"with". And if you think you won't need it because python will get
"real" GC you're very confused about what GC does and how.
Also, I'm not convinced that it has to be a huge performance hit.
Some Lisp implementations had a "1, 2, 3, many" (or something like that)
reference-counter for reclaiming short-lived objects. This bypassed
the real GC and was considered a performance optimization. (It was
probably on a Lisp Machine, though, where they had special hardware to
help.)


All due to the ref-counter? I find this really hard to believe.
People write multi-threaded code all the time in C++ and also use
smart pointers at the same time. I'm sure they have to be a bit
careful, but they certainly don't require a GIL.

A generic threadsafe smart pointer, in fact, is very nearly a GIL. The
GIL isn't just for refcounting, though; it's also about access to the
Python interpreter's internal state.

For the record, the vast majority of multithreaded C++ code is
incorrect or inefficient or both.
I *would* believe that getting rid of the GIL will require some
massive hacking on the Python interpreter, though, and when doing that
it may be significantly easier to switch to having only a real GC than
having two different kinds of automatic memory management.

I vote, though, for putting in that extra work -- compatibility with
Jython be damned.

Get cracking then. You're hardly the first person to say this.
However, of the people who say it, hardly anyone actually produces any
code and the only person I know of who did dropped it when performance
went through the floor. Maybe you can do better.
It still makes perfect sense for AI research. I'm not sure that
Lisp's market share counts as "tiny". It's certainly not huge, at
only 0.669% according to the TIOBE metric, but that's still the 15th
most popular language and ahead of Cobol, Fortran, Matlab, IDL, R, and
many other languages that are still in wide use. (Cobol is probably
still around for legacy reasons, but that's not true for the other
languages I mentioned.)

There's no particular reason why Lisp is any better for AI research
than anything else. I'm not familiar with the TIOBE metric, but I can
pretty much guarantee that regardless of what it says there is far
more COBOL code in the wild, being actively maintained (or at least
babysat) than there is lisp code.
Forth, eh. À chacun son goût (to each his own), but I'd be willing to bet that most
Forth hackers don't believe that Forth is going to make a huge
resurgence and take over the world. And it still has something of a
place as the core of Postscript and maybe in some embedded systems.

Re Lisp, though, there used to be a joke (which turned out to be
false), which went, "I don't know what the most popular programming
language will be in 20 years, but it will be called 'Fortran'". In
reality, I don't know what the most popular language will be called 20
years from now, but it will *be* Lisp.

And everyone who still uses the language actually called Lisp will
continue to explain how it isn't a "real" lisp for a laundry list of
reasons that nobody who gets work done actually cares about.
 

joswig

I personally use Emacs Lisp every day and I think Hedgehog Lisp (a
tiny functional Lisp dialect intended for embedded platforms like cell
phones--the runtime is just 20 kbytes) is a very cool piece of code.
But using CL for new, large system development just seems crazy today.

It seems that many of the hardcore Lisp developers are busy developing
the core of new airline system software (pricing, reservation, ...)
in Common Lisp. It has already replaced some mainframes...
Kind of crazy. I guess that counts as very large systems development.

There is surely also lots of Python involved, IIRC.
 

Douglas Alan

Chris Mellon said:
Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?

I think you overstate your case. Lispers understand iteration
interfaces perfectly well, but tend to prefer mapping functions to
iteration because mapping functions are both easier to code (they are
basically equivalent to coding generators) and efficient (like
non-generator-implemented iterators). The downside is that they are
not quite as flexible as iterators (which can be hard to code) and
generators, which are slow.

Lispers have long since understood how to write mapping function to
iterator converters using stack groups or continuations, but Common
Lisp never mandated stack groups or continuations for conforming
implementations. Scheme, of course, has continuations, and there are
implementations of Common Lisp with stack groups.
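The mapping-function-versus-iterator tradeoff can be sketched in Python, where generators make the conversion trivial (`map_tree` and `iter_tree` are illustrative names):

```python
# A Lisp-style mapping function: recursion keeps the traversal
# logic simple, but control stays inside map_tree.
def map_tree(fn, tree):
    for item in tree:
        if isinstance(item, list):
            map_tree(fn, item)
        else:
            fn(item)

# The generator inverts control: the caller pulls values one at
# a time, which is exactly what an iterator interface requires.
def iter_tree(tree):
    for item in tree:
        if isinstance(item, list):
            yield from iter_tree(item)
        else:
            yield item

tree = [1, [2, 3], [[4], 5]]
out = []
map_tree(out.append, tree)
assert out == [1, 2, 3, 4, 5]
assert list(iter_tree(tree)) == [1, 2, 3, 4, 5]
```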
Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.

So do Lispers, provided that they use an implementation of Lisp that
has the aforementioned extensions to the standard. If they don't,
they are, unfortunately, the prisoners of the standardizing committees.

And, I guarantee you, that if Python were specified by a standardizing
committee, it would suffer this very same fate.

Regarding there being way too many good but incompatible
implementations of Lisp -- I understand. The very same thing has
caused Ruby to close, incredibly rapidly, the lead that Python has
traditionally had over it. The reason for this is that there are
too many good but incompatible Python web dev frameworks, and only one
good one for Ruby. So, we see that while Lisp suffers from too much
of a good thing, so does Python, and that may be the death of it if
Ruby on Rails keeps barreling down on Python like a runaway train.

|>oug
 

Andy Freeman

Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?

The "Blub" argument relies on inability to implement comparable
functionality in "blub". (For example, C programmers don't get to
call Pythonistas Blub programmers because Python doesn't use {} and
Pythonistas don't get to say the same about C programmers because C
doesn't use whitespace.) Generic iterators can be implemented by lisp
programmers and some have. Others haven't had the need.
Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.

"for" isn't the last useful bit of syntax. Python programmers got to
wait until 2.5 to get "with". Python 2.6 will probably have syntax
that wasn't in Python 2.5.

Lisp programmers with a syntax itch don't wait anywhere near that long.
 

Lenard Lindstrom

Douglas said:
Lispers have long since understood how to write mapping function to
iterator converters using stack groups or continuations, but Common
Lisp never mandated stack groups or continuations for conforming
implementations. Scheme, of course, has continuations, and there are
implementations of Common Lisp with stack groups.

Those stack groups

http://common-lisp.net/project/bknr/static/lmman/fd-sg.xml

remind me of Python greenlets

http://cheeseshop.python.org/pypi/greenlet .
 

Douglas Alan

This totally misrepresents the case. The with statement and the
context manager are a superset of the RAII functionality.

No, it isn't. C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques. Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library. One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.
It doesn't overload object lifetimes, rather it makes the intent
(code execution upon entrance and exit of a block) explicit.

But I don't typically wish for this sort of intent to be made
explicit. TMI! I used "with" for *many* years in Lisp, since this is
how non-memory resource deallocation has been dealt with in Lisp since
the dawn of time. I can tell you from many years of experience that
relying on Python's refcounter is superior.

Shouldn't you be happy that there's something I like more about Python
than Lisp?
Nobody in their right mind has ever tried to get rid of explicit
resource management - explicit resource management is exactly what you
do every time you create an object, or you use RAII, or you open a
file.

This just isn't true. For many years I have not had to explicitly
close files in Python. Nor have I had to do so in C++. They have
been closed for me implicitly. "With" is not implicit -- or at least
not nearly as implicit as was previous practice in Python, or as is
current practice in C++.
*Manual* memory management, where the tracking of references and
scopes is placed upon the programmer, is what people are trying to
get rid of and the with statement contributes to that goal, it
doesn't detract from it.

As far as I am concerned, memory is just one resource amongst many,
and the programmer's life should be made easier in dealing with all
such resources.
Before the with statement, you could do the same thing but you
needed nested try/finally blocks

No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.
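A minimal sketch of the destructor-based pattern being described, assuming CPython's refcounting semantics (`ManagedFile` is a hypothetical example for illustration, not a recommended idiom):

```python
# The pattern Douglas describes: acquire in __init__, release in
# __del__, relying on CPython's refcounting to fire __del__ the
# moment the last reference disappears. ManagedFile is a
# hypothetical illustration.
import os
import tempfile

class ManagedFile:
    def __init__(self, path, mode="w"):
        self.f = open(path, mode)

    def write(self, data):
        self.f.write(data)

    def __del__(self):
        if not self.f.closed:
            self.f.close()

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
mf = ManagedFile(path)
mf.write("hello")
f = mf.f           # keep a handle so we can observe the close
del mf             # refcount hits zero; __del__ closes the file now
assert f.closed
```

As the replies in this thread point out, this works only under CPython's refcounting and can be delayed by cycles or saved tracebacks.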
RAII is a good technique, but don't get caught up on the
implementation details.

I'm not -- I'm caught up in the loss of power and elegance that will
be caused by deprecating the use of destructors for resource
deallocation.
The with statement does exactly the same thing, but is actually
superior because

a) It doesn't tie the resource management to object creation. This
means you can use, for example, with lock: instead of the C++ style
Locker(lock)

I know all about "with". As I mentioned above, Lisp has had it since
the dawn of time. And I have nothing against it, since it is at times
quite useful. I'm just dismayed at the idea of deprecating reliance
on destructors in favor of "with" for the majority of cases when the
destructor usage works well and is more elegant.
b) You can tell whether you exited with an exception, and what that
exception is, so you can take different actions based on error
conditions vs expected exit. This is a significant benefit, it
allows the application of context managers to cases where RAII is
weak. For example, controlling transactions.

Yes, for the case where you might want to do fancy handling of
exceptions raised during resource deallocation, then "when" is
superior, which is why it is good to have in addition to the
traditional Python mechanism, not as a replacement for it.
They can occur in the most bizarre and unexpected places. To the point
where I suspect that the reality is simply that you never noticed your
cycles, not that they didn't exist.

Purify tells me that I know more about the behavior of my code than
you do: I've *never* had any memory leaks in large C++ programs that
used refcounted smart pointers that were caused by cycles in my data
structures that I didn't know about.
And if you think you won't need it because python will get "real" GC
you're very confused about what GC does and how.

Ummm, I know all about real GC, and I'm quite aware that Python has
had it for quite some time now. (Though the implementation is rather
different last I checked than it would be for a language that didn't
also have refcounted GC.)
A generic threadsafe smart pointer, in fact, is very nearly a GIL.

And how's that? I should think that modern architectures would have
an efficient way of adding and subtracting from an int atomically. If
they don't, I have a hard time seeing how *any* multi-threaded
applications are going to be able to make good use of multiple
processors.
Get cracking then. You're hardly the first person to say this.
However, of the people who say it, hardly anyone actually produces
any code and the only person I know of who did dropped it when
performance went through the floor. Maybe you can do better.

I really have no desire to code in C, thank you. I'd rather be coding
in Python. (Hence my [idle] desire for macros in Python, so that I
could do even more of my work in Python.)
There's no particular reason why Lisp is any better for AI research
than anything else.

Yes, there is. It's a very flexible language that can adapt to the
needs of projects that need to push the boundaries of what computer
programmers typically do.
I'm not familiar with the TIOBE metric, but I can pretty much
guarantee that regardless of what it says there is far more COBOL
code in the wild, being actively maintained (or at least babysat)
than there is lisp code.

I agree that there is certainly much more Cobol code being
maintained than there is Lisp code, but that doesn't mean that there
are more Cobol programmers writing new code than there are Lisp
programmers writing new code. A project would have to be run by a
madman to begin a new project in Cobol.
And everyone who still uses the language actually called Lisp will
continue to explain how it isn't a "real" lisp for a laundry list of
reasons that nobody who gets work done actually cares about.

And where are you getting this from? I don't know anyone who claims
that any commonly used dialect of Lisp isn't *really* Lisp.

|>oug
 

Dennis Lee Bieber

Something like the current situation with Python web frameworks ;)

Not quite -- who uses two web frameworks in one application?
(besides, if you don't "from ... import *", you have module
qualification to differentiate...)

But if these "macros" are supposed to allow one to sort of extend
Python syntax, are you really going to code things like

macrolib1.keyword

everywhere?
--
Wulfraed Dennis Lee Bieber KD6MOG
(e-mail address removed) (e-mail address removed)
HTTP://wlfraed.home.netcom.com/
(Bestiaria Support Staff: (e-mail address removed))
HTTP://www.bestiaria.com/
 

Graham Breed

Dennis Lee Bieber wrote:
But if these "macros" are supposed to allow one to sort of extend
Python syntax, are you really going to code things like

macrolib1.keyword

everywhere?

I don't see why that *shouldn't* work. Or "from macrolib1 import
keyword as foo". And to be truly Pythonic the keywords would have to
be scoped like normal Python variables. One problem is that such a
system wouldn't be able to redefine existing keywords.

Let's wait for a concrete proposal before delving into this rats'
cauldron any further.


Graham
 

Douglas Alan

Dennis Lee Bieber said:
But if these "macros" are supposed to allow one to sort of extend
Python syntax, are you really going to code things like

macrolib1.keyword
everywhere?

No -- I would expect that macros (if done the way that I would like
them to be done) would work something like so:

from setMacro import macro set, macro let
let x = 1
set x += 1

The macros "let" and "set" (like all macro invocations) would have to
be the first tokens on a line. They would be passed either the
strings "x = 1" and "x += 1", or some tokenized version thereof.
There would be parsing libraries to help them from there.

For macros that need to continue over more than one line, e.g.,
perhaps something like

let x = 1
    y = 2
    z = 3
set x = y + z
    y = x + z
    z = x + y
print x, y, z

the macro would parse up to when the indentation returns to the previous
level.

For macros that need to return values, a new bracketing syntax would
be needed. Perhaps something like:

while $(let x = foo()):
    print x

|>oug
 

John Nagle

Douglas said:
No, it isn't. C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques. Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library. One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.

Smart pointers in C++ never quite work. In order to do anything
with the pointer, you have to bring it out as a raw pointer, which makes
the smart pointer unsafe. Even auto_ptr, after three standardization
attempts, is still unsafe.

Much handwaving around this problem comes from the Boost crowd, but
in the end, you just can't do safe reference counted pointers via
C++ templates. It requires language support.

This is off topic, though, for Python. If anybody cares,
look at my postings in comp.lang.c++.std for a few years back.

Python is close to getting it right, but not quite. Python destructors
aren't airtight; you can pass the "self" pointer out of a destructor, which
"re-animates" the object. This generally results in undesirable behavior.

Microsoft's "managed C++" has the same problem. They explicitly addressed
"re-animation" and consider the possibility that a destructor can be called
twice. To see the true horror of this approach, read

http://www.codeproject.com/managedcpp/cppclidtors.asp

Microsoft Managed C++ ended up having destructors, finalizers,
explicit destruction, scope-based destruction of locals, re-animation,
and nondeterministic garbage collection, all in one language.
(One might suspect that this was intended to drive people to C#.)

In Python, if you have reference loops involving objects
with destructors, the objects don't get reclaimed at all. You don't
want to call destructors from the garbage collector. That creates
major problems, like introducing unexpected concurrency and weird
destructor ordering issues.

Much of the problem is that Python, like Perl and Java, started out
with strong pointers only, and, like Perl and Java, weak pointers
were added as afterthoughts. Once you have weak pointers, you can
do it right. Because weak pointers went in late, there's a legacy
code problem, mostly in GUI libraries.

One right answer would be a pure reference counted system where
loops are outright errors, and you must use weak pointers for backpointers.
I write Python code in that style, and run with GC in debug mode,
to detect leaks. I modified BeautifulSoup to use weak pointers
where appropriate, and passed those patches back to the author.
When all or part of a tree is detached, it goes away immediately,
rather than hanging around until the next GC cycle. The general
idea is that pointers toward the leaves of trees should be strong
pointers, and pointers toward the root should be weak pointers.
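A minimal sketch of that convention in Python, using a weakref for the up-pointer so a detached node is reclaimed by refcounting alone (`TreeNode` is an illustrative name):

```python
import weakref

# TreeNode is illustrative: strong pointers toward the leaves,
# a weak pointer toward the root, per Nagle's convention.
class TreeNode:
    def __init__(self, parent=None):
        self.children = []                      # strong, downward
        self._parent = weakref.ref(parent) if parent else None

    @property
    def parent(self):
        return self._parent() if self._parent else None

    def add_child(self):
        child = TreeNode(self)
        self.children.append(child)
        return child

root = TreeNode()
child = root.add_child()
assert child.parent is root
probe = weakref.ref(child)
root.children.remove(child)
del child
# No cycle was ever formed, so the detached node went away
# immediately, without waiting for a GC pass.
assert probe() is None
```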

For a truly sound system, you'd want to detect reference loops
at the moment they're created, and handle them as errors. This
is quite possible, although inefficient for certain operations.
Reversing a linked list that has depth counts is expensive. But then,
Python lists aren't implemented as linked lists; they're variable sized arrays
with one reference count for the whole array. So, in practice,
the cases where maintaining depth counts gets expensive
are rare.

Then you'd want a way to limit the scope of "self" within a destructor,
so that you can't use it in a context which could result in it
outliving the destruction of the object. This is a bit tricky,
and might require some extra checking in destructors.
The basic idea is that once the reference count has gone to 0,
anything that increments it is a serious error. (As mentioned
above, Microsoft Managed C++ allowed "re-animation", and it's
clear from that experience that you don't want to go there.)

With those approaches, destructors
would be sound, order of destruction would be well defined, and
the "here be dragons" notes about destructors could come out of
the documentation.

With that, we wouldn't need "with". Or a garbage collector.

If you like minimalism, this is the way to go.

John Nagle
 

Paul Rubin

Douglas Alan said:
No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.

But without the try/finally blocks, if there is an unhandled
exception, it passes a traceback object to higher levels of the
program, and the traceback contains a pointer to the resource, so you
can't be sure the resource will ever be freed. That was part of the
motivation for the with statement.
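A small demonstration of that motivation, assuming CPython: a saved traceback keeps the raising frame, and thus its locals, alive, so refcounting alone does not free the resource (`Resource` and `fail_with_resource` are made-up names):

```python
import gc
import sys
import weakref

class Resource:
    """Stands in for an open file, lock, or connection."""
    pass

def fail_with_resource():
    r = Resource()
    probe = weakref.ref(r)
    try:
        raise RuntimeError("boom")
    except RuntimeError:
        tb = sys.exc_info()[2]   # saved, as error-reporting code often does
    return probe, tb

probe, tb = fail_with_resource()
# The traceback references the frame whose locals include r, so
# refcounting has not freed the resource.
assert probe() is not None
del tb
gc.collect()   # the traceback<->frame cycle needs the cycle collector
assert probe() is None
```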
And how's that? I should think that modern architectures would have
an efficient way of adding and subtracting from an int atomically.

I'm not sure. In STM implementations it's usually done with a
compare-and-swap instruction (CMPXCHG on the x86) so you read the old
integer, increment a local copy, and CMPXCHG the copy into the object,
checking the swapped-out value to make sure that nobody else changed
the object between the copy and the swap (rollback and try again if
someone has). It might be interesting to wrap Python refcounts that
way, but really, Python should move to a compacting GC of some kind,
so the heap doesn't get all fragmented. Cache misses are a lot more
expensive now than they were in the era when CPython was first
written.
If they don't, I have a hard time seeing how *any* multi-threaded
applications are going to be able to make good use of multiple processors.

They carefully manage the number of mutable objects shared between
threads is how. A concept that doesn't mix with CPython's use of
reference counts.
Yes, there is. [Lisp] it's a very flexible language that can adapt
to the needs of projects that need to push the boundaries of what
computer programmers typically do.

Really, if they used better languages they'd be able to operate within
boundaries instead of pushing them.
 

Antoon Pardon

Which is very valuable... IF you care about writing a new object system. I
don't, and I think most developers don't, which is why Lisp-like macros
haven't taken off.

I find this is a rather sad kind of argument. It seems to imply that
python is only for problems that are rather common or similar to
those. If most people don't care about the kind of problem you
are working on, it seems from this kind of argument that python
is not the language you should be looking at.
 

Chris Mellon

No, it isn't. C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques. Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library. One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.

Obviously. But there's nothing about the with statement that's
different from using smart pointers in this regard. I take it back,
there's one case - when you need only one scope in a function, with
requires an extra block while C++ style RAII allows you to
But I don't typically wish for this sort of intent to be made
explicit. TMI! I used "with" for *many* years in Lisp, since this is
how non-memory resource deallocation has been dealt with in Lisp since
the dawn of time. I can tell you from many years of experience that
relying on Python's refcounter is superior.

I question the relevance of your experience, then. Refcounting is fine
for memory, but as you mention below, memory is only one kind of
resource and refcounting is not necessarily the best technique for all
resources. Java has the same problem, where you've got GC so you don't
have to worry about memory, but no tools for managing non-memory
resources.
Shouldn't you be happy that there's something I like more about Python
than Lisp?

I honestly don't care if anyone prefers Python over Lisp or vice
versa. If you like Lisp, you know where it is.
This just isn't true. For many years I have not had to explicitly
close files in Python. Nor have I had to do so in C++. They have
been closed for me implicitly. "With" is not implicit -- or at least
not nearly as implicit as was previous practice in Python, or as is
current practice in C++.

You still don't have to manually close files. But you cannot, and
never could, rely on them being closed at a given time unless you did
so. If you need a file to be closed in a deterministic manner, then
you must close it explicitly. The with statement is not implicit and
never has been. Implicit resource management is *insufficient* for
the general resource management case. It works fine for memory, it's
okay for files (until it isn't), it's terrible for thread locks and
network connections and database transactions. Those things require
*explicit* resource management.
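For reference, the explicit form being advocated: with guarantees closure at block exit on any Python implementation, independent of refcounts or saved tracebacks:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello")
# Closed at block exit, deterministically, regardless of
# refcounts, saved tracebacks, or the Python implementation.
assert f.closed
```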
As far as I am concerned, memory is just one resource amongst many,
and the programmer's life should be made easier in dealing with all
such resources.

Which is exactly what the with statement is for.
No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.

If you did this in Python, your code was wrong. You were coding C++ in
Python. Don't do it.
I'm not -- I'm caught up in the loss of power and elegance that will
be caused by deprecating the use of destructors for resource
deallocation.

Python has *never had this*. This never worked. It could seem to work
if you carefully, manually, inspected your code and managed your
object lifetimes. This is much more work than the with statement.

To the extent that your code ever worked when you relied on this
detail, it will continue to work. There are no plans to replace
Python's refcounting with fancier GC schemes that I am aware of.
I know all about "with". As I mentioned above, Lisp has had it since
the dawn of time. And I have nothing against it, since it is at times
quite useful. I'm just dismayed at the idea of deprecating reliance
on destructors in favor of "with" for the majority of cases when the
destructor usage works well and is more elegant.

Nothing about Python's memory management has changed. I know I'm
repeating myself here, but you just don't seem to grasp this concept.
Python has *never* had deterministic destruction of objects. It was
never guaranteed, and code that seemed like it benefited from it was
fragile.
Yes, for the case where you might want to do fancy handling of
exceptions raised during resource deallocation, then "when" is
superior, which is why it is good to have in addition to the
traditional Python mechanism, not as a replacement for it.

"with". And it's not replacing anything.
Purify tells me that I know more about the behavior of my code than
you do: I've *never* had any memory leaks in large C++ programs that
used refcounted smart pointers that were caused by cycles in my data
structures that I didn't know about.

I'm talking about Python refcounts. For example, a subtle resource
leak that has caught me before is that tracebacks hold references to
locals in the unwound stack. If you relied on refcounting to clean up
a resource, and you needed exception handling, the resource wasn't
released until *after* the exception unwound, which could be a
problem. Also, holding onto tracebacks for later processing (not
uncommon in event based programs) would artificially extend the
lifetime of the resource. If the resource you were managing was a
thread lock this could be a real problem.
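A toy demonstration of the traceback leak described above (the Resource class is hypothetical; the point is only that the traceback keeps the unwound frame, and hence its locals, alive):

```python
import sys

class Resource:
    """Toy stand-in for something needing explicit cleanup."""
    closed = False
    def close(self):
        self.closed = True

def use_resource():
    r = Resource()
    raise RuntimeError("boom")   # r is a local in the frame being unwound

try:
    use_resource()
except RuntimeError:
    tb = sys.exc_info()[2]
    # The traceback keeps the unwound frame alive, and with it the
    # local 'r' -- so refcounting cannot reclaim the Resource yet.
    leaked = tb.tb_next.tb_frame.f_locals["r"]
    assert leaked.closed is False
```

If Resource were a thread lock or a database transaction, its cleanup would be deferred for as long as anyone held that traceback.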
And if you think you won't need it because python will get "real" GC
you're very confused about what GC does and how.

Ummm, I know all about real GC, and I'm quite aware that Python has
had it for quite some time now. (Though last I checked, the
implementation is rather different than it would be for a language
that didn't also have refcounted GC.)
A generic threadsafe smart pointer, in fact, is very nearly a GIL.

And how's that? I should think that modern architectures would have
an efficient way of adding and subtracting from an int atomically. If
they don't, I have a hard time seeing how *any* multi-threaded
applications are going to be able to make good use of multiple
processors.
Get cracking then. You're hardly the first person to say this.
However, of the people who say it, hardly anyone actually produces
any code and the only person I know of who did dropped it when
performance went through the floor. Maybe you can do better.

I really have no desire to code in C, thank you. I'd rather be coding
in Python. (Hence my [idle] desire for macros in Python, so that I
could do even more of my work in Python.)

In this particular conversation, I really don't think that there's much
to say beyond put up or shut up. The experts in the field have said
that it's not practical. If you think they're wrong, you're going to
need to prove it with code, not by waving your hand.
Yes, there is. It's a very flexible language that can adapt to the
needs of projects that need to push the boundaries of what computer
programmers typically do.

That doesn't make Lisp any better at AI programming than it is for
writing databases or spreadsheets or anything else.
I agree that there is certainly much more Cobol code being
maintained than there is Lisp code, but that doesn't mean that there
are more Cobol programmers writing new code than there are Lisp
programmers writing new code. A project would have to be run by a
madman to begin a new project in Cobol.

More than you'd think, sadly. Although depending on your definition of
"new project", it may not count. There's a great deal of new code
being written in COBOL to run on top of old COBOL systems.
And where are you getting this from? I don't know anyone who claims
that any commonly used dialect of Lisp isn't *really* Lisp.

The language of the future will not be Common Lisp, and it won't be a
well known dialect of Lisp. It will have many Lisp like features, and
"true" Lispers will still claim it doesn't count, just as they do
about Ruby and Python today.
 
A

Andy Freeman

One right answer would be a pure reference counted system where
loops are outright errors, and you must use weak pointers for backpointers.
... The general
idea is that pointers toward the leaves of trees should be strong
pointers, and pointers toward the root should be weak pointers.

While I agree that weak pointers are good and can not be an
afterthought, I've written code where "back" changed dynamically, and
I'm pretty sure that Nagle has as well.

Many programs with circular lists have an outside pointer to the
current element, but the current element changes. All of the links
implementing the list have to be strong enough to keep all of the list
alive.

Yes, one can implement a circular list as a vector with a current
index, but that has space and/or time consequences. It's unclear that
that approach generalizes for more complicated structures. (You can't
just pull all of the links out into such lists.)

In short, while disallowing loops with strong pointers is "a" right
answer, it isn't always a right answer, so it can't be the only
answer.
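For what it's worth, the strong-toward-the-leaves, weak-toward-the-root scheme described above can be sketched with Python's weakref module (the Node class and its names are illustrative only):

```python
import weakref

class Node:
    """Tree node: strong pointers down to children, weak pointer up."""
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self._parent = weakref.ref(parent) if parent is not None else None
        if parent is not None:
            parent.children.append(self)

    @property
    def parent(self):
        # Returns None once the parent has been collected.
        return self._parent() if self._parent is not None else None

root = Node("root")
child = Node("child", root)
assert child.parent is root      # back pointer works while root is alive

del root                         # drop the only strong reference upward
# CPython's refcounting frees the root at once; the weak ref clears.
assert child.parent is None
```

No reference cycle ever forms, so plain refcounting suffices. The catch, as noted above, is that this only works when you know statically which direction is "up".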

-andy
 
J

John Nagle

Andy said:
On Jun 27, 11:41 pm, John Nagle <[email protected]> wrote:
While I agree that weak pointers are good and can not be an
afterthought, I've written code where "back" changed dynamically, and
I'm pretty sure that Nagle has as well.

That sort of thing tends to show up in GUI libraries, especially
ones that have event ordering issues. It's a tough area.
Many programs with circular lists have an outside pointer to the
current element, but the current element changes. All of the links
implementing the list have to be strong enough to keep all of the list
alive.
Yes, one can implement a circular list as a vector with a current
index, but that has space and/or time consequences.

We used to see things like that back in the early 1980s, but today,
worrying about the space overhead associated with keeping separate
track of ownership and position in a circular buffer chain isn't
a big deal. I last saw that in a FireWire driver, and even there,
it wasn't really necessary.

John Nagle
 
A

Andy Freeman

That sort of thing tends to show up in GUI libraries, especially
ones that have event ordering issues. It's a tough area.

It shows up almost anywhere one needs to handle recurring operations.
It also shows up in many dynamic search structures.
We used to see things like that back in the early 1980s, but today,
worrying about the space overhead associated with keeping separate
track of ownership and position in a circular buffer chain isn't
a big deal.

Insert and delete can be a big deal. O(1) is nice.
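The O(1) point can be illustrated with a minimal circular doubly linked list sketch; a vector-with-index representation would pay O(n) to splice elements in or out of the middle:

```python
class Cell:
    """Node in a circular doubly linked list."""
    __slots__ = ("value", "prev", "next")
    def __init__(self, value):
        self.value = value
        self.prev = self.next = self   # a one-element ring

def insert_after(cell, value):
    """O(1): splice a new cell into the ring after 'cell'."""
    new = Cell(value)
    new.prev, new.next = cell, cell.next
    cell.next.prev = new
    cell.next = new
    return new

def remove(cell):
    """O(1): unlink 'cell' from its ring; return the next cell."""
    cell.prev.next = cell.next
    cell.next.prev = cell.prev
    return cell.next

current = Cell(1)
current = insert_after(current, 2)   # ring: 1 <-> 2
current = insert_after(current, 3)   # ring: 1 <-> 2 <-> 3
assert [current.value, current.next.value, current.next.next.value] == [3, 1, 2]
current = remove(current)            # drop 3; 'current' moves on to 1
assert current.value == 1 and current.next.value == 2
```

Note that both prev and next links here are strong, so the ring is exactly the kind of cycle that a loops-are-errors refcounting scheme would forbid; that is Andy's point.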
 
