Favorite non-Python language trick?


Bernhard Herzog

Scott David Daniels said:
Actually this is the old (and terrifying) Smalltalk message 'becomes:'.
There is a concrete reason it is not in python: objects are represented
as pointers to their data structures, do not have identical sizes, and
therefore cannot be copied into each others data space.

That limitation applies only some of the time, e.g. when you inherit
from built-in types. But for ordinary classes it can be done:
>>> class A:
...     def __init__(self, x):
...         self.x = x
>>> class B:
...     def __init__(self, y):
...         self.y = y
>>> a = A(1)
>>> a.__class__ = B
>>> isinstance(a, B)
True
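The same demonstration still runs in modern Python; a minimal sketch (the class names are illustrative) showing that ordinary classes accept the swap while instances of built-in types raise TypeError, which is the limitation Bernhard describes:

```python
class A:
    def __init__(self, x):
        self.x = x

class B:
    def __init__(self, y):
        self.y = y

a = A(1)
a.__class__ = B          # the poor man's Smalltalk 'becomes:'
print(isinstance(a, B))  # the instance dict (a.x) survives the swap

# Instances of built-in types refuse it: their memory layouts differ.
try:
    (5).__class__ = B
except TypeError as exc:
    print("TypeError:", exc)
```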


Bernhard
 

Steven D'Aprano

Devan said:
With the exception of reduce(lambda x,y:x*y, sequence), reduce can be
replaced with sum, and Guido wants to add a product function.

How do you replace:

reduce(lambda x,y: x*y-1/y, sequence)

with sum?


Inquiring minds want to know.
 

Christopher Subich

Steven said:
How do you replace:

reduce(lambda x,y: x*y-1/y, sequence)

with sum?

You don't, but an almost equally short replacement works just as well,
and doesn't even need the lambda:

>>> sequence = map(float, range(1, 100))
>>> j = sequence[0]
>>> for y in sequence[1:]:
...     j = j*y - 1/y
>>> j
9.3326215443944096e+155

Sure, this isn't a sum, but I'd argue that the for loop solution is
superior:

1) For single expressions, the guts of the operation is still a single line
2) This completely avoids lambda -- while I myself am ambivalent about
the idea of lambda going away, lambda syntax can get hairy for
complicated expressions -- the comma changes meaning halfway through the
expression, from 'parameter delimiter in lambda' to 'next parameter in
reduce'
3) This trivially extends to a block of code, which a lambda can't
4) Behavior for zero- and one-length lists is explicit and obvious.
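As a concrete illustration of point 1, here is the reduce expression from upthread next to its for-loop form (Python 3 spelling, where reduce lives in functools; the three-element sequence is just an example):

```python
from functools import reduce  # reduce moved to functools in Python 3

sequence = [2.0, 3.0, 4.0]

# reduce form, as discussed upthread
r = reduce(lambda x, y: x*y - 1/y, sequence)

# equivalent for-loop form: no lambda, explicit accumulator
acc = sequence[0]
for y in sequence[1:]:
    acc = acc*y - 1/y

print(r == acc)  # identical float operations in the same order
```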

There are, of course, a few disadvantages, but I think they're mostly
corner cases.
1) This solution obviously isn't itself an expression (although the
result is a single variable), so it can't be used in totality as a
component to a larger call.
[Rebuttal: When exactly would this be a good thing, anyway? Reduce
statements are at least 11 characters long, 13 with a one-character
default value. Using this as a parameter to just about anything else,
even a function call, seems a bit unreadable to me.]
2) An explicit intermediate/result value is needed. This seems to be
more of a 'cleanliness' argument than anything.

Besides, rewriting this as a for loop actually improves performance:

def f():
    j = 0.0
    for x in sequence:
        j = j*x + 1/x
    return j

starttime = time.time()
for i in xrange(n):
    f()
print time.time() - starttime
3.67499995232

Making the series bigger results in even worse relative performance (no
idea why):
125.491000175

So really, 'reduce' is already useless for large anonymous blocks of
code (which can't be defined in lambdas), and it seems slower than 'for
... in' for even simple expressions.
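A modern sketch of that comparison using the standard timeit module (the sequence length is arbitrary, and actual timings vary by machine, so no numbers are claimed here):

```python
import timeit
from functools import reduce

sequence = [float(i) for i in range(1, 50)]

def with_reduce():
    return reduce(lambda x, y: x*y - 1/y, sequence)

def with_loop():
    # Same reduction, but without a Python-level function call per element.
    acc = sequence[0]
    for y in sequence[1:]:
        acc = acc*y - 1/y
    return acc

assert with_reduce() == with_loop()
t_reduce = timeit.timeit(with_reduce, number=1000)
t_loop = timeit.timeit(with_loop, number=1000)
print(t_reduce, t_loop)
```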
 

Christopher Subich

Devan said:
sum(sequence[0] + [1/element for element in sequence[1:]])

I think that should work.

That won't work, because it misses the x*y part of the expression
(x[n]*x[n+1] - 1/x[n+1], for people who haven't immediately read the
grandparent).
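A quick check makes the difference concrete (Python 3 spelling; the three-element sequence and the resulting numbers are just the example chosen here):

```python
from functools import reduce

sequence = [2.0, 4.0, 5.0]

r = reduce(lambda x, y: x*y - 1/y, sequence)

# Devan's expression as posted is a TypeError: float + list.
try:
    sum(sequence[0] + [1/e for e in sequence[1:]])
except TypeError:
    pass

# Even the charitable reading drops the multiplications entirely:
s = sum([sequence[0]] + [1/e for e in sequence[1:]])
print(r, s)  # 38.55 vs 2.45 -- not the same reduction at all
```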

Personally, I think demanding that it be writable as a sum (or product,
or any, or all) is a false standard -- nobody's claimed that these would
replace all cases of reduce, just the most common ones.
 

James

Steven said:
What advantages do Pascal-like for loops give over Python for loops?

The only two I can think of are trivial:

(1) the Pascal-like for loop is six characters shorter to type:

for i = 1 to 10: # 16 chars
for i in range(1, 11): # 22 chars

(2) for very large ranges, you don't have to hold the entire list of
integers in memory. But you can use xrange() instead of range(), which
also doesn't hold the entire list in memory.

First, I was never concerned about what I can do with a new keyword
that I could not do without, since most languages are Turing complete.
The question is one of elegance, and that is subjective. I do know that
if Python kept adding features that everyone liked, it would end up like
Perl (although D seems to be doing a great job incorporating everyone's
favorite features while still retaining language consistency. Then
again, it is very new.). But this thread is about favorite features
from other languages.

Secondly, the point was about ranges. Their use in loops (as in Ada and
Ruby) is just one instance. For me, they are more readable, in the same
way that I find -- comments (as in Eiffel and Ada) more readable than //
and #, which do the same.

Steven said:
Explain please.

Design By Contract, as in Eiffel and D.

Steven said:
Explain please.

As in Delphi:
http://www.delphibasics.co.uk/RTL.asp?Name=With

There have been some PEPs regarding this.
 

Steven D'Aprano

Devan said:
sum(sequence[0] + [1/element for element in sequence[1:]])

I think that should work.

That won't work, because it misses the x*y part of the expression
(x[n]*x[n+1] - 1/x[n+1], for people who haven't immediately read the
grandparent).

Personally, I think demanding that it be writable as a sum (or product,
or any, or all) is a false standard -- nobody's claimed that these would
replace all cases of reduce, just the most common ones.


Er, excuse me, but that is EXACTLY what Devan claimed.

Quote: "With the exception of reduce(lambda x,y:x*y, sequence), reduce can be
replaced with sum, and Guido wants to add a product function."
 

Peter Otten

Steven said:
How do you replace:

reduce(lambda x,y: x*y-1/y, sequence)

with sum?

missing = object()

def my_reduce(f, items, first=missing):
    class adder:
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            return adder(f(self.value, other))

    if first is missing:
        items = iter(items)
        try:
            first = items.next()
        except StopIteration:
            raise TypeError
    return sum(items, adder(first)).value

if __name__ == "__main__":
    sequence = map(float, range(10))
    r = reduce(lambda x, y: x*y-1/y, sequence)
    s = my_reduce(lambda x, y: x*y-1/y, sequence)
    assert r == s

:)

Peter
 

Christopher Subich

Steven said:
Er, excuse me, but that is EXACTLY what Devan claimed.

Quote: "With the exception of reduce(lambda x,y:x*y, sequence), reduce can be
replaced with sum, and Guido wants to add a product function."

Okay, then... "not many people have claimed that sum is a universal
replacement for reduce, only for the most common cases." It's further
argued that the uncommon cases are more flexible and (again, mostly)
anywhere from only slightly less readable to significantly more readable
in for-loop form.

The only corner case that isn't, so far as I know, is when the reduce()
has no default initial value and the sequence/generator might possibly
have 0 elements. But that's a TypeError anyway.
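That corner case can be made explicit in the for-loop form; `loop_reduce` below is a hypothetical helper sketched for illustration, not anything from the thread:

```python
from functools import reduce

def loop_reduce(seq, default=None):
    """For-loop equivalent with an explicit policy for empty input."""
    it = iter(seq)
    try:
        acc = next(it)
    except StopIteration:
        return default          # explicit choice, instead of blowing up
    for y in it:
        acc = acc*y - 1/y
    return acc

# reduce with no initial value raises TypeError on an empty sequence:
try:
    reduce(lambda x, y: x*y - 1/y, [])
except TypeError as exc:
    print("reduce:", exc)

print(loop_reduce([]))          # None
print(loop_reduce([2.0, 4.0]))  # 7.75
```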
 

Devan L

Okay, maybe that was too restrictive: reduce can *usually* be replaced
with sum. Sorry about that.
 

Edvard Majakari

(sorry, my NUA had lost the original article)
Ability to tag some methods 'deprecated', as in Java (from 1.5
onwards?). However, the Python interpreter wouldn't have to do it: pydoc and
similar tools could detect, say, '@deprecated' in a method's docstring and
warn the user about it.

Currently I just document deprecated methods, and if I feel like it, I also
add

def some_method_which_is_badly_named_or_just_plain_wrong(..):
    """docstring

    This method is now deprecated. Use frob() instead.
    """
    sys.stderr.write('warning: method some_method_which_is_badly_named_or_just_plain_wrong is now deprecated\n')
 

Thomas Heller

Edvard Majakari said:
(sorry, my NUA had lost the original article)

Ability to tag some methods 'deprecated', as in Java (from 1.5
onwards?). However, the Python interpreter wouldn't have to do it: pydoc and
similar tools could detect, say, '@deprecated' in a method's docstring and
warn the user about it.

I don't see what's wrong with this code, and if one wanted, one could
also implement a decorator which calls warnings.warn when the function
is called:

def c_buffer(init, size=None):
    "deprecated, use create_string_buffer instead"
    import warnings
    warnings.warn("c_buffer is deprecated, use create_string_buffer instead",
                  DeprecationWarning, stacklevel=2)
    return create_string_buffer(init, size)
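The reusable decorator Thomas alludes to might look like this sketch (the `deprecated` factory and the example names are illustrative, not a standard library API):

```python
import functools
import warnings

def deprecated(replacement):
    """Decorator factory: warn on every call to the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated, use %s instead"
                          % (func.__name__, replacement),
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("frob")
def badly_named_method(x):
    # Hypothetical example of a function kept only for compatibility.
    return x + 1
```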

Thomas
 

Edvard Majakari

Thomas Heller said:
I don't see what's wrong with this code, and if one wanted, one could
also implement a decorator which calls warnings.warn when the function
is called:

def c_buffer(init, size=None):
    "deprecated, use create_string_buffer instead"
    import warnings
    warnings.warn("c_buffer is deprecated, use create_string_buffer instead",
                  DeprecationWarning, stacklevel=2)
    return create_string_buffer(init, size)

Well, nothing's wrong there, and the same could be done in Java
before. However, having a consistent deprecation string everywhere allows,
e.g., easier automatic discovery of such methods from documentation.

Decorators also help here, but they require version 2.4 or newer (which
usually isn't a problem, but can be).

Hey! I hadn't noticed the category parameter or stacklevel in the warnings
module (I've just used it a few times, and never read the docs because I
didn't need to). Neat, thanks.

--
# Edvard Majakari Software Engineer
# PGP PUBLIC KEY available Soli Deo Gloria!

$_ = '456476617264204d616a616b6172692c20612043687269737469616e20'; print
join('',map{chr hex}(split/(\w{2})/)),uc substr(crypt(60281449,'es'),2,4),"\n";
 

Shai

I only saw this today... sorry about the late response. Anyway,
replying to your two messages at once:

Mike said:
Last time I checked, dynamic binding variables were frowned on in LISP
systems as well. Scheme doesn't have them. Common LISP requires
special forms to use them.

They're called "Special vars", and you need to define them (unlike
local LISP variables, which behave essentially like Python vars), but
then you use them just like other vars (that is, you usually bind them
with LET). This is the first I've heard about them being ill-considered in
LISP; http://www.gigamonkeys.com/book/ is a recently published LISP
book which recommends them. I don't know about Scheme, but I think it
does have them.

The one "special" thing you see in every use of these vars in LISP is a
naming convention; as LISP symbols can contain most characters, they
are usually named with asterisks on both ends to distinguish them.
Thus, in the example above, the dynamic var would be named "*x*".
The problem with the given use case is that it lets every routine in
the call chain substitute its own variable for the library parameter
you want to use, with no local indication that this is going
on. This makes bugs in dynamically scoped variables a PITA to find.

In LISP, the naming convention indeed takes care of that; and indeed, I
consider taking the LISP way would be better. The definition of x as
dynamic would then be not in bar nor its callers, but in the definition
of x, as in

dynamic x = 10
def bar():
    print x

I specified the syntax as I did, specifically to make it match the
current definition of globals, which "enjoys" the same problems you
noted with my dynamics.
Here's the problem with that. Consider this script:

import foo

x = 10
def bar():
    print x

foo.foogle(bar)

If foo.foogle includes "dynamic x" and then invokes bar, bar could
print anything. This makes the behavior of bar unpredictable by
examining the source, with no hint that that is going on.
While I didn't write it explicitly, if both LISP and Python globals are
to be followed, the dynamic x should somehow be defined in the scope of
its module. On second thought, this means "dynamic" _must_ be added in
the variable definition, for foo.foogle will simply access it as
"othermodule.x", which doesn't differentiate globals from dynamics.

Either way, Python as it is now allows foo.foogle to change x even
without dynamic variables; it is accessible as barmodule.x. bar()
should expect to have other functions mess with its globals, and
dynamics are no different.
That sounds like a fine requirement. Now, with my corrected
proposition, it would be implementable at the module-object level, so
that only modules which use the feature, and modules which use them,
would be affected.
Here's a proposal for dynamically bound variables that you should be
able to implement without affecting the runtime behavior of code that
doesn't use it.

Instead of dynamic meaning "all references to the named variable(s)
will be dynamic until this function exits", have it mean "the named
variable(s) will be dynamic in this function." Whether it should only
check local variables in the calling routines, check local + global,
or check for all free variables, is an open question.

I.e. - your example would be written:

x = 10
def foo():
    dynamic x
    print x

def bar():
    x = 11
    foo()

def baz():
    bar()  # prints 11
    foo()  # Possibly an error?

This introduces the same problem you noted with my original proposal,
but in reverse: now, in bar(), you define and use a local variable, and
suddenly some library function changes its behavior mysteriously.
For my example above, bar would *always* print 10. Nothing that
foo.foogle did would change that. However, you could write:

import foo

def bar():
    dynamic x
    print x

foo.foogle(bar)

In this case, bar will print whatever foo.foogle sets x to - and it's
noted in the source to bar. This means that functions that don't
declare a dynamic variable can be compiled to the same code they are
compiled to now.

This is, I believe, disproved by my comment above.

Thanks for your time and effort,

Shai.
 

Mike Meyer

Shai said:
They're called "Special vars", and you need to define them (unlike
local LISP variables, which behave essentially like Python vars), but
then you use them just like other vars (that is, you usually bind them
with LET). This is the first I've heard about them being ill-considered in
LISP; http://www.gigamonkeys.com/book/ is a recently published LISP
book which recommends them. I don't know about Scheme, but I think it
does have them.

I'm pretty sure scheme doesn't have dynamically bound variables. I
just went through r5rs to check, and couldn't find them.
dynamic x = 10
def bar():
    print x

I specified the syntax as I did, specifically to make it match the
current definition of globals, which "enjoys" the same problems you
noted with my dynamics.

This looks different from what I understood before. You're now
declaring the variable dynamic in the global scope, rather than in the
function that makes it dynamic. This is a *much* more palatable
situation.

Globals are lexically scoped. As such, you can find the definition of
the variable by examining the module that includes the function. Yes,
other modules can reach into your module and change them - but you can
find those, because they reference your module by name.

A dynamic variable declared so in a function has no such clue
associated with it. If the variable is declared dynamic in the module
of the enclosed function, that provides a contextual clue.

Of course, you can do pretty much anything you want if you're willing
to grovel over the guts of the environment enough, but some things
shouldn't be easy.
While I didn't write it explicitly, if both LISP and Python globals are
to be followed, the dynamic x should somehow be defined in the scope of
its module. On second thought, this means "dynamic" _must_ be added in
the variable definition, for foo.foogle will simply access it as
"othermodule.x", which doesn't differentiate globals from dynamics.

Either way, Python as it is now allows foo.foogle to change x even
without dynamic variables; it is accessible as barmodule.x. bar()
should expect to have other functions mess with its globals, and
dynamics are no different.

The question is, how hard is it to find the other people who are
messing with bar()'s globals? Normal usage with the current situation
makes it fairly straightforward. How does adding dynamically bound
variables change this?

Of course, if you add the requirement that the variable be tagged in
the scope of the module, you're making things a lot better. That way,
readers of the module know that they need to look back along the call
chain for functions that use such variables. That makes them much
saner to find.
This introduces the same problem you noted with my original proposal,
but in reverse: Now, in bar(), you define and use a local variable, and
suddenly some library function changes its behavior misteriously.

True. That pretty much kills my proposal.

<mike
 

Shai

Mike said:
I'm pretty sure scheme doesn't have dynamically bound variables. I
just went through r5rs to check, and couldn't find them.

Yes, you're right. One learns.
This looks different from what I understood before. You're now
declaring the variable dynamic in the global scope, rather than in the
function that makes it dynamic. This is a *much* more palatable
situation.

This is indeed different from what I said first. It copies the Common
LISP construct without regard to consistency with the Python global
construct.
Globals are lexically scoped. As such, you can find the definition of
the variable by examining the module that includes the function. Yes,
other modules can reach into your module and change them - but you can
find those, because they reference your module by name.

A dynamic variable declared so in a function has no such clue
associated with it. If the variable is declared dynamic in the module
of the enclosed function, that provides a contextual clue.

In my original proposal, dynamic variables are seen as globals from the
functions in their module which read them; no more, no less. The
important point I want from dynamic scope is the time-locality of
assignments, that is, the fact that they are undone when the (lexical)
scope of the new binding ends. This allows the use of globals with a
lot less fear of unintended interactions between users of the module
(well, this is only accurate until multithreading enters the picture,
but that can be handled too).
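One way to get that time-locality, and the multithreading handling, in today's Python is to emulate dynamic binding with the standard contextvars module rather than a new keyword; a sketch (the names x, binding, and bar are all illustrative):

```python
import contextvars
from contextlib import contextmanager

# A module-level "special variable", asterisk-named as in the LISP convention.
x = contextvars.ContextVar("*x*", default=10)

@contextmanager
def binding(var, value):
    """LET-like rebinding: undone when the with-block's dynamic extent ends.
    Each thread sees its own context, so bindings don't leak across threads."""
    token = var.set(value)
    try:
        yield
    finally:
        var.reset(token)

def bar():
    return x.get()

print(bar())         # 10
with binding(x, 11):
    print(bar())     # 11 -- every callee sees the new binding
print(bar())         # 10 again
```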

[rest snipped]
 

Edvard Majakari

Simon Brunning said:

Neat.

I guess about 75% of programming-related things classified as neat-o or
"convenient!" are already implemented by some Pythonista(s). Spoils all the
fun of reinventing the wheel, doesn't it. :)

 

Robert Kern

Edvard said:
Neat.

I guess about 75% of programming-related things classified as neat-o or
"convenient!" are already implemented by some Pythonista(s). Spoils all the
fun of reinventing the wheel, doesn't it. :)

Doesn't seem to stop most Pythonistas from trying, though. :)

--
Robert Kern
(e-mail address removed)

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
 
