reduce() anomaly?

  • Thread starter Stephen C. Waterbury
  • Start date

Douglas Alan

Alex Martelli said:
Robin Becker wrote:
Bingo! You disagree with the keystone of Python's philosophy. Every
other disagreement, quite consequently, follows from this one.

The "only one way to do it" mantra is asinine. It's like saying that
because laissez faire capitalism (Perl) is obviously wrong that
communism (FP) is obviously right. The truth lies somewhere in the
middle.

The mantra should be "small, clean, simple, powerful, general,
elegant". This, however, does not imply "only one way to do it",
because power and generality often provide for multiple "right" ways
to flourish. In fact, trying to enforce that there be only "one way
to do it", will make your language bigger, messier, more complicated,
less powerful, less general, and uglier, as misguided souls rush to
remove powerful and general tools like reduce() from the language, and
fill it up with special-purpose tools like sum() and max().
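To make the generality claim concrete, here is a small sketch (in today's Python, reduce() lives in functools; in the 2.3-era Python under discussion it was a builtin):

```python
from functools import reduce  # a builtin in Python 2.x; moved here in Python 3
import operator

nums = [1, 2, 3, 4]

# The special-purpose builtins being debated...
assert sum(nums) == 10
assert max(nums) == 4

# ...are each a specific instance of the general fold:
assert reduce(operator.add, nums) == 10
assert reduce(max, nums) == 4
```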

People have written entire articles on how to do functional
programming in Python:

http://www-106.ibm.com/developerworks/linux/library/l-prog.html

You would castrate Python so that this is not possible? Then you
would diminish Python, by making it a less general, less elegant
language, that has become unsuitable as a language for teaching CS101,
and only suitable for teaching How Alex Martelli Says You Should
Program 101.

|>oug
 

Alan Kennedy

[Robin Becker]
[Alex Martelli]
[Douglas Alan]
The "only one way to do it" mantra is asinine.

I hate to interrupt anybody's free flowing argument, but isn't it the
case that Guido never said "There should be only one way to do it"?

My understanding of the "Pythonic Philosophy" is that "there should be
only one *obvious* way to do it", which is quite a different thing
entirely.

This philosophy is aimed at making python easy for newbies: they
shouldn't get confused by a million and one different possible
approaches. There *should* (not "must"!) be a simple and obvious way
to solve the problem.

Once one is familiar with the language, and all of the subtle power it
encompasses, anything goes in relation to implementing an algorithm.

Just my €0,02.
 

Dave Brueck

The whole 'only one way to do it' concept is almost certainly wrong.
The "only one way to do it" mantra is asinine. It's like saying that
because laissez faire capitalism (Perl) is obviously wrong that
communism (FP) is obviously right. The truth lies somewhere in the
middle.

Part of the problem here is that just saying "only one way to do it" is a
horrible misquote, and one that unfortunately misses IMO some of the most
important parts of that "mantra":

c:\>python
Python 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters
[snip]
There should be one-- and preferably only one --obvious way to do it.

-Dave
 

Alex Martelli

Robin Becker wrote:
...
no disagreement; reduce is in line with that philosophy. sum is a
shortcut and, as others have said, is less general.

'sum' is _way simpler_: _everybody_ understands what it means to sum a
bunch of numbers, _without_ necessarily having studied computer science.

The claim, made by somebody else, that _every_ CS 101 course teaches the
functionality of 'reduce' is not just false, but utterly absurd: 'reduce',
'foldl', and similar higher-order functions, were not taught to me back when
_I_ took my first university exam in CS [it used Fortran as the main
language], they were not taught to my son in _his_ equivalent course [it
used Pascal], and are not going to be taught to my daughter in _her_
equivalent course [it uses C]. Google for "CS 101" and convince yourself
of how utterly absurd that claim is, if needed -- how small is the
proportion of "CS 101" courses that teach these subjects.

Python's purpose is not, and has never been, to maximize the generality
of the constructs it offers. For example, Ruby's hashes (and, I believe,
Perl's) are more general than Python's dicts, because in those hashes
you can use arbitrary mutable keys -- e.g., arrays (Ruby's equivalent of
Python's lists), strings (which in Ruby are mutable -- more general than
Python's strings, innit?), etc. Python carefully balances generality,
simplicity, and performance considerations. Every design is a series of
compromise decisions, and Python's design is, in my opinion, the best
one around (for my purposes) because those compromises are struck with
an _excellent_ batting average (not perfectly, but better than any other
language I've ever studied, or designed myself). The underlying idea
that there should preferably be ONE obvious way to express a solution
is part of what has kept Python so great as it evolved during the years.

not so, I agree that there ought to be at least one way to do it.

But not with the parts that I quoted from the "spirit of C", and I
repeat them because they were SO crucial in the success of C as a
lower-level language AND are similarly crucial in the excellence of
Python as a higher-level one -- design principles that are *VERY*
rare among computer languages and systems, by the way:

Keep the language small and simple.

Provide only one way to do an operation.

"Only one way" is of course an _ideal_ goal (so I believe the way
it's phrased in Python, "preferably only one obvious way") -- but
it's a guiding light in the fog of languages constructed instead
according to YOUR completely opposite goal, and I quote you:

There should be maximal freedom to express algorithms.

Choose just about every OTHER language on Earth, and you'll find
it TRIES (with better or worse results depending on how well or
badly it was designed, of course) to meet your expressed goal.

But NOT Python: you're using one of the _extremely few_ languages
that expressly do NOT try to provide such "maximal freedom", that
try instead to stay small and simple and provide (preferably)
only one (obvious) way to do an operation. Your choice of language
is extremely peculiar in that it _contradicts_ your stated goal!
... you may be right, but I object to attempts to restrict my existing
freedoms at the expense of stability of Python as a whole.

Nobody restricts your existing freedom of using Python 2.3.2 (or
whatever other release you prefer) and all of its constructs and
built-ins; nobody ever proposed retroactively changing the license
to do that (and I doubt it could be done even if anyone wished!).

But we're talking about Python 3.0, "the point at which backwards
compatibility will be broken" -- the next _major_ release. To quote
Guido, in 3.0 "We're throwing away a lot of the cruft that Python has
accumulated." After a dozen years of backwards compatible growth,
Python has a surprisingly small amount of such cruft, but it definitely
does have some. Exactly _what_ qualifies as 'cruft' is not yet decided,
and it won't be for quite a while (Guido thinks he won't do 3.0 until
he can take PSF-financed time off to make sure he does it right). But
there is no doubt that "reduce feature duplication" and "change rules
ever so slightly to benefit optimization" _are_ going to be the
themes of 3.0.

Python can't keep growing with great new ideas, _AND_ still be a
small and simple language, without shedding old ideas that do not
pull their weight any more, if they ever did. Check out, e.g.,
the "python regrets" talk of well over a year ago,
http://www.python.org/doc/essays/ppt/regrets/PythonRegrets.ppt
to see that lambda, map, filter, and reduce are all among those
regrets -- things that Guido believes he never should have allowed
in the language in the first place. E.g., and I quote from him:

"""
reduce()
nobody uses it, few understand it
a for loop is clearer & (usually) faster
"""

and that was way BEFORE sum took away the vast majority of reduce's
use cases -- so, guess how he may feel about it now...?
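Guido's "a for loop is clearer" point from the regrets quote can be seen side by side in a minimal sketch (modern Python, where reduce lives in functools):

```python
from functools import reduce  # a builtin in the 2.x Python discussed here
import operator

data = [2, 3, 4]

# the reduce() version
product_r = reduce(operator.mul, data, 1)

# the equivalent for loop -- the form the regrets talk calls clearer
product_f = 1
for x in data:
    product_f *= x

assert product_r == product_f == 24
```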

One of Python's realities is that _Guido decides_. Not without lots
of pressure being put on him each and every time he does decide, of
course -- that's part of why he doesn't read c.l.py any more, because
the pressure from this venue had gotten way excessive. Of course,
he's going to be pressured on each and every one of the items he
mentions in detail in the "regrets" talk and more summarily in the
Python 3.0 "State of the Python Union" talk. But I'm surely not the
only one convinced that here, like in (by far) most difficult design
decisions in Python's past, he's on the right track. Python does
need to keep growing (languages that stop growing die), but it must
not become big, so it must at long last lose SOME of the accumulated
cruft, the "feature duplication".

I'll deeply regret, come Python 3.0, not being able to code
"if blah(): fleep()"
on one single line any more, personally. And I may try to put on
pressure for a last-minute reprieve for my own pet "duplicated
feature", of course. But in the end, if I use Python it's because
I believe Guido is a better language designer than I am (and that
most other language designers are), so I will accept and respect
his decisions (and maybe keep whining about it forevermore, as I
do for the "print>>bah,gorp" one:).

I am not attempting to restrict anyone or change anyone's programming
style. I just prefer to have a stable language.

I think Python's stability is superb, but stability cannot mean that
there will never be a 3.0 release, or that the language will have to
carry around forever any mistaken decision that was once taken. I'm
not advocating a "high level of churn" or anything like that: we have
extremely "sedate" and stable processes to gradually deprecate old
features. But such deprecation _will_ happen -- of that, there is
most definitely no doubt.


Alex
 

Douglas Alan

Dave Brueck said:
Part of the problem here is that just saying "only one way to do it" is a
horrible misquote, and one that unfortunately misses IMO some of the most
important parts of that "mantra":

Well, perhaps anything like "only one way to do it" should be removed
from the mantra altogether, since people keep misquoting it in order
to support their position of removing beautiful features like reduce()
from the language.

|>oug
 

Dave Brueck

Part of the problem here is that just saying "only one way to do it" is a
Well, perhaps anything like "only one way to do it" should be removed
from the mantra altogether, since people keep misquoting it in order
to support their position of removing beautiful features like reduce()
from the language.

You're joking, right? It is one of the key aspects of Python that makes the
language such a good fit for me. Changing the philosophy because a *few*
people don't "get it" or because they are apt to misquote it seems crazy.

-Dave

P.S. If reduce() were removed, none of my code would break. ;-)
 

Douglas Alan

You're joking, right? It is one of the key aspects of Python that makes the
language such a good fit for me. Changing the philosophy because a *few*
people don't "get it" or because they are apt to misquote it seems crazy.

Of course I am not joking. I see no good coming from the mantra, when
the mantra should be instead what I said it should be: "small, clean,
simple, powerful, general, elegant" -- not anything like, "there
should be only one way" or "one right way" or "one obviously right
way". I have no idea what "one obviously right way" is supposed to
mean (and I don't want to have to become Dutch to understand it)
without the language being overly-restricted to the point of
uselessness like FP is. Even in FP, I doubt that there is always, or
even typically one obviously right way to accomplish a goal. To me,
there is never *one* obviously "right way" to do anything -- the world
(and the programming languages I chose to use) offer a myriad of
possible adventures, and I would never, ever want it to be otherwise.

|>oug
 

David Eppstein

Douglas Alan said:
Well, perhaps anything like "only one way to do it" should be removed
from the mantra altogether, since people keep misquoting it in order
to support their position of removing beautiful features like reduce()
from the language.

I think the more relevant parts of the zen are:
Readability counts.
Although practicality beats purity.

The argument is that reduce is usually harder to read than the loops it
replaces, and that practical examples of it other than sum are sparse
enough that it is not worth keeping it just for the sake of
functional-language purity.
 

Dave Brueck

Part of the problem here is that just saying "only one way to do it" is a
horrible misquote [...] Changing the philosophy because a *few* people
don't "get it" or because they are apt to misquote it seems crazy.

Of course I am not joking. I see no good coming from the mantra, when
the mantra should be instead what I said it should be:

Nah, I don't really like your version. Also, the "only one way to do it"
misquote has been singled out when it really should be considered in the
context of the other items in that list - for whatever reason (maybe to
contrast with Perl, I don't know) it's been given a lot of weight in c.l.py
discussion threads.
"small, clean, simple, powerful, general, elegant"

It's really a matter of taste - both "versions" mean about the same to me
(and to me both mean "get rid of reduce()" ;-) ).
To me, there is never *one* obviously "right way" to do anything

Never? I doubt this very much. When you want to add two numbers in a
programming language, what's your first impulse? Most likely it is to write
"a + b". The same is true of a lot of other, even much more complex, things.
And IMO that's where this principle of an obvious way to do things comes
into play, and it's tightly coupled with the principle of least surprise. In
both cases they are of course just guiding principles or ideals to shoot
for, so there will always be exceptions (not to mention the fact that what
is obvious to one person isn't universal, in the same way that "common
sense" is rarely common).

Having said that though, part of the appeal of Python is that it hits the
nail on the head surprisingly often: if you don't know (from prior
experience) how to do something in Python, your first guess is very often
correct. Correspondingly, when you read someone else's Python code that uses
some feature you're not familiar with, odds are in your favor that you'll
correctly guess what that feature actually does.

And that is why I wouldn't be sad if reduce() were to disappear - I don't
use reduce() and _anytime_ I see reduce() in someone's code I have to slow
way down and sort of rehearse in my mind what it's supposed to do and see if
I can successfully interpret its meaning (and, when nobody's looking, I
might even replace it with a for-loop!). Of course that would be different
if I had a history of using functional programming languages, which I don't.
That's the line Guido walks: trying to find just the right combination of
different-but-better and intuitive-for-most-people, and the aforementioned
items from the Zen of Python are a way of expressing that.

-Dave
 

John Roth

David Eppstein said:
I think the more relevant parts of the zen are:
Readability counts.
Although practicality beats purity.

The argument is that reduce is usually harder to read than the loops it
replaces, and that practical examples of it other than sum are sparse
enough that it is not worth keeping it just for the sake of
functional-language purity.

IMO, this argument is basically religious, that is, it is not based
on common sense. Apply, lambda, map, filter and reduce never constituted
a complete set of functional programming constructs, so trying to
make them so for the sake of the argument is, basically, silly.

Apply was absorbed into the language core with a small change
in function call specifications. Good idea - it gets rid of a built-in
function.
Map and filter were (almost) obsoleted by list comprehensions and the zip
built-in function. Whether or not list comprehensions are clearer than map
and filter is debatable, but the only thing we lost in the transition was
map's capability of processing lists of different lengths.

Sum is not an adequate replacement for reduce, regardless of the
performance benefits. Something similar to a list comprehension would
be. I don't, at this point, have a good syntax to suggest though.

A not so good example would be:

numbers = [1, 2, 3, 4]
result = [x: x + i for i in numbers]

The ":" signals that the result is a single object, not a list of
objects. The first list element is bound to that label, and then
the expression is evaluated for the rest of the elements of the list(s).

The problem with the syntax is the brackets, which suggest that the
result should be a list.
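For comparison, the behavior the hypothetical "[x: x + i for i in numbers]" syntax describes -- seed with the first element, then fold the expression over the rest -- is exactly what reduce() without an initializer already does:

```python
from functools import reduce  # builtin in the Python 2.x of this thread

numbers = [1, 2, 3, 4]

# x is bound to the first element, then x + i is folded over the rest
result = reduce(lambda x, i: x + i, numbers)
assert result == 10
```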

John Roth
 

Bob Gailer

I am glad to hear others rise to the "defense" of reduce(). I too am
reluctant to see it leave the language, as I'd have to rewrite some of my
code to accommodate the change.

The use of zip(seq[1:], [:-1]) to me is more obscure, and
memory/cpu-expensive in terms of creating 3 new lists.

Bob Gailer
303 442 2625
 

Anton Vredegoor

David Eppstein said:
The argument is that reduce is usually harder to read than the loops it
replaces, and that practical examples of it other than sum are sparse
enough that it is not worth keeping it just for the sake of
functional-language purity.

One argument in favor of reduce that I haven't seen anywhere yet is
that it is a kind of bottleneck. If I can squeeze my algorithm through
a reduce that means that I have truly streamlined it and have removed
superfluous cruft. After having done that or other code mangling
tricks -like trying to transform my code into a one liner- I usually
have no mental problems refactoring it back into a wider frame.

So some things have a use case because they require a special way of
thinking, and impose certain restrictions on the flow of my code.
Reduce is an important cognitive tool, at least it has been such for
me during a certain time frame, because nowadays I can often go
through the mental hoop without needing to actually produce the code.
I would be reluctant to deny others the chance to learn how not to use
it.

Anton
 

Alex Martelli

Bob Gailer wrote:
...
The use of zip(seq[1:], [:-1]) to me is more obscure, and

Very obscure indeed (though it's hard to say if it's _more_ obscure without
a clear indication of what to compare it with). Particularly considering
that it's incorrect Python syntax, and the most likely correction gives
probably incorrect semantics, too, if I understand the task (give windows
of 2 items, overlapping by one, on seq?).
memory/cpu-expensive in terms of creating 3 new lists.

Fortunately, Hettinger's splendid itertools module, currently in Python's
standard library, lets you perform this windowing task without creating any
new list whatsoever.

When seq is any iterable, all you need is izip(seq, islice(seq, 1, None)),
and you'll be creating no new list whatsoever. Still, tradeoffs in
obscurity (and performance for midsized lists) are quite as clear.


Alex
 

Douglas Alan

It's really a matter of taste - both "versions" mean about the same to me
(and to me both mean "get rid of reduce()" ;-) ).

No, my mantra plainly states to keep general and powerful features
over specific, tailored features. reduce() is more general and
powerful than sum(), and would thus clearly be preferred by my
mantra.

The mantra "there should be only one obvious way to do it" apparently
implies that one should remove powerful, general features like
reduce() from the language, and clutter it up instead with lots of
specific, tailored features like overloaded sum() and max(). If so,
clearly this mantra is harmful, and will ultimately result in Python
becoming a bloated language filled up with "one obvious way" to solve
every particular idiom. This would be very bad, and make it less like
Python and more like Perl.

I can already see what's going to happen with sum(): Ultimately,
people will realize that they may want to perform more general types
of sums, using alternate addition operations. (For instance, there may
be a number of different ways that you might add together vectors --
e.g., city block geometry vs. normal geometry. Or you may want to add
together numbers using modular arithmetic, without worrying about
overflowing into bignums.) So, a new feature will be added to sum()
to allow an alternate summing function to be passed into sum(). Then
reduce() will have effectively been put back into the language, only
its name will have been changed, and its interface will have been
changed so that everyone who has taken CS-101 and knows off the top of
their head what reduce() is and does, won't easily be able to find it.
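The modular-arithmetic case mentioned above is easy to sketch with reduce() as it stands (shown here in modern Python, where reduce is in functools):

```python
from functools import reduce  # a builtin at the time of this thread

# Summing with an alternate addition operation, as the post anticipates
# people wanting from sum() -- here, addition mod 7, which never leaves
# the range 0..6 and so never overflows into bignums:
nums = [5, 6, 4]
mod_sum = reduce(lambda a, b: (a + b) % 7, nums)
assert mod_sum == 1  # 5 + 6 + 4 = 15, and 15 % 7 == 1
```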

Yes, there are other parts of The Zen of Python that point to the
powerful and general, rather than the clutter of specific and
tailored, but nobody seems to quote them these days, and they surely
are ignoring them when they want to bloat up the language with
unneccessary features like overloaded sum() and max() functions,
rather than to rely on trusty, powerful, and elegant reduce(), which
can easily and lightweightedly do everything that overloaded sum() and
max() can do and quite a bit more.
Never? I doubt this very much. When you want to add two numbers in a
programming language, what's your first impulse? Most likely it is
to write "a + b".

Or b + a. Perhaps we should prevent that, since that makes two
obviously right ways to do it!
Having said that though, part of the appeal of Python is that it hits the
nail on the head surprisingly often: if you don't know (from prior
experience) how to do something in Python, your first guess is very often
correct. Correspondingly, when you read someone else's Python code that uses
some feature you're not familiar with, odds are in your favor that you'll
correctly guess what that feature actually does.

All of this falls out of "clean", "simple", and "elegant".
And that is why I wouldn't be sad if reduce() were to disappear - I don't
use reduce() and _anytime_ I see reduce() in someone's code I have to slow
way down and sort of rehearse in my mind what it's supposed to do and see if
I can successfully interpret its meaning (and, when nobody's looking, I
might even replace it with a for-loop!).

C'mon -- all reduce() is is a generalized sum or product. What's
there to think about? It's as intuitive as can be. And taught in
every CS curriculum. What more does one want out of a function?

|>oug
 

David Eppstein

The use of zip(seq[1:], [:-1]) to me is more obscure, and

Very obscure indeed (though it's hard to say if it's _more_ obscure without
a clear indication of what to compare it with). Particularly considering
that it's incorrect Python syntax, and the most likely correction gives
probably incorrect semantics, too, if I understand the task (give windows
of 2 items, overlapping by one, on seq?).
memory/cpu-expensive in terms of creating 3 new lists.

Fortunately, Hettinger's splendid itertools module, currently in Python's
standard library, lets you perform this windowing task without creating any
new list whatsoever.

When seq is any iterable, all you need is izip(seq, islice(seq, 1, None)),
and you'll be creating no new list whatsoever. Still, tradeoffs in
obscurity (and performance for midsized lists) are quite as clear.

If I'm not mistaken, this is buggy when seq is an iterable, and you need
to do something like
seq1,seq2 = tee(seq)
izip(seq1,islice(seq2,1,None))
instead.
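A runnable sketch of that tee() correction, in modern Python (izip and the itertools islice of the time are plain zip/islice today):

```python
from itertools import islice, tee

def pairwise(seq):
    # tee() gives two independent iterators over seq, so this works
    # even when seq is a one-shot iterable, per the correction above
    a, b = tee(seq)
    return zip(a, islice(b, 1, None))

# windows of 2 items, overlapping by one
assert list(pairwise(iter([1, 2, 3, 4]))) == [(1, 2), (2, 3), (3, 4)]
```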
 

Jeremy Fincher

Douglas Alan said:
Well, perhaps anything like "only one way to do it" should be removed
from the mantra altogether, since people keep misquoting it in order
to support their position of removing beautiful features like reduce()
from the language.

I don't know what your definition of beautiful is, but reduce is the
equivalent of Haskell's foldl1, a function not even provided by most
of the other functional languages I know. I can't see how someone
could consider it "beautiful" to include a rarely-used and
limited-extent fold and not provide the standard folds.

You want to make Python into a functional language? Write a
functional module. foldl, foldr, etc; basically a copy of the Haskell
List module. That should give you a good start, and then you can use
such facilities to your heart's content.

Me? I love functional programming, but in Python I'd much rather read
a for loop than a reduce or probably even a fold. Horses for courses,
you know?

Jeremy
 

Dave Brueck

Of course I am not joking. I see no good coming from the mantra, when
No, my mantra plainly states to keep general and powerful features
over specific, tailored features.

And I disagree that that's necessarily a Good Thing. Good language design is
about finding that balance between general and specific. It's why I'm not a
language designer and it's also why I'm a Python user.
reduce() is more general and
powerful than sum(), and would thus clearly be preferred by my
mantra.

Yes, and eval() would clearly be preferred over them all.
The mantra "there should be only one obvious way to do it" apparently
implies that one should remove powerful, general features like
reduce() from the language, and clutter it up instead with lots of
specific, tailored features like overloaded sum() and max().

I completely disagree - I see no evidence of that. We're looking at the same
data but drawing very different conclusions from it.
I can already see what's going to happen with sum(): Ultimately,
people will realize that they may want to perform more general types
of sums, using alternate addition operations.

Not gonna happen - this _might_ happen if Python was a design-by-committee
language, but it's not.
Yes, there are other parts of The Zen of Python that point to the
powerful and general, rather than the clutter of specific and
tailored, but nobody seems to quote them these days,

'not quoting' != 'not following'
and
'what gets debated on c.l.py' != 'what the Python developers do'
All of this falls out of "clean", "simple", and "elegant".

Not at all - I cut my teeth on 6502 assembly and there is plenty that I
still find clean, simple, and elegant about it, but it's horrible to program
in.
C'mon -- all reduce() is is a generalized sum or product. What's
there to think about? It's as intuitive as can be.

To you, perhaps. Not me, and not a lot of other people. To be honest I don't
really care that it's in the language. I'm not dying to see it get
deprecated or anything, but I do avoid it in my own code because it's
non-obvious to me, and if it were gone then Python would seem a little
cleaner to me.

Obviously what is intuitive to someone is highly subjective - I was really
in favor of adding a conditional operator to Python because to me it _is_
intuitive, clean, powerful, etc. because of my previous use of it in C. As
much as I wanted to have it though, on one level I'm really pleased that a
whole lot of clamoring for it did not result in its addition to the
language. I *like* the fact that there is someone making subjective
judgement calls, even if it means I sometimes don't get my every wish.

A good programming language is not the natural by-product of a series of
purely objective tests.
And taught in
every CS curiculum.

Doubtful, and if it were universally true, it would weaken your point
because many people still find it a foreign or awkward concept. Besides,
whether or not something is taught in a CS program is a really poor reason
for doing anything.

-Dave
 

Douglas Alan

'sum' is _way simpler_: _everybody_ understands what it means to sum a
bunch of numbers, _without_ necessarily having studied computer science.

Your claim is silly. sum() is not *way* simpler than reduce(), and
reduce() can be explained to anyone in 10 seconds: "reduce() is just like
sum(), only with reduce() you can specify whatever addition function
you would like."
The claim, made by somebody else, that _every_ CS 101 course teaches the
functionality of 'reduce' is not just false, but utterly absurd: 'reduce',
'foldl', and similar higher-order functions, were not taught to me back when
_I_ took my first university exam in CS [it used Fortran as te main
language],

Then you weren't taught Computer Science -- you were taught Fortran
programming. Computer Science teaches general concepts, not specific
languages.
they were not taught to my son in _his_ equivalent course [it used
Pascal], and are not going to be taught to my daughter in _her_
equivalent course [it uses C].

Then your children were done a great disservice by receiving a poor
education. (Assuming that is that they wanted to learn Computer
Science, and not Programming in Pascal or Programming in C.)
Python's purpose is not, and has never been, to maximize the generality
of the constructs it offers.

Whoever said anything about "maximizing generality"? If one's mantra
is "small, clean, simple, general, powerful, elegant", then clearly
there will come times when one must ponder on a trade-off between, for
example, elegant and powerful. But if you end up going and removing
elegant features understood by anyone who has studied Computer Science
because you think your audience is too dumb to make a slight leap from
the specific to the general that can be explained on one simple
sentence, then you are making those trade-off decisions in the
*utterly* wrong manner. You should be assuming that your audience are
the smart people that they are, rather than the idiots you are
assuming them to be.
But not with the parts that I quoted from the "spirit of C", and I
repeat them because they were SO crucial in the success of C as a
lower-level language AND are similarly crucial in the excellence of
Python as a higher-level one -- design principles that are *VERY*
rare among computer languages and systems, by the way:

I sure hope that Python doesn't try to emulate C. It's a terrible,
horrible programming language that held back the world of software
development by at least a decade.
Keep the language small and simple.
Provide only one way to do an operation.

It is not true these principles are rare among computer languages --
they are quite common. Most such language (like most computer
languages in general) just never obtained any wide usage.

The reason for Python's wide acceptance isn't because it is
particularly well-designed compared to other programming languages
that had similar goals of simplicity and minimality (it also isn't
poorly designed compared to any of them -- it is on par with the
better ones) -- the reason for its success is that it was in the right
place at the right time, it had a lightweight implementation, was
well-suited to scripting, and it came with batteries included.

|>oug
 

Douglas Alan

Dave Brueck said:
And I disagree that that's necessarily a Good Thing. Good language
design is about finding that balance between general and
specific. It's why I'm not a language designer and it's also why I'm
a Python user.

It's surely the case that there's a balance, but if you assume that
your audience is too stupid to be able to cope with
reduce(add, seq)

instead of

sum(seq)

then you are not finding the proper balance.
Yes, and eval() would clearly be preferred over them all.

And, damned right, eval() should stay in the language!
I completely disagree - I see no evidence of that. We're looking at
the same data but drawing very different conclusions from it.

Well, that's the argument you seem to be making -- that reduce() is
superfluous because a sum() and max() that work on sequences were
added to the language.
Not gonna happen - this _might_ happen if Python was a design-by-committee
language, but it's not.

According to Alex Martelli, max() and min() are likely to be extended
in this fashion. Why not sum() next?
Not at all - I cut my teeth on 6502 assembly and there is plenty
that I still find clean, simple, and elegant about it, but it's
horrible to program in.

I think we can both agree that not all of programming language design can
be crammed into a little mantra!
To you, perhaps. Not me, and not a lot of other people.

Well, perhaps you can explain your confusion to me? What could
possibly be unintuitive about a function that is just like sum(), yet
it allows you to specify the addition operation that you want to use?

Of course, you can get the same effect by defining a class with your
own special __add__ operator, and then encapsulating all the objects
you want to add in this new class, and then using sum(), but that's a
rather high overhead way to accomplish the same thing that reduce()
lets you do very easily.
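(A sketch of the contrast being drawn, with illustrative names of my own choosing -- pick_larger and Larger are not from any library:

```python
from functools import reduce

# The reduce() way: supply your own "addition" directly.
# Here the custom operation keeps the larger operand, so the
# fold computes the maximum of the sequence.
def pick_larger(a, b):
    return a if a >= b else b

nums = [3, 7, 2]
assert reduce(pick_larger, nums) == 7

# The higher-overhead way described above: wrap every value in a
# class whose __add__ applies the custom operation, then use sum().
class Larger:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return Larger(pick_larger(self.value, other.value))

# sum() needs a start value compatible with Larger, hence the
# explicit identity element instead of the default 0.
assert sum((Larger(n) for n in nums), Larger(float("-inf"))).value == 7
```

Both produce the same result; the wrapper-class version just forces you to invent a type and an identity element to say what one function argument says to reduce().)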
I *like* the fact that there is someone making subjective
judgement calls, even if it means I sometimes don't get my every wish.
Likewise.

A good programming language is not the natural by-product of a series of
purely objective tests.
Doubtful, and if it were universally true, it would weaken your point
because many people still find it a foreign or awkward concept.

I doubt that anyone who has put any thought into it finds it a foreign
concept. If they do, it's just because they have a knee-jerk reaction
and *want* to find it a foreign concept.

If you can cope with modular arithmetic, you can cope with the idea of
allowing people to sum numbers with their own addition operation!
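(The modular-arithmetic case spells out in one line; the modulus 7 here is just an arbitrary illustration:

```python
from functools import reduce

MOD = 7  # illustrative modulus
nums = [5, 6, 4]

# "Summing numbers with their own addition operation":
# ordinary addition, reduced mod 7 at every step.
# 5+6 = 11 -> 4 (mod 7); 4+4 = 8 -> 1 (mod 7)
assert reduce(lambda a, b: (a + b) % MOD, nums) == 1
```

)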
Besides, whether or not something is taught in a CS program is a
really poor reason for doing anything.

No, it isn't. CS is there for a reason, and one should not ignore the
knowledge it contains. That doesn't mean that one should feel
compelled to repeat the mistakes of history. But fear of that is no
reason not to accept its successes. Those who don't, end up inventing
languages like Perl.

|>oug
 
A

Andrew Dalke

Douglas Alan:
Then you weren't taught Computer Science -- you were taught Fortran
programming. Computer Science teaches general concepts, not specific
languages.

I agree with Alex on this. I got a BS in CS but didn't learn about
lambda, reduce, map, and other aspects of functional programming
until years later, and it still took some effort to understand it.
(Granted, I was learning on my own at that point.)

But I well knew what 'sum' did.

Was I not taught Computer Science? I thought I did pretty well
on the theoretical aspects (state machines, automata, discrete math,
algorithms and data structures). Perhaps my school was amiss in
leaving it out of the programming languages course, and for
teaching its courses primarily in Pascal. In any case, it contradicts
your assertion that anyone who has studied CS knows what
reduce does and how its useful.
Then your children were done a great disservice by receiving a poor
education. (Assuming, that is, that they wanted to learn Computer
Science, and not Programming in Pascal or Programming in C.)

Strangely enough, I didn't see an entry for 'functional programming'
in Knuth's "The Art of Computer Programming" -- but that's just
programming. ;)
But if you end up going and removing
elegant features understood by anyone who has studied Computer Science
because you think your audience is too dumb to make a slight leap from
the specific to the general that can be explained on one simple
sentence, then you are making those trade-off decisions in the
*utterly* wrong manner. You should be assuming that your audience are
the smart people that they are, rather than the idiots you are
assuming them to be.

Your predicate (that it's understood by anyone who has studied
CS) is false so your argument is moot. In addition, I deal with a
lot of people who program but didn't study CS. And I rarely
use reduce in my code (even rarer now that 'sum' exists) so would
not miss its exclusion or its transfer from builtins to a module.

Andrew
(e-mail address removed)
 
