reduce() anomaly?

  • Thread starter Stephen C. Waterbury

Douglas Alan

If the sequence is carefully randomized, yes. If the sequence has
any semblance of pre-existing order, the timsort is amazingly good
at exploiting it, so that in many real-world cases it DOES run as
O(N).

C'mon -- to make robust programs you have to assume the worst-case
scenario for your data, not the best case. I certainly don't want to
write a program that runs quickly most of the time and then for opaque
reasons slows to a crawl occasionally. I want it to either run
quickly all of the time or run really slowly all of the time (so that
I can then figure out what is wrong and fix it).
Me too! That's why I'd like to make SURE that some benighted soul
cannot code:
onebigstring = reduce(str.__add__, lotsofstrings)
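
(For concreteness, a minimal sketch of the worry, assuming CPython's
immutable strings: each + copies the whole accumulated prefix, so the
reduce() above does O(N**2) work in total, while str.join is the
linear-time idiom. reduce() was a builtin in the Python of this
thread; it lives in functools today.)

from functools import reduce

lotsofstrings = ["spam"] * 10000
quadratic = reduce(str.__add__, lotsofstrings)  # k-th step copies ~4*k chars
linear = "".join(lotsofstrings)                 # total length computed once
assert quadratic == linear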

The idea of aiming a language at trying to prevent people from doing
stupid things is just inane, if you ask me. It's not just inane,
it's offensive to my concept of human ability and creativity. Let
people make their mistakes, and then let them learn from them. Make a
programming language to be a fluid and natural medium for expressing
their concepts, not a straitjacket for citing canon in the
orthodox manner.

Furthermore, by your argument, we have to get rid of loops, since
an obvious way of appending strings is:

result = ""
for s in strings: result += s
Not at all. Maybe you have totally misunderstood "my proposed
extension"?

You are correct, sorry -- I misunderstood your proposed extension.
But max() and min() still have all the same problems as reduce(), and
so does sum(), since the programmer can provide his own comparison
and addition operations in a user-defined class, and therefore, he can
make precisely the same mistakes and abuses with sum(), min(), and
max() that he can with reduce().
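
(A hedged sketch of this point -- Cat is a hypothetical user-defined
class, and sum() over it shows exactly the quadratic behavior that
reduce(str.__add__, ...) is faulted for:)

class Cat:
    """Hypothetical wrapper whose + concatenates an underlying list."""
    def __init__(self, items):
        self.items = items
    def __add__(self, other):
        # Copies everything accumulated so far: O(len) per step, and
        # therefore O(N**2) for the whole sum(), just as with reduce().
        return Cat(self.items + other.items)

total = sum([Cat([1]), Cat([2]), Cat([3])], Cat([]))
assert total.items == [1, 2, 3]
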
So, you're claiming that ALL people who were defending 'reduce' by
posting use cases which DID "abuse this generality" are
unreasonable?

In this small regard, at least, yes. So, here reduce() has granted
them the opportunity to learn from their mistakes and become better
programmers.
However, if you agree with Paul Graham's theories on language
design, you should be consistent, and use Lisp.

I don't agree with *everything* that anyone says. But if there were a
version of Lisp that were as tuned for scripting as Python is, as
portable as Python is, came with as many batteries installed, and had
anywhere near as large a user-base, I probably *would* be using Lisp.
But there isn't, so I don't.

And Python suits me fine. But if it continues to be bloated with a
large number of special-purpose features, rather than a small number of
general and expressive features, there may come a time when Python
will no longer suit me.
If you consider Python to be preferable, then there must be some
point on which you disagree with him. In my case, I would put
"simplicity vs generality" issues as the crux of my own
disagreements with Dr. Graham.

Bloating the language with lots of special-purpose features does not
match my idea of simplicity. To the extent that I have succeeded in
this world, it has always been by understanding how to simplify things
by moving to a more general model, meaning that I have to memorize and
understand less. Memorizing lots of detail is not something I am
particularly good at, and is one of the reasons why I dislike Perl so
much. Who can remember all that stuff in Perl? Certainly not I. I
suppose some people can, but this is why *I* prefer Python -- there is
much less to remember.

Apparently you would have it so that for every common task you might
want to do there is one "right" way to do it that you have to
remember, and the language is packed to the brim with special features
to support each of these common tasks. That's not simple! That's a
nightmare of memorizing special cases, and if that were the future of
Python, then that future would be little better than Perl.
Yet you want reduce to keep accepting ANY callable that takes two
arguments as its first argument, differently from APL's / (which does
NOT accept arbitrary functions on its left);

That's because I believe that there should be little distinction
between features built into the language and the features that users
can add to it themselves. This is one of the primary benefits of
object-oriented languages -- they allow the user to add new data types
that are as facile to use as the built-in data types.
and you claimed that reduce could be removed if add, mul, etc, would
accept arbitrary numbers of arguments. This set of stances is not
self-consistent.

Either solution is fine with me. I just don't think that addition
should be placed on a pedestal above other operations. This means that
you have to remember that addition is different from all the other
operations, and then when you want to multiply a bunch of numbers
together, or xor them together, for example, you use a different
idiom, and if you haven't remembered that addition has been placed on
this pedestal, you become frustrated when you can't find the
equivalent of sum() for multiplication or xor in the manual.
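
(The uniform idiom he means, sketched with the operator module -- one
spelling covers addition, multiplication, and xor alike, so no
per-operation builtin has to be memorized:)

from functools import reduce   # a builtin in the Python of this thread
import operator

nums = [3, 5, 7]
assert reduce(operator.add, nums) == 15    # what sum() special-cases
assert reduce(operator.mul, nums) == 105   # no product() builtin needed
assert reduce(operator.xor, nums) == 1     # no xor-sum builtin needed
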
sum(sequence) is the obviously right way to sum the numbers that are
the items of sequence. If that maps to add.reduce(sequence), no problem;
nobody in their right mind would claim the latter as "the one obvious
way", exactly because it IS quite un-obvious.

It's quite obvious to me. As is a loop.
The point is that the primary meaning of "reduce" is "diminish", and
when you're summing (positive:) numbers you are not diminishing
anything whatsoever

Of course you are: You are reducing a bunch of numbers down to one
number.
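
(Concretely, reduce() folds a sequence pairwise down to a single
value, which is all the name claims:)

from functools import reduce   # a builtin in the Python of this thread
from operator import add

# reduce(f, [a, b, c, d]) evaluates as f(f(f(a, b), c), d):
assert reduce(add, [1, 2, 3, 4]) == add(add(add(1, 2), 3), 4) == 10
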
Got any relevant experience teaching Python? I have plenty and I
have never met ANY case of the "trouble" you mention.

Yes, I taught a seminar on Python, and I didn't feel it necessary to
teach either sum() or reduce(). I taught loops, and I feel confident
that by the time a student is ready for sum(), they are ready for
reduce().
I think you're wrong. "reduce dimensionality of a multi-dimensional
array by 1 by operating along one axis" is one such meaning, but there
are many others. For example, the second Google hit for "reduce
function" gives me:

That's a specialized meaning of "reduce" in a specific application
domain, not a function in a general-purpose programming language.
where 'reduce' applies to rewriting for multi-dot grammars, and
the 5th hit is

which uses a much more complicated generalization:

It still means the same thing that reduce() typically means. They've
just generalized it further. Some language might generalize sum()
further than you have in Python. That wouldn't mean that it no
longer meant the same thing.
while http://csdl.computer.org/comp/trans/tp/1993/04/i0364abs.htm
deals with "the derivation of general methods for the L/sub 2/
approximation of signals by polynomial splines" and defines REDUCE
as "prefilter and down-sampler" (which is exactly as I might expect
it to be defined in any language dealing mostly with signal
processing, of course).

Again a specialized domain.
Designing an over-general approach, and "fixing it in the docs" by
telling people not to use 90% of the generality they so obviously
get, is not a fully satisfactory solution. Add in the caveats about
not using reduce(str.__add__, manystrings), etc, and any reasonable
observer would agree that reduce had better be redesigned.

You have to educate people not to do stupid things with loops and sum()
too. I can't see this as much of an argument.
Again, I commend APL's approach, also seen with more generality in
Numeric (in APL you're stuck with the existing operator on the left
of + -- in Numeric you can, in theory, write your own ufuncs), as
saner. While not quite as advisable, allowing callables such as
operator.add to take multiple arguments would afford a similarly
_correctly-limited generality_ effect. reduce + a zillion warnings
about not using most of its potential is just an unsatisfactory
combination.
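
(A hedged sketch of the Numeric-style design being commended, in the
modern numpy spelling: reduce is a method on each ufunc, so its
generality is bounded by the set of ufuncs -- an arbitrary Python
callable must first be packaged via np.frompyfunc rather than being
passed in directly:)

import numpy as np

a = np.array([3, 5, 7])
assert np.add.reduce(a) == 15        # this design's equivalent of sum()
assert np.multiply.reduce(a) == 105  # same pattern, no new builtin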

You hardly need a zillion warnings. A couple examples will suffice.

|>oug
 

Jay O'Connor

Jumping into this a bit late...

Douglas said:
C'mon -- to make robust programs you have to assume the worst-case
scenario for your data, not the best case. I certainly don't want to
write a program that runs quickly most of the time and then for opaque
reasons slows to a crawl occasionally. I want it to either run
quickly all of the time or run really slowly all of the time (so that
I can then figure out what is wrong and fix it).

I think that's a bit naive. You really need to understand what the
usage pattern of your data is going to be like. If upstream data is
being validity checked by the UI, for example, then you can generally
assume that most of the data is valid and you code assuming that most of
the data will be good. If a pathological case happens that slows the
system down but that case is likely to happen only once a month, it may
be worth it. However, if 50% of your data is likely to be bad, then
that would be unacceptable.
The idea of aiming a language at trying to prevent people from doing
stupid things is just inane, if you ask me. It's not just inane,
it's offensive to my concept of human ability and creativity. Let
people make their mistakes, and then let them learn from them. Make a
programming language to be a fluid and natural medium for expressing
their concepts, not a straitjacket for citing canon in the
orthodox manner.
I agree completely. One reason I like Smalltalk and Python is that they
keep track of the mundane book-keeping, but then get out of my way and
don't try to limit me too much.
And Python suits me fine. But if it continues to be bloated with a
large number of special-purpose features, rather than a small number of
general and expressive features, there may come a time when Python
will no longer suit me.

This is something I'm watching as well. I use Python where Smalltalk is
inappropriate. One thing I like about Smalltalk is that it has a small
number of powerful general principles that it follows extensively. This
means there are not a lot of surprises and not a lot of 'language rules'
to keep in mind when writing my code or looking at others. What first
attracted me to Python was a lot of the same thing; simple syntax and
some simple concepts used throughout. To me there is a difference
between adding to a language and extending a language along its own
principles. As Python grows and matures, if it solidifies and extends
along its natural principles, I will be happy. If it just adds new
stuff on, I won't be as happy. I think list comprehensions were the
first thing I saw that made me nervous as to how the language was going
to grow.
Bloating the language with lots of special-purpose features does not
match my idea of simplicity. To the extent that I have succeeded in
this world, it has always been by understanding how to simplify things
by moving to a more general model, meaning that I have to memorize and
understand less. Memorizing lots of detail is not something I am
particularly good at, and is one of the reasons why I dislike Perl so
much. Who can remember all that stuff in Perl? Certainly not I. I
suppose some people can, but this is why *I* prefer Python -- there is
much less to remember.

Ditto
 

BW Glitch

Douglas said:
C'mon -- to make robust programs you have to assume the worst-case
scenario for your data, not the best case. I certainly don't want to
write a program that runs quickly most of the time and then for opaque
reasons slows to a crawl occasionally. I want it to either run
quickly all of the time or run really slowly all of the time (so that
I can then figure out what is wrong and fix it).

In theory, I'd agree with you Douglas. But IRL, I agree with Alex. If I
have to choose between two algorithms that do almost the same, but one
works on a special case (that's very common to my range) and the other
works in general, I'd go with the special case. There is no compelling
reason for getting into the trouble of a general approach if I can do it
correctly with a simpler, special case.

One example. I once wrote a small program for my graphic calculator to
analyze (rather) simple electrical networks. To make the long story
short, I had two approaches: implement Gaussian Elimination "purely" (no
modifications to take into account some nasty problems) or implement it
with scaled partial pivoting. Sure, the partial pivoting is _much_
better than no pivoting at all, but for the type of electrical networks
I was going to analyze, there was no need to. All the fuss about
pivoting is to prevent a small value from wreaking havoc in the calculations.
In this *specific* case (electric networks with no dependent sources),
it was impossible for this situation to ever happen.

Results? Reliable results in the specific range of working, which is
what I wanted. :D
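
(Not the calculator code, but a minimal Python sketch of the trade-off
being described; the pivot flag toggles scaled partial pivoting on and
off for solving A x = b:)

def gauss_solve(A, b, pivot=True):
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    s = [max(abs(v) for v in row) for row in A]  # per-row scale factors
    for k in range(n - 1):
        if pivot:
            # Pick the row whose pivot is largest relative to its own
            # scale, so a tiny diagonal entry cannot wreak havoc.
            p = max(range(k, n), key=lambda i: abs(A[i][k]) / s[i])
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            s[k], s[p] = s[p], s[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# For a diagonally dominant admittance matrix (no dependent sources),
# pivot=False already gives reliable answers, which was the point:
print(gauss_solve([[4.0, 1.0], [2.0, 5.0]], [9.0, 16.0], pivot=False))
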
The idea of aiming a language at trying to prevent people from doing
stupid things is just inane, if you ask me. It's not just inane,
it's offensive to my concept of human ability and creativity. Let
people make their mistakes, and then let them learn from them. Make a
programming language to be a fluid and natural medium for expressing
their concepts, not a straitjacket for citing canon in the
orthodox manner.

It's not about restraining someone from doing something. It's about
making it possible to *read* the "f(.)+" code. Human ability and
creativity are not compromised when restrictions are made. In any case,
try to program an MCU (micro-controller unit). Python's restrictions are
nothing compared with what you have to deal with in an MCU.
Furthermore, by your argument, we have to get rid of loops, since
an obvious way of appending strings is:

result = ""
for s in strings: result += s

By your logic, fly swatters should be banned because shotguns are more
general. :S It's a matter of design decisions. Whatever the designer
thinks is better, so be it (in this case, GvR).

At least, in my CS introductory class, one of the things we learned was
that programming languages could be extremely easy to read, but very
hard to write and vice versa. These design decisions *must* be taken by
someone *and* they should stick to them. Decisions can be changed, but
that must be done with rather large quantities of caution.
You are correct, sorry -- I misunderstood your proposed extension.
But max() and min() still have all the same problems as reduce(), and
so does sum(), since the programmer can provide his own comparison
and addition operations in a user-defined class, and therefore, he can
make precisely the same mistakes and abuses with sum(), min(), and
max() that he can with reduce().

It might still be abused, but not as much as reduce(). But considering
the alternative (namely, reduce()), it's *much* better because it shifts
the problem to someone else. We are consenting adults, y'know.
In this small regard, at least, yes. So, here reduce() has granted
them the opportunity to learn from their mistakes and become better
programmers.

Then reduce() shouldn't be as general as it is in the first place.
I don't agree with *everything* that anyone says. But if there were a
version of Lisp that were as tuned for scripting as Python is, as
portable as Python is, came with as many batteries installed, and had
anywhere near as large a user-base, I probably *would* be using Lisp.
But there isn't, so I don't.

And Python suits me fine. But if it continues to be bloated with a
large number of special-purpose features, rather than a small number of
general and expressive features, there may come a time when Python
will no longer suit me.

reduce() ... expressive? LOL. I'll grant you it's (over)general, but
expressive? Hardly. It's not a bad thing, but, as Martelli noted, it is
overgeneralized. Not everyone understands the concept off the bat (as you
_love_ to claim) and not everyone finds it useful.
Bloating the language with lots of special-purpose features does not
match my idea of simplicity. To the extent that I have succeeded in
this world, it has always been by understanding how to simplify things
by moving to a more general model, meaning that I have to memorize and
understand less. Memorizing lots of detail is not something I am
particularly good at, and is one of the reasons why I dislike Perl so
much. Who can remember all that stuff in Perl? Certainly not I. I
suppose some people can, but this is why *I* prefer Python -- there is
much less to remember.

Then you should understand some of the design decisions reached (I said,
understand, not agree).

What I understand by simplicity is that I should not memorize anything
at all, if I read the code. That's one of the things I absolutely hate
about LISP/Scheme. Especially when it comes to debugging the darn code.
Apparently you would have it so that for every common task you might
want to do there is one "right" way to do it that you have to
remember, and the language is packed to the brim with special features
to support each of these common tasks. That's not simple! That's a
nightmare of memorizing special cases, and if that were the future of
Python, then that future would be little better than Perl.

As I have said, these are design decisions GvR reached or he should be
taking in the future.

Now, what's so hard about sum()? Can't you get what it does by reading
the function name? The hardest part of _any_ software project is not
writing it, but maintaining it. IIRC (and if the figures aren't correct,
please someone correct me), in a complete software project life cycle,
more than 70% of the total budget is spent on maintenance. So you will
find that most companies standardize many things that can be
faster/better/"obviously" done some other way, e.g., x ^= x in C/C++.
That's because I believe that there should be little distinction
between features built into the language and the features that users
can add to it themselves. This is one of the primary benefits of
object-oriented languages -- they allow the user to add new data types
that are as facile to use as the built-in data types.

_Then_ let them *build* new classes to use sum(), min(), max(), etc.
This functionality is better suited for a class/object in an OO
approach anyway, *not* a function.
Either solution is fine with me. I just don't think that addition
should be placed on a pedestal above other operations. This means that
you have to remember that addition is different from all the other
operations, and then when you want to multiply a bunch of numbers
together, or xor them together, for example, you use a different
idiom, and if you haven't remembered that addition has been placed on
this pedestal, you become frustrated when you can't find the
equivalent of sum() for multiplication or xor in the manual.

Have you ever programmed in assembly? It's worth a look...

(In case someone's wondering, addition is the only operation available
in many MPU/MCUs. Multiplication is heavily expensive in those that
support it.)
It's quite obvious to me. As is a loop.

Prosecution rests.
Of course you are: You are reducing a bunch of numbers down to one
number.

That makes sense if you are in a math-related area. But for a layperson,
that is nonsense.
Yes, I taught a seminar on Python, and I didn't feel it necessary to
teach either sum() or reduce(). I taught loops, and I feel confident
that by the time a student is ready for sum(), they are ready for
reduce().

<sarcasm>But why didn't you teach reduce()? If it were so simple, it
would have been a must in the seminar.</sarcasm>

Now in a more serious note, reduce() is not an easy concept to grasp.
That's why many people don't want it in the language. The middle land
obviously is to reduce the functionality of reduce().
That's a specialized meaning of "reduce" in a specific application
domain, not a function in a general-purpose programming language.


It still means the same thing that reduce() typically means. They've
just generalized it further. Some language might generalize sum()
further than you have in Python. That wouldn't mean that it no
longer meant the same thing.


Again a specialized domain.

?-|

You mention here "general-purpose programming". The other languages in
which I have done something more than a small code snippet (C/C++, Java
and PHP) lack a reduce()-like function. And lacking it hasn't hurt them.
For something like reduce() to be "general", I ~think~ it should appear
in one of the mainstream languages. For example, regex is "general" in
programming languages either because libraries for it have been added
to existing languages (C/C++), or because it has been incorporated into
rising languages (Perl, PHP).
You have to educate people not to do stupid things with loops and sum()
too. I can't see this as much of an argument.

After reading the whole message, how do you plan to do *that*? The only
way to effectively do this is by warning the user explicitly *and*
limiting the power the functionality has. As Martelli said, APL and
Numeric do have an equivalent to reduce(), but it's limited to a range
of functions. Doing so ensures that abuse can be contained.

And remember, we are talking about the real world.
You hardly need a zillion warnings. A couple examples will suffice.

I'd rather have the warnings. It's much better than me saying "How
funny, this shouldn't do this..." later. Why? Because you can't predict
what people will actually do. Pretending that most people will act like
you is insane.

Two last things:

1) Do you have any extensive experience with C/C++? (By extensive, I
mean a small-medium to medium project) These languages taught me the
value of -Wall. There are way too many bugs lurking in the warnings to
just ignore them.

2) Do you have any experience in the design process?

--
Andres Rosado

-----BEGIN TF FAN CODE BLOCK-----
G+++ G1 G2+ BW++++ MW++ BM+ Rid+ Arm-- FR+ FW-
#3 D+ ADA N++ W OQP MUSH- BC- CN++ OM P75
-----END TF FAN CODE BLOCK-----

"Greed and self-interest, eh? Excellent! I discern a protege!"
-- Starscream to Blackarachnia, "Possession"
 

Douglas Alan

BW Glitch said:
Douglas Alan wrote:
In theory, I'd agree with you Douglas. But IRL, I agree with
Alex. If I have to choose between two algorithms that do almost the
same, but one works on a special case (that's very common to my
range) and the other works in general, I'd go with the special
case. There is no compelling reason for getting into the trouble of
a general approach if I can do it correctly with a simpler, special
case.

When people assert that

reduce(add, seq)

is so much harder to use, read, or understand than

sum(seq)

I find myself incredulous. People are making such claims either
because they are sniffing the fumes of their own righteous argument,
or because they are living on a different planet from me. On my
planet, reduce() is trivial to understand and it often comes in handy.
I find it worrisome that a number of vocal people seem to be living on
another planet (or could use a bit of fresh air), since if they end up
having any significant influence on the future of Python, then, from
where I am standing, Python will be aimed at aliens. While this may
be fine and good for aliens, I really wish to use a language designed
for natives of my world.

Reasonable people on my world typically seem to realize that sum() is
just not useful enough to belong as a built-in in a
general-purpose programming language that aims for simplicity. This
is why sum() occurs rather infrequently as a built-in in general
purpose programming languages. sum(), however, should be in the
dictionary as a quintessential example of the word "bloat". If you
agree that sum() should have been added to the language as a built-in,
then you want Python to be a bloated language, whether you think you
do, or not. It is arguable whether reduce() is useful enough that it
belongs as a built-in, but it has many more uses than sum(), and
therefore, there's a case for reduce() being a built-in. reduce() may
carry the weight of its inclusion, but sum() certainly does not.
One example. I once wrote a small program for my graphic calculator to
analyze (rather) simple electrical networks. To make the long story
short, I had two approaches: implement Gaussian Elimination "purely" (no
modifications to take into account some nasty problems) or implement it
with scaled partial pivoting. Sure, the partial pivoting is _much_
better than no pivoting at all, but for the type of electrical networks
I was going to analyze, there was no need to. All the fuss about
pivoting is to prevent a small value from wreaking havoc in the calculations.
In this *specific* case (electric networks with no dependent sources),
it was impossible for this situation to ever happen.
Results? Reliable results in the specific range of working, which is
what I wanted. :D

You weren't designing a general purpose programming language -- you
were designing a tool to meet your own idiosyncratic needs, in which
case, you could have it tap dance and read Esperanto, but only on
second Tuesdays, if you wanted it to, and no one need second guess
you. But that's not the situation that we're talking about.
[Alex Martelli:] Me too! That's why I'd like to make SURE that
some benighted soul cannot code:
onebigstring = reduce(str.__add__, lotsofstrings)
The idea of aiming a language at trying to prevent people from
doing stupid things is just inane, if you ask me. It's not just
inane, it's offensive to my concept of human ability and
creativity. Let people make their mistakes, and then let them
learn from them. Make a programming language to be a fluid and
natural medium for expressing their concepts, not a straitjacket
for citing canon in the orthodox manner.
It's not about restraining someone from doing something. It's about
making it possible to *read* the "f(.)+" code.

A language should make it *easy* to write readable code; it should not
strive to make it *impossible* to write unreadable code. There's no
way a language could succeed at that second goal anyway, and any
serious attempts to accomplish that impossible goal would make a
language less expressive, more bloated, and would probably even end up
making the typical program less readable as a consequence of the
language being less expressive and more complex.
Human ability and creativity are not compromised when restrictions
are made.

That would depend on the restrictions that you make. Apparently you
would have me not use reduce(), which would compromise my ability to
code in the clear and readable way that I enjoy.
In any case, try to program an MCU (micro-controller unit). Python's
restrictions are nothing compared with what you have to deal with in an
MCU.

I'm not sure what point you have in mind. I have programmed MCU's,
but I certainly didn't do so because I felt that it was a particularly
good way to express the software I wished to compose.
By your logic, fly swatters should be banned because shotguns are
more general.

That's not my logic at all. In the core of the language there should
be neither shotguns nor fly-swatters, but rather elegant, orthogonal
tools (like the ability to manufacture interchangeable parts according
to specs) that allow one to easily build either shotguns or
fly-swatters. Perhaps you will notice that neither "shotgun" nor
"fly-swatter" is a non-compound (i.e., built-in) word in the English
language.
:S It's a matter of design decisions. Whatever the
designer thinks is better, so be it (in this case, GvR).

Of course it is a matter of design decisions, but that doesn't imply
that all of the design decisions are being made correctly.
At least, in my CS introductory class, one of the things we learned
was that programming languages could be extremely easy to read, but
very hard to write and vice versa.

Or they can be made both easy to read *and* easy to write.
It might still be abused, but not as much as reduce(). But
considering the alternative (namely, reduce()), it's *much* better
because it shifts the problem to someone else. We are consenting
adults, y'know.

Yes, we're consenting adults, and we can all learn to use reduce()
properly, just as we can all learn to use class signatures properly.
Then reduce() shouldn't be as general as it is in the first place.

By that reasoning, you would have to remove all the features from the
language that people often use incorrectly. And that would be, ummm,
*all* of them.
reduce() ... expressive? LOL. I'll grant you it's (over)general, but
expressive? Hardly.

You clearly have something different in mind by the word "expressive",
but I have no idea what. I can and do readily express things that I
wish to do with reduce().
It's not a bad thing, but, as Martelli noted, it is
overgeneralized. Not everyone understands the concept off the bat (as
you _love_ to claim) and not everyone finds it useful.

If they don't find it useful, they don't have to use it. Many people
clearly like it, which is why it finds itself in many programming
languages.
What I understand by simplicity is that I should not memorize anything
at all, if I read the code.

There's no way that you could make a programming language where you
don't have to memorize anything at all. That's ludicrous. To use any
programming language you have to spend time and effort to develop some
amount of expertise in it -- it's just a question of how much bang you
get for your learning and memorization buck.
That's one of the things I absolutely hate
about LISP/Scheme.

You have to remember a hell of lot more to understand Python code than
you do to understand Scheme code. (This is okay, because Python does
more than Scheme.) You just haven't put in the same kind of effort
into Scheme that you put into Python.

For instance, no one could argue with a straight face that

seq[1:]

is easier to learn, remember, or read than

(tail seq)
Now, what's so hard about sum()?

There's nothing hard about sum() -- it's just unnecessary bloat that
doesn't do enough to deserve being put into the language core. If we
put sum() in the language core, why not quadratic_formula() and
newtons_method(), and play_chess()? I'd use all of those more
frequently than I would use sum(). The language core should only have
stuff that gives you a lot of bang for the buck. sum() doesn't.
_Then_ let them *build* new classes to use sum(), min(), max(),
etc. This functionality is better suited for a class/object in an
OO approach anyway, *not* a function.

No, making new classes is a heavy-weight kind of thing, and every
class you define in a program also should pay for its own weight.
Requiring the programmer to define new classes to do very simple things
is a bad idea.
Have you ever programmed in assembly? It's worth a look...

Yes, I have. And I've written MCU code for microcontrollers that I
designed and built myself out of adders and other TTL chips and lots
of little wires. I've even disassembled programs and patched the
binary to fix bugs for which I didn't have the source code.
(In case someone's wondering, addition is the only operation available
in many MPU/MCUs. Multiplication is heavily expensive in those that
support it.)

If your point is that high-level languages should be like extremely
low level languages, then I don't think that you will find many people
to agree with that.
That makes sense if you are in a math-related area. But for a layperson,
that is nonsense.

It wasn't nonsense to me when I learned this in tenth grade -- it made
perfect sense. Was I some sort of math expert? Not that I recall,
unless you consider understanding algebra and geometry to be out of
the realm of the layperson.
<sarcasm>But why didn't you teach reduce()? If it were so simple, it
would have been a must in the seminar.</sarcasm>

I did teach lambda in the seminar and no one seemed to have any
difficulty with it. I really couldn't fit in the entire language in
one three-hour seminar. There are lots of other things in the
language more important than reduce(), but *all* of them are more
important than sum().
Now in a more serious note, reduce() is not an easy concept to
grasp. That's why many people don't want it in the language. The
middle land obviously is to reduce the functionality of reduce().

That's silly. Reducing the functionality of reduce() would make it
*harder* to explain, learn, and remember, because it would have to be
described by a bunch of special cases. As it is, reduce() is very
orthogonal and regular, which translates into conceptual simplicity.
You mention here "general-purpose programming". The other languages
that I have done something more than a small code snippet (C/C++, Java
and PHP) lack a reduce()-like function.

Lots of languages don't provide reduce() and lots of languages do. Few
provide sum(). Higher-order functions such as reduce() are
problematic in statically typed languages such as C, C++, or Java,
which may go a long way towards explaining why none of them include
it. Neither C, C++, nor Java provides sum() either, though PHP provides
array_sum(). But PHP has a huge number of built-in functions, and I
don't think that Python wishes to go in that direction.
After reading the whole message, how do you plan to do *that*?

You put a line in the manual saying "As a rule of thumb, reduce()
should only be passed functions that do not have side-effects."
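
(A hedged illustration of what such a line would bless and what it
would warn against -- add_and_log below is a made-up example of a
side-effecting callable:)

from functools import reduce
import operator

ok = reduce(operator.xor, [0x12, 0x34, 0x56, 0x78])  # pure: fine

trace = []
def add_and_log(a, b):
    trace.append((a, b))   # side effect on external state: the kind
    return a + b           # of use the rule of thumb warns against
dubious = reduce(add_and_log, [1, 2, 3, 4])
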
The only way to effectively do this is by warning the user
explicitly *and* limiting the power the functionality has.

Is this the only way to educate people not to abuse loops?

I didn't think so.
As Martelli said, APL and Numeric do have an equivalent to
reduce(), but it's limited to a range of functions. Doing so ensures
that abuse can be contained.

Doing so would make the language harder to document, remember, and
implement.
I'd rather have the warnings. It's much better than me saying "How
funny, this shouldn't do this..." later. Why? Because you can't
predict what people will actually do. Pretending that most people
will act like you is insane.

If the day comes where Python starts telling me that I used reduce()
incorrectly (for code that I have that currently works fine), then
that is the last day that I would ever use Python.
Two last things:
1) Do you have any extensive experience with C/C++? (By extensive, I
mean a small-medium to medium project)

Yes, quite extensive. On large projects, in fact.
These languages taught me the value of -Wall. There are way too many
bugs lurking in the warnings to just ignore them.

I don't use C when I can avoid it because I much prefer C++. C++'s
strong static type-checking is very helpful in eliminating many bugs
before the code will even compile. I find -Wall to be useless in the
C++ compiler I typically use because it will complain about all sorts
of perfectly okay things. But that's okay, because usually once I get
a program I write in C++ to compile, it typically has very few bugs.
2) Do you have any experience in the design process?

Yes, I design and implement software for a living.

|>oug
 

BW Glitch

Douglas said:
When people assert that

reduce(add, seq)

is so much harder to use, read, or understand than

sum(seq)

I find myself incredulous. People are making such claims either
because they are sniffing the fumes of their own righteous argument,
or because they are living on a different planet from me. On my
planet, reduce() is trivial to understand and it often comes in handy.
I find it worrisome that a number of vocal people seem to be living on
another planet (or could use a bit of fresh air), since if they end up
having any significant influence on the future of Python, then, from
where I am standing, Python will be aimed at aliens. While this may
be fine and good for aliens, I really wish to use a language designed
for natives of my world.

My question goes again, what is so difficult about sum()? You say you
find yourself incredulous that people find reduce() much harder than
sum(). It is not hard to see why. reduce() does not tell you anything
about what is about to happen, despite your claims that it is easier. It
doesn't offer any advantages over more readable ways of expressing it,
namely loops.

I'm already sick of you saying that reduce() is trivial. It took me well
over a year to understand what it was for and I have yet to find any
*good* use for it. My background? Started in TI Basic on my own, went to
a course in C, learned C++, Matlab and Java on my own, then learned some
Scheme/Prolog for a course and then I went to Python. I might not have
the best background here (ie, Alex Martelli just outshines me in every
single way), but I know my trade.

As I said before, it seems to me that you only want people with your CS
background to do the programming.
Reasonable people on my world typically seem to realize that sum() is
just not useful enough to belong as a built-in in a
general-purpose programming language that aims for simplicity. This
is why sum() occurs rather infrequently as a built-in in general
purpose programming languages. sum(), however, should be in the
dictionary as a quintessential example of the word "bloat". If you
agree that sum() should have been added to the language as a built-in,
then you want Python to be a bloated language, whether you think you
do, or not. It is arguable whether reduce() is useful enough that it
belongs as a built-in, but it has many more uses than sum(), and
therefore, there's a case for reduce() being a built-in. reduce() may
carry the weight of its inclusion, but sum() certainly does not.

The question has come in this thread several times and it is still
unanswered: show a case where reduce() _is_ the obvious way of doing
something. I'm not asking for much, y'know...
You weren't designing a general purpose programming language -- you
were designing a tool to meet your own idiosyncratic needs, in which
case, you could have it tap dance and read Esperanto, but only on
second Tuesdays, if you wanted it to, and no one need second guess
you. But that's not the situation that we're talking about.

Ah, but it is still related, y'know. The problem here was that I _could_
decide to make the app more sturdy by implementing Gaussian Elimination
with scaled partial pivoting because it is the correct way of doing it.
Also, it does happen that the elements in the diagonal in the admittance
matrix are smaller than the sum of the non-diagonal elements of the
corresponding row. This is the reason why scaled partial pivoting is
used in the first place. There were lots of cases that happen in a
circuit with frequency that I decided to ignore because they weren't
relevant for what I needed (namely, dependent sources).

How does this tie to your statement on preparing for the worst case
scenario? Well, this ends up being a design decision, because it might
be irrelevant or difficult to prepare for it. Preparing something for
the worst case scenario is a Good Thing(tm) most of the time, but there
are situations where it's your worst nightmare. If you do program with
the worst case scenario every single time without giving thought to it,
fine by me. But I like to make the decision on how far my software
should go.
[Alex Martelli:] Me too! That's why I'd like to make SURE that
some benighted soul cannot code:
onebigstring = reduce(str.__add__, lotsofstrings)
The idea of aiming a language at trying to prevent people from
doing stupid things is just innane, if you ask me. It's not just
inane, it's offensive to my concept of human ability and
creativity. Let people make their mistakes, and then let them
learn from them. Make a programming language to be a fluid and
natural medium for expressing their concepts, not a straitjacket
for citing canon in the orthodox manner.

It's not about restraining someone from doing something. It's about
making it possible to *read* the "f(.)+" code.


A language should make it *easy* to write readable code; it should not
strive to make it *impossible* to write unreadable code. There's no
way a language could succeed at that second goal anyway, and any
serious attempts to accomplish that impossible goal would make a
language less expressive, more bloated, and would probably even end up
making the typical program less readable as a consequence of the
language being less expressive and more complex.

IIRC again, readability and writability are opposites. If you design
a language to be highly readable, you're going to have a hell of a time
writing it (ie, AppleScript, for example; note: it isn't that bad, just
too wordy). Designing a language with high writability goes the other
way (ie, Perl when everything is used). So it's impossible to have it
both ways, readable and writable.
That would depend on the restrictions that you make. Apparently you
would have me not use reduce(), which would compromise my ability to
code in the clear and readable way that I enjoy.

Did you miss reduce() in C/C++/Assembly?
I'm not sure what point you have in mind. I have programmed MCU's,
but I certainly didn't do so because I felt that it was a particularly
good way to express the software I wished to compose.

My point is that you are complaining about Python's restrictions. In an
MCU, restrictions come in various ways.

And there are times where programming an MCU is the best way to solve a
problem.
That's not my logic at all. In the core of the language there should
be neither shotguns nor fly-swatters, but rather elegant, orthogonal
tools (like the ability to manufacture interchangeable parts according
to specs) that allow one to easily build either shotguns or
fly-swatters. Perhaps you will notice that neither "shotgun" nor
"fly-swatter" is a non-compound (i.e., built-in) word in the English
language.

And which tool do you propose instead of shotguns and fly swatters?
Of course it is a matter of design decisions, but that doesn't imply
that all of the design decisions are being made correctly.

Over all the time I've been doing design, I've learned that there's no
bad design decision if it was reached in a logical way. Now, doing
something just because is a bad design decision.
Or they can be made both easy to read *and* easy to write.

That's impossible, my friend.
Yes, we're consenting adults, and we can all learn to use reduce()
properly, just as we can all learn to use class signatures properly.
Sheez.



By that reasoning, you would have to remove all the features from the
language that people often use incorrectly. And that would be, ummm,
*all* of them.

My time to say I don't see your point. reduce() is way too general for
its own good. Almost every other feature in the language is controlled
in different ways, allowing flexibility and reducing side effects.
You clearly have something different in mind by the word "expressive",
but I have no idea what. I can and do readily express things that I
wish to do with reduce().

For me, expressive means describing something that other people can
understand. The more expressive something is, the less "aftermath"
explaining is needed. In our example at hand, if I have to explain one
function every time to everyone, and another just once in a while, the
latter is more expressive than the former.
If they don't find it useful, they don't have to use it. Many people
clearly like it, which is why it finds itself in many programming
languages.

Please name mainstream languages that do have reduce().
There's no way that you could make a programming language where you
don't have to memorize anything at all. That's ludicrous. To use any
programming language you have to spend time and effort to develop some
amount of expertise in it -- it's just a question of how much bang you
get for your learning and memorization buck.

Obviously, some memorizing is needed. But I don't want to memorize every
single feature just to know what a piece of code is doing.
You have to remember a hell of lot more to understand Python code than
you do to understand Scheme code. (This is okay, because Python does
more than Scheme.) You just haven't put in the same kind of effort
into Scheme that you put into Python.

I tried, but the damn parentheses just got in the way. Getting something
as simple as adding an array the old fashioned way was painful.
For instance, no one could argue with a straight face that

seq[1:]

is easier to learn, remember, or read than

(tail seq)

And I won't.
There's nothing hard about sum() -- it's just unnecessary bloat that
doesn't do enough to deserve being put into the language core. If we
put sum() in the language core, why not quadratic_formula() and
newtons_method(), and play_chess()? I'd use all of those more
frequently than I would use sum(). The language core should only have
stuff that gives you a lot of bang for the buck. sum() doesn't.

Not necessarily. The ternary operator is quite specific (unless you start
abusing it) to the situation and it is included in many languages. So
the rule of thumb generally is to put in the core what makes sense.
Not all the general stuff, but what is useful. reduce() is everything
*but* useful.

It would be interesting to hear why sum() was included in the core, though.
No, making new classes is a heavy-weight kind of thing, and every
class you define in a program also should pay for its own weight.
Requiring the programmer to define new classes to do very simple things
is a bad idea.


sum(), min() and max() are "simple" things in the abstract. However,
they depend on the object. So, in an OO language, these are better
modeled _inside_ an object, which in Python's case is a class.
Yes, I have. And I've written MCU code for microcontrollers that I
designed and built myself out of adders and other TTL chips and lots
of little wires. I've even disassembled programs and patched the
binary to fix bugs for which I didn't have the source code.

I think the reason addition is "above" all other operations is because
every other operation can be defined with addition. But only
subtraction can be used to define addition, if the latter is not defined.
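
(In the repeated-addition sense, at least -- a tiny sketch of how an
add-only MCU might synthesize multiplication, here for non-negative b:)

def mul(a, b):
    # multiplication built from nothing but addition
    result = 0
    for _ in range(b):
        result += a
    return result

assert mul(6, 7) == 42
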
If your point is that high-level languages should be like extremely
low level languages, then I don't think that you will find many people
to agree with that.

My point is that addition can be used to define the other operations.
It wasn't nonsense to me when I learned this in tenth grade -- it made
perfect sense. Was I some sort of math expert? Not that I recall,
unless you consider understanding algebra and geometry to be out of
the realm of the layperson.

I am not talking about you, I'm talking about the average person.
I did teach lambda in the seminar and no one seemed to have any
difficulty with it. I really couldn't fit in the entire language in
one three-hour seminar. There are lots of other things in the
language more important than reduce(), but *all* of them are more
important than sum().




That's silly. Reducing the functionality of reduce() would make it
*harder* to explain, learn, and remember, because it would have to be
described by a bunch of special cases. As it is, reduce() is very
orthogonal and regular, which translates into conceptual simplicity.

Not at all. If the functionality of reduce() is, well, reduced, it would
make it easier to explain, as it would have some real connotation. As
is, reduce() serves no purpose.
Lots of languages don't provide reduce() and lots of languages do.

Either lots of languages do, or lots of languages don't provide
reduce(). These are mutually exclusive.
Few
provide sum().

I agree. But its functionality can be easily implemented in most languages.
Higher-order functions such as reduce() are
problematic in statically typed languages such as C, C++, or Java,
which may go a long way towards explaining why none of them include
it. Neither C, C++, nor Java provides sum() either, though PHP provides
array_sum().

How difficult is it to implement the sum() functionality in any of
these languages? How difficult is it to implement the reduce() functionality?
But PHP has a huge number of built-in functions, and I
don't think that Python wishes to go in that direction.

No, that's not the direction Python should take. However, built-in
functions should be things that make sense (no reduce()). Or are you
going to argue to eliminate the print statement, like in C?
You put a line in the manual saying "As a rule of thumb, reduce()
should only be passed functions that do not have side-effects."

That would work in an ideal world. Please come to the real world.
Is this the only way to educate people not to abuse loops?

I didn't think so.

Me neither. But what other option is there for overgeneralized functions?
Doing so would make the language harder to document, remember, and
implement.

Not in my book. Limiting the power of reduce() will actually make it
more useful.
If the day comes where Python starts telling me that I used reduce()
incorrectly (for code that I have that currently works fine), then
that is the last day that I would ever use Python.

Very well. Your decision.
Yes, quite extensive. On large projects, in fact.




I don't use C when I can avoid it because I much prefer C++. C++'s
strong static type-checking is very helpful in eliminating many bugs
before the code will even compile. I find -Wall to be useless in the
C++ compiler I typically use because it will complain about all sorts
of perfectly okay things. But that's okay, because usually once I get
a program I write in C++ to compile, it typically has very few bugs.

Strange. In my case, -Wall saved me from a *lot* of mistakes that
otherwise would have taken me a lot of time to discover.
Yes, I design and implement software for a living.

I was under the impression you didn't have any. And I still do.

--
Glitch

-----BEGIN TF FAN CODE BLOCK-----
G+++ G1 G2+ BW++++ MW++ BM+ Rid+ Arm-- FR+ FW-
#3 D+ ADA N++ W OQP MUSH- BC- CN++ OM P75
-----END TF FAN CODE BLOCK-----

Strike when the enemy isn't looking.
-- Skywarp (G1)
 

Douglas Alan

BW Glitch said:
My question goes again, what is so difficult about sum()?

Nothing is difficult about sum(). My point is not that it is
difficult, but rather that it is unnecessary bloat, and I don't want to
see Python become a bloated language.
You say you find yourself incredulous that people find reduce() much
harder than sum(). It is not hard to see why. reduce() does not tell
you anything about what is about to happen, despite your claims that
it is easier.

I didn't say that reduce() is easier. I said that it is perfectly
easy and more useful than sum(), and therefore the criticism of bloat
does not necessarily apply (no pun intended) to it, since it is useful
enough, arguably, to not be bloat.
It doesn't offer any advantages over more readable ways of
expressing it, namely loops.

Sure it does. Loops are not particularly easy to read, while reduce()
is perfectly easy to read.

Let's assume that we already understand the function longer, given by

def longer(a, b):
    if len(b) > len(a): return b
    else: return a

Now consider:

(1) reduce(longer, seq)

vs.

(2) longest = ""
    for s in seq: longest = longer(longest, s)


Number 1, I can just read, like I can read English. Number 2, I have
to stop and think about. The reason for this is that there are only
three elements of meaning in the first case, and they compose
naturally, while in the second case there are seven elements of
meaning, three of them repeated twice, making ten elements of meaning.
This is too many elements of meaning composed in a rather complicated
way to just be able to read. It requires additional thought in
addition to the reading.
I'm already sick of you saying that reduce() is trivial.

Well, it is.
It took me well over a year to understand what it was for

That's, no doubt, merely because the documentation for reduce() in the
Python manual is not very good. It does nothing to tell you what
kinds of things you might use it for.
and I have yet to find any *good* use for it.

You already know of one: for summing numbers before sum() was
unfortunately added to the language.

In addition to finding the longest string in a sequence, I've given
you a number of other good uses in previous messages: multiplying
numbers, subtracting numbers, xoring bits (good for use as a very easy
checksum), and'ing bits, or'ing bits, and'ing a list of booleans,
or'ing a list of booleans, and so on.

Personally, I use it mostly when doing the kind of thing demonstrated
above by reduce(longer, seq). I often wish to find the most
{something} element of a sequence, and use reduce() to find it.
As I said before, it seems to me that you only want people with your
CS background to do the programming.

I had no background in CS when I learned reduce() in a minute or two
when I was in tenth grade. All I had was an APL manual, which
explained well in a few sentences (via a few examples) what reduce() is
good for.
The question has come in this thread several times and it is still
unanswered: show a case where reduce() _is_ the obvious way of doing
something. I'm not asking for much, y'know...

I've given on several occasions a number of perfectly good uses for
reduce().

Regarding "*the* obvious way" -- I've said before that there is rarely
*one* obvious way to do anything, though there may be several obvious
ways. Those who think there is only one obvious way are too full of
their own way of thinking.
Ah, but it is still related, y'know. The problem here was that I
_could_ decide to make the app more sturdy by implementing Gaussian
Elimination with scaled partial pivoting because it is the correct way
of doing it.

The issue is not related. There is no reason to make something more
general if no one is ever going to use that extra generality, so you
had no good reason to make your app more general, since *you* didn't
need that extra generality, and *you* were the target audience for
your app. However, Python is a general purpose programming language
where you can't make many assumptions about what people will and won't
need to do with it. People can and do use reduce() to good effect.
And there is no good reason to put sum() into the core of Python
because it just isn't useful enough to be there.
How does this tie to your statement on preparing for the worst case
scenario? Well, this ends up being a design decision, because it
might be irrelevant or difficult to prepare for it. Preparing
something for the worst case scenario is a Good Thing(tm) most of the
time, but there are situations where it's your worst nightmare. If
you do program with the worst case scenario every single time
without giving thought to it, fine by me. But I like to make the
decision on how far my software should go.

I program for the worst case scenario that is important to handle
gracefully. (For instance, the filesystem filling up, for an app
that is meant for production use.) I don't worry about handling worst
case scenarios that are not important to handle gracefully. (E.g., an
atom bomb has been detonated across the street.) Knowing the
difference is an art, but if you think that a general-purpose max()
routine should not handle gracefully the situation where the input
data is randomly ordered, then I maintain that you are not familiar
enough with your art.
IIRC again, readability and writability are opposites.

No they are not.

There may be some tension between the two at times, but they are
*nothing* like opposites.
If you design a language to be highly readable, you're going to have
a hell of a time writing it (ie, AppleScript, for example; note: it
isn't that bad, just too wordy).

Python is both highly readable and highly writable.
Designing a language with high writability goes the other way (ie,
Perl when everything is used). So it's impossible to have it both ways,
readable and writable.

To me, Perl is neither readable nor writable.
Did you miss reduce() in C/C++/Assembly?

At times. I coped.

Btw, C++ has reduce() in its STL, only it is called accumulate(). You
can easily find it, however, because the manual index lists reduce()
and points you at accumulate(). I'm not able to use accumulate(),
however, in my C++ code, because we use a C++ compiler that doesn't
have STL.

This is the example from *The C++ Programming Language* for
accumulate()/reduce() translated into Python:

inventory = ( # Description, Unit Price, Number of Units
    ("blue table", 50.36, 10),
    ("red chair", 20.42, 8),
    ("green vase", 7.99, 5),
    ("green Lamborghini", 120000, 1),
)

def valueInventory(totalSoFar, record):
    return totalSoFar + record[1] * record[2]

totalValueOfInventory = reduce(valueInventory, inventory, 0)

C++'s STL has no sum() function, btw, that I am aware of, though it
does have a sum() method on valarray, which is fine by me.
My point is that you are complaining about Python's restrictions. In an
MCU, restrictions come in various ways.
And there are times where programming an MCU is the best way to
solve a problem.

(1) I never complained about Python's restrictions. I'm complaining
about bloating the language with minimally useful features like sum(),
and about the misguided idea that ill-conceived restrictions *should*
be put into Python. Fortunately, such restrictions have not been.

(2) The idea that Python should be made in any way like an MCU is
crazy.
And which tool do you propose instead of shotguns and fly swatters?

I already said: equipment for manufacturing interchangeable parts,
from which either fly swatters or shotguns can be constructed.

And as I pointed out, in English, there is no native concept for
fly-swatter or shot-gun. There is the concept of fly and the concept
of swatter, and the notion of gun and the notion of shot, and both
"fly-swatter" and "shotgun" are built out of these concepts. This is
a very good thing. We hardly need to bloat up the English vocabulary
with special words for fly-swatters or shotguns.
Over all the time I've been doing design, I've learned that there's no
bad design decision if it was reached in a logical way. Now, doing
something just because is a bad design decision.

That's utterly absurd. The road to hell is paved with good
intentions. There's a reason for that aphorism -- it's quite true! I
can't even begin to count the number of times I've seen problematic
designs come out of reasonable assumptions and plausible reasoning, but often,
something important is overlooked along the way, and that something
comes back to bite you later.
That's impossible, my friend.

If you think so, then you must not use Python very much, my friend.
My time to say I don't see your point. reduce() is way too general for
its own good. Almost every other feature in the language is controlled
in different ways, allowing flexibility and reducing side effects.

reduce() is not too general for its own good. It is just right.

Your argument for reduce() being too general is that some people have
misused it. That argument is unsound, since people have misused, at
one time or another, nearly every feature of Python. The logical
consequence of your reasoning is that all features of Python are too
general for their own good. Reductio ad absurdum.
For me, expressive means describing something in a way that other
people can understand.

That's not what "expressive" means. Expressive means that you can
naturally translate your thoughts into the medium, not necessarily
that the result is easy to understand. Shakespeare, for instance, was
expressive, but perhaps not particularly accessible, at least to
most speakers of modern English.

Expressivity and accessibility are orthogonal issues, but they need
not always be at odds, as Python often proves.
Please name mainstream languages that do have reduce().

Python and C++. OCaml gets a bunch of attention these days. I don't
know whether you would consider it mainstream, but it has it, though I
think it might call it foldl(). APL and Lisp are two other popular
languages that have it. The GNU implementation of Java has it.
That's just off the top of my head.
Obviously, some memorizing is needed. But I don't want to memorize
every single feature just to know what a piece of code is doing.

That's why you should keep the set of features built into the
language small, and keep those features powerful, general, and
orthogonal. sum() doesn't cut the mustard.
I tried, but the damn parentheses just got in the way. Getting
something as simple as adding up an array the old-fashioned way was
painful.

The argument that Lisp is bad because of its parentheses is an
argument that pains me. It makes no more sense than the
all-too-frequent argument that Python is bad because doing nesting by
indentation is evil.

If you ask me, these are not flaws with the respective languages, but
with the proponents of these arguments. In both cases, the proponent
is projecting their own resistance to learning a different reasonable
approach onto the language. My advice to such people is to learn to
overcome your resistance. Life is richer when you open yourself to
new ideas.
Not necessarily. The ternary operator is quite specific (unless you
start abusing it) to the situation, and it is included in many
languages.

The ternary operator is anything but specific! It is a
general-purpose conditional expression. You will notice that Guido
wishes to make it even more general by adding "elif" clauses.
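
A sketch of that generality, using the conditional-expression syntax
that PEP 308 eventually settled on (it arrived in Python 2.5);
chaining the expression gives elif-like behavior:

x = -3
sign = 'negative' if x < 0 else 'zero' if x == 0 else 'positive'

# The equivalent statement form:
if x < 0:
    sign = 'negative'
elif x == 0:
    sign = 'zero'
else:
    sign = 'positive'
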
So the rule of thumb generally is to put in the core what makes
sense. Not all the general stuff, but what is useful.

Only that which has a lot of uses should be put into the core.
reduce() is everything *but* useful.

As I have pointed out repeatedly, many people, including me, find
reduce() to be very useful.
It would be interesting to hear why sum() was included in the core, though.

Guido woke up on the wrong side of the bed that one day.
sum(), min() and max() are "simple" things in the abstract. However,
they depend on the object. So, in an OO language, these are better
modeled _inside_ an object, which in Python's case is a class.

(1) One shouldn't be forced to program in an OO style in Python.

(2) Often there are a number of different ways to sum together a class of
objects. Defining a different class for each addition operator would
be extremely ugly.
I think the reason addition is "above" all other operations is
because every other operation can be defined with addition. But only
substraction can be used to define addition, if the latter is not
defined.

This has no relevance to the topic at hand. It would be extremely
silly to try to do the equivalent of

reduce(mul, seq)

using addition.

Probabilities, for example, "add" by multiplying them together, so if
you wish to combine numbers representing probabilities, you can't use
sum(), unless you define a class. But (1) you shouldn't have to. (2)
Calling the method that combines probabilities "__add__" just so you
can use sum() on them would be inadvisable.
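
A minimal sketch of the point, with made-up numbers (reduce is a
builtin here; modern Python has it in functools):

import operator
from functools import reduce

# Probabilities of independent events combine by multiplication,
# so the natural fold uses mul, not add -- no class required.
probs = [0.9, 0.8, 0.5]
combined = reduce(operator.mul, probs, 1.0)
print(combined)  # roughly 0.36, modulo floating-point rounding
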
My point is that addition can be used to define the other operations.
So?
I am not talking about you, I'm talking about the average person.

First of all, who says I'm not the average person? Secondly, Python
should be aimed at smart people, not "average" people. Programming
languages aimed at "average" people, like Cobol, Visual Basic, and
Pascal, are *terrible* programming languages. (Versions of Pascal that
have been significantly extended might be okay.)
Not at all. If the functionality of reduce() is, well, reduced, it
would make it easier to explain, as it would have some real
connotation. As is, reduce() serves no purpose.

(1) You keep asserting obvious falsities like "reduce() serves no
purpose". Many people find it very useful and a number of people who
are much smarter than either you or I put it into their programming
languages because it *is* useful. If you insist on holding onto
absurdities, then your conclusions will also be absurd.

(2) People learn to generalize very well and correctly from a small
number of examples. (This is a fact of human cognition, which you may
attempt to disagree with, but if you do then you disagree with
well-researched facts from Cognitive Psychology.) If the general case
is not natural, however, then people won't learn the generalization
properly from the examples. By special-casing reduce(), you would
make it harder to learn and memorize, because the natural
generalization from a small set of examples would be broken. Then one
would have to learn the exceptions and restrictions. So, not only
would it be less useful, it would be harder to learn and remember. A
lose/lose all around.
Either lots of languages do, or lots of languages don't provide
reduce(). These are mutually exclusive.

If I understand you correctly, you need to learn some elementary
logic. The proposition "Lots of languages provide reduce()" is
perfectly consistent with the proposition "Lots of languages do not
provide reduce()". Just like the proposition "Lots of people love"
Christmas" is consistent with the proposition "Lots of people hate
Christmas". If you can't see this, take a nap, have some food to get
nutrition to your brain, and think on it again.
I agree. But its functionality can be easily implemented in most languages.

So? All the more reason to not put it into the core of the language.
reduce() can be easily implemented in almost all modern dynamic
languages too, which is an argument, actually, that it doesn't need to
be in the core. But then again, I never said that it did.
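
Roughly what such an implementation looks like in Python itself -- a
sketch, not the actual builtin (the real reduce() distinguishes a
missing initial value from an explicit None; this sketch does not):

def my_reduce(function, sequence, initial=None):
    # Fold the sequence left-to-right into a single value.
    it = iter(sequence)
    if initial is None:
        try:
            accum = next(it)
        except StopIteration:
            raise TypeError("my_reduce() of empty sequence "
                            "with no initial value")
    else:
        accum = initial
    for item in it:
        accum = function(accum, item)
    return accum
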
How difficult is it to implement the sum() functionality in either
of these languages? How difficult is it to implement the reduce
functionality?

In Python, both sum() and reduce() are trivial to implement. In C and
Java, it is impossible to implement sum() because neither language
would know how to call the right add operator for the data type being
summed. (Well, maybe you could do it in Java by using its
introspection abilities, but I'm willing to bet that it wouldn't be
pretty). Alternatively, in C and Java, you could implement reduce(),
but it would be rather cumbersome to use because you would have to
cast the result back to the correct type. Also, the function passed
to reduce() would have to be specially coded to take void*'s or Object
objects, and then cast them to the correct type for that function.
All this would make reduce() rather cumbersome to use and type unsafe.
For these reasons, the advantages you would see from reduce() in these
languages wouldn't be worth the pain, and consequently, this is why
reduce() is rare in statically typed languages.
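
To be concrete about the Python case: a sketch of sum() built on the
my_reduce() sketch above -- dynamic typing finds the right __add__ at
run time, which is exactly what C and Java cannot do:

import operator

def my_sum(sequence, start=0):
    # A sum is just a fold over operator.add. (The real sum()
    # additionally rejects strings; this sketch does not.)
    return my_reduce(operator.add, sequence, start)

print(my_sum([1, 2, 3]))       # 6
print(my_sum([[1], [2]], []))  # [1, 2]
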

In C++, you can easily implement both reduce() and sum() using
templates. (And, in fact, reduce() is part of STL.) This is a good
example of why statically typed languages should have parameterized
types. Unfortunately, most don't.
No, that's not the direction Python should take. However, built-in
functions should be things that make sense (no reduce()). Or are you
going to argue to eliminate the print statement, like in C?

No. Why would I want to do that? Print, unlike sum(), is used
extremely often.
That would work in an ideal world. Please come to the real world.

In the real world, people make all sorts of stupid decisions when
programming. Languages that try to prevent this are not good,
practical, programming languages. Don't take my word for it -- try
programming in Euclid, which had this very aim.
Me neither. But what other option is there for overgeneralized functions?

You keep ignoring the point. So what if you can abuse reduce() -- you
can abuse loops. We shouldn't try to figure out every way that a loop
might be abused and then have the language try to warn you if you do
that. If you did so, you would also prevent perfectly good uses of
loops. Instead, you should make the looping constructs simple,
elegant, and general, and rely on training to teach people how to use
loops properly.

The very same thing is true for *any* construct in the language,
including reduce().
Very well. Your decision.

It wouldn't be just my decision. I guarantee you that many, many
people would feel precisely the same way. Perhaps enough to kill off
Python as a viable language.
Strange. In my case, -Wall saved me from a *lot* of mistakes that
otherwise would have taken me a lot of time to discover.

Then use it if you like. The nice thing about -Wall is that it is
*optional*. Apparently, you would have it be mandatory. But in that
case, you would never be able to do *anything* that -Wall complains
about. In the compiler I use, that would rule out just about every
program I have ever seen that does anything significant.
I was under the impression you didn't have any. And I still do.

At least I understand elementary logic.

|>oug
 
B

BW Glitch

[snip]
Welcome to my killfile. And stick with LISP.

--
Glitch

-----BEGIN TF FAN CODE BLOCK-----
G+++ G1 G2+ BW++++ MW++ BM+ Rid+ Arm-- FR+ FW-
#3 D+ ADA N++ W OQP MUSH- BC- CN++ OM P75
-----END TF FAN CODE BLOCK-----

Fight for the good of all.
-- Powermaster Optimus Prime with Apex Armor
 
D

Dave Benjamin

Lots of languages don't provide reduce() and lots of languages do. Few
provide sum(). Higher-order functions such as reduce() are
problematic in statically typed languages such as C, C++, or Java,
which may go a long way towards explaining why none of them include
it. Neither C, C++, or Java provide sum() either, though PHP provides
array_sum(). But PHP has a huge number of built-in functions, and I
don't think that Python wishes to go in that direction.

Not that this contributes much to either side of this argument, but I just
want to mention that PHP *does* have a reduce function:

http://www.php.net/array_reduce

However, this requires pass-by-name and PHP has no support for closures, so
its usefulness is somewhat limited. PHP also has array_map() and
array_filter(), which do what you might expect...

Ducking out now,
Dave =)
 
D

Douglas Alan

BW Glitch said:
[snip]
Welcome to my killfile. And stick with LISP.

Well, that's just about as eloquent and well-argued as anything you've
posted on the topic so far.

|>oug
 
A

Arthur

When people assert that

reduce(add, seq)

is so much harder to use, read, or understand than

sum(seq)

I find myself incredulous. People are making such claims either
because they are sniffing the fumes of their own righteous argument,
or because they are living on a different planet from me. On my
planet, reduce() is trivial to understand and it often comes in handy.
I find it worrisome that a number of vocal people seem to be living on
another planet (or could use a bit of fresh air), since if they end up
having any significant influence on the future of Python, then, from
where I am standing, Python will be aimed at aliens. While this may
be fine and good for aliens, I really wish to use a language designed
for natives of my world.

The dynamics here are indeed sad.

When an MIT guy says stuff like this, it is discountable because he is
an MIT guy. What is trivial to him...

When a guy like myself, with a degree in English and an MBA, says
essentially the same thing - it is more than discounted. It is
presumptuous, almost, to be participating.

The silliness of the conversations here, about what "I" of course can
understand but cannot expect others to grasp easily, has indeed been
a near downfall, in my eyes. At times, as you say, a broad and
depressing insult seems to be emanating from those discussions.

I've never noticed much insight in those discussions, and none has
served the practical decision-making process in connection with the
future of Python well, at all.

I would probably include the "sum" decision in the mix.

Art
 
