Verbose and flexible args and kwargs syntax

Eelco

On Mon, 12 Dec 2011 04:21:15 -0800, Eelco wrote:
No more, or less, explicit than the difference between "==" and
"is".
== may be taken to mean identity comparison; 'equals' can only mean
one thing.
[...]
We are not talking mathematics, we are talking programming languages.

What *I* am talking about is your assertion that there is only one
possible meaning for "equals" in the context of a programming language.
This is *simply not correct*.

You don't have to believe me, just look at the facts. It's hard to find
languages that use the word "equals" (or very close to it) rather than
equals signs, but here are four languages which do:

I know that Python is not the only language that gets this less right
than it could. But within computer science as a discipline, equality
and identity comparisons have a clear enough meaning: 'is' has a
narrower meaning than 'equals', while '==' has no meaning whatsoever
in computer science.
Again, all this goes to demonstrate that the language designer is free to
choose any behaviour they like, and give it any name they like.

Certainly you demonstrated as much. Programming languages are created
by people, who have a tendency to make mistakes and a need to
compromise (backwards compatibility, and so on). That's a separate
matter from 'what ought to be done' when discussing optimal language
design.
No. Things which are obscure are used in language infrequently, because
if they were common they would not be obscure. But things which are used
infrequently are not necessarily obscure.

An example in common language: "Napoleon Bonaparte" does not come up in
conversation very frequently, but he is not an obscure historical figure.

An example from programming: very few people need to use the
trigonometric functions sin, cos, tan in their code. But they are not
obscure functions: most people remember them from school. People who have
forgotten almost everything about mathematics except basic arithmetic
probably remember sin, cos and tan. But they never use them.

I don't think it's terribly interesting to debate whether the term
'obscure' applies to trigonometric functions or not: the important
matter is that they are where they should be, under math.cos, etc.
They don't have their own special character, and I hope you agree that
is as it should be.

I use trig far more often than modulus, so that argues in favor of
modulus being under math too; in fact I used modulus quite recently,
but naturally it was in a piece of code that should eventually be done
in C anyway (evaluating subdivision surfaces).
Because you asked why Python uses the % operator for remainder.

So you ARE implying Python has backwards compatibility with C as a
design goal? Otherwise the given answer to this question is
nonsensical.
[...]
They are bad ideas because they truly do not lead to the execution of
different code, but are merely a reordering, mixing statements in with a
function declaration. I am proposing no such thing; again, I have
dropped the type(arg) notation, and it was never meant to have anything
to do with function calling; it is a way of supplying an optional type
constraint, so in analogy with function annotations, I changed that to
arg::type. Again, this has nothing to do with calling functions on
arguments.

You have not thought about this carefully enough. Consider what happens
when this code gets called:

def f(*args): pass

f(a, b, c)

The Python virtual machine (interpreter, if you prefer) must take three
arguments a, b, c and create a tuple from them. This must happen at
runtime, because the value of the objects is not known at compile time.
So at some point between f(a, b, c) being called and the body of f being
entered, a tuple must be created, and the values of a, b, c must be
collated into a single tuple.

Now extend this reasoning to your proposal:

def f(args:FOO): pass

At runtime, the supplied arguments must be collated into a FOO, whatever
FOO happens to be. Hence, the function that creates FOO objects must be
called before the body of f can be entered. This doesn't happen for free.
Whether you do it manually, or have the Python interpreter do it, it
still needs to be done.

Of course the Python interpreter needs to do this; and in case non-
builtin types are allowed, the mechanism is going to be through their
constructor. But that's a detail; the syntax doesn't say 'please call
this constructor for me', any more than **kwargs says 'please call a
dict constructor for me', even though equivalent operations are
obviously going on under the hood as part of the process. Yes,
whatever type you have needs to be constructed, and it's not magically
going to happen; it's going to cost CPU cycles. The question is: do you
end up constructing both a tuple and a list, or do you construct the
list directly?
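
A minimal sketch of that trade-off, in standard Python; the proposed
args::list spelling appears only as a comment, since it is hypothetical
syntax and not anyone's working implementation:

# Today: the interpreter gathers the arguments into a tuple, and getting a
# list means building a second container from that tuple.
def f_today(*args):
    args = list(args)        # tuple built by the interpreter, then copied into a list
    return args

# Under the proposal (hypothetical, not valid Python):
# def f_proposed(args::list):
#     ...                    # the list would be built directly, skipping the tuple

print(f_today(1, 2, 3))      # [1, 2, 3]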

But you're not talking about type constraints. You're not instructing the
function to reject arguments which have the wrong type, you are
instructing it to collate multiple arguments into a list (instead of a
tuple like Python currently does). def f(*args) *constructs* a tuple, it
doesn't perform a type-check.

I am talking about type constraints, but as seems to be the usual
pattern in our miscommunications, you seek to attach a specific
far-fetched meaning to it that I never intended. Insofar as I understand
these terms, a type-constraint is part of a declaration: float x; in C-
family languages, args::list in Python 4 perhaps. It narrows down the
semantics for further usage of that symbol. A type-check is something
along the lines of type(args)==list, a runtime thing and something
completely different. I haven't mentioned the latter at all, explicitly
or implicitly, as far as I'm aware.
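
For concreteness, a small sketch (standard Python, not anyone's
proposal) of the distinction being drawn here: a runtime type-check
inspects a value that already exists, whereas a type-constraint belongs
to the declaration itself:

def g(args):
    if not isinstance(args, list):        # a runtime type-check on an existing value
        raise TypeError("expected a list")
    return args

# A type-constraint in the sense above lives in the declaration, like C's
# float x; or the hypothetical args::list -- it narrows what the declared
# name means, rather than testing a value after the fact.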
 
Chris Angelico

I am talking about type constraints... A type-check is something
along the lines of type(args)==list, a runtime thing and something
completely different. I haven't mentioned the latter at all, explicitly
or implicitly, as far as I'm aware.

I'm not sure what you mean by a "type constraint". Here's how I understand such:

float|int foobar; // Declare that the variable 'foobar' is allowed to hold a float or an int
foobar = 3.2; //Legal.
foobar = 1<<200; //Also legal (a rather large integer value)
foobar = "hello"; //Not legal

Python doesn't have any such thing (at least, not in-built). Any name
may be bound to any value - or if you like, any variable can contain
any type. Same applies to argument passing - you can even take an
unbound method and call it with some completely different type of
object as its first parameter. (I can't think of ANY situation in
which this would not be insanely confusing, tbh.) When you gather a
function's arguments into a tuple, list, or any other such container
object, what you're doing is constructing another object and stuffing
it with references to the original arguments. That involves building a
new object, where otherwise you simply bind the arguments' names to
the original objects directly.
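
A small illustration of that difference (an assumed example, not from
the original post): ordinary parameters are bound straight to the
caller's objects, while a gathered parameter is a freshly built tuple
that merely holds references to them.

x = object()

def bind_directly(a):
    return a is x            # the parameter name is bound to the caller's object

def gather(*args):
    return args[0] is x      # a new tuple was built, holding a reference to the same object

print(bind_directly(x))      # True
print(gather(x))             # True -- same object inside, but args itself is a new tuple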

Now, it is perfectly conceivable to have designed Python to _always_
pass tuples around. Instead of a set of arguments, you just always
pass exactly one tuple to a function, and that function figures out
what to do with the args internally. If that's the way the language is
built, then it is the _caller_, not the callee, who builds that tuple;
and we still have the same consideration if we want a list instead.

So suppose you can have a user-defined object type instead of
list/dict. How are you going to write that type's __init__ function?
Somewhere along the way, you need to take a variable number of
arguments and bundle them up into a single one... so somewhere, you
need the interpreter to build it for you. This is going to end up
exactly the same as just accepting the tuple and then passing that to
a constructor, like the list example. Keep things transparent and you
make debugging a LOT easier.
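
In other words, even with a hypothetical user-defined gathering type
(call it Bag, an assumed name for illustration), the variable-length
gathering still happens as a tuple somewhere, which then gets handed to
the constructor:

class Bag:
    def __init__(self, *items):   # the interpreter still gathers into a tuple here
        self.items = items

def f(*args):
    return Bag(*args)             # accept the tuple, pass it on to the constructor

print(f(1, 2, 3).items)           # (1, 2, 3)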

ChrisA
 
Eelco

I'm not sure what you mean by a "type constraint". Here's how I understand such:

float|int foobar; // Declare that the variable 'foobar' is allowed to hold a float or an int
foobar = 3.2; //Legal.
foobar = 1<<200; //Also legal (a rather large integer value)
foobar = "hello"; //Not legal

Python doesn't have any such thing (at least, not in-built).

Agreed on what a type constraint is, and that Python does not really
have any, unless one counts the current use of asterisks, which are
in fact a limited form of type constraint (*tail is not just any object,
but one holding a list, which modifies the semantics of the assignment
statement 'head,*tail=sequence' from a regular tuple unpacking to a
specific form of the more general collection unpacking syntax).

The idea is to enrich this syntax: to add optional and limited type
constraints to Python, specifically to enrich collection packing/
unpacking syntax, though perhaps the concept can be further generalized.
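
The existing behaviour being referred to, for reference:

head, *tail = (1, 2, 3, 4)
print(head)   # 1
print(tail)   # [2, 3, 4] -- tail is always a list, whatever kind of sequence is unpacked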

So suppose you can have a user-defined object type instead of
list/dict. How are you going to write that type's __init__ function?
Somewhere along the way, you need to take a variable number of
arguments and bundle them up into a single one... so somewhere, you
need the interpreter to build it for you. This is going to end up
exactly the same as just accepting the tuple and then passing that to
a constructor, like the list example. Keep things transparent and you
make debugging a LOT easier.

Agreed; for user-defined collection types there would not be a
performance benefit over converting a tuple, since this is exactly
what will happen anyway, but for collection types derived from any of
the builtins, Python could optimize away the intermediate and
construct the desired collection type directly from the available
information.
 
Steven D'Aprano

On Mon, 12 Dec 2011 09:29:11 -0800, Eelco wrote:

[quoting Jussi Piitulainen:]
They recognize modular arithmetic but for some reason insist that there
is no such _binary operation_. But as I said, I don't understand their
concern. (Except the related concern about some programming languages,
not Python, where the remainder does not behave well with respect to
division.)

I've never come across this, and frankly I find it implausible that
*actual* mathematicians would say that. Likely you are misunderstanding a
technical argument about remainder being a relation rather than a
bijunction. The argument would go something like this:

"Remainder is not uniquely defined. For example, the division of -42 by
-5 can be written as either:

9*-5 + 3 = -42
8*-5 + -2 = -42

so the remainder is either 3 or -2. Hence remainder is not a bijection
(1:1 function)."

The existence of two potential answers for the remainder is certainly
correct, but the conclusion that remainder is not a binary operation
doesn't follow. It is a binary relation. Mathematicians are well able to
deal with little inconveniences like this, e.g. consider the square root:

10**2 = 100
(-10)**2 = 100
therefore the square root of 100 is ±10

Mathematicians get around this by defining the square root operator √ as
*only* the principal value of the square root relation, that is, the
positive root. Hence:

√100 = 10 only

If you want both roots, you have to explicitly ask for them both: ±√100

Similarly, we can sensibly define the remainder or modulus operator to
consistently return a non-negative remainder, or to do what Python does,
which is to return a remainder with the same sign as the divisor:

>>> -42 % -5
-2

There may be practical or logical reasons for preferring one over the
other, but either choice would make remainder a bijection. One might even
define two separate functions/operators, one for each behaviour.
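
For concreteness, a small sketch (not from the original post; the
function names are illustrative) of the two conventions just described:

def rem_sign_of_divisor(a, b):
    # what Python's % does: the remainder takes the sign of the divisor
    return a - b * (a // b)

def rem_non_negative(a, b):
    # always-non-negative remainder
    return a - abs(b) * (a // abs(b))

print(rem_sign_of_divisor(-42, -5))   # -2
print(rem_non_negative(-42, -5))      # 3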

They might not be willing to define it, but as soon as we programmers
do, well, we did.

Having studied the contemporary philosophy of mathematics, their concern
is probably that in their minds, mathematics is whatever some dead guy
said it was, and they don't know of any dead guy ever talking about a
modulus operation, so therefore it 'does not exist'.

You've studied the contemporary philosophy of mathematics huh?

How about studying some actual mathematics before making such absurd
pronouncements on the psychology of mathematicians?
 
Eelco

You've studied the contemporary philosophy of mathematics huh?

How about studying some actual mathematics before making such absurd
pronouncements on the psychology of mathematicians?

The philosophy was just a side hobby to the study of actual
mathematics; and you are right, studying their works is the best way
to get to know them. Speaking from that vantage point, I can say with
certainty that the vast majority of mathematicians do not have a
coherent philosophy, and they adhere to some loosely defined form of
platonism. Indeed that is absurd in a way. Even though you may trust
these people to be perfectly functioning deduction machines, you
really shouldn't expect them to give sensible answers to the question
of which axioms are sensible to adopt. They don't have a reasoned
answer to this; they will by and large defer to authority.
 
Jussi Piitulainen

Steven said:
On Mon, 12 Dec 2011 09:29:11 -0800, Eelco wrote:

[quoting Jussi Piitulainen:]
They recognize modular arithmetic but for some reason insist that
there is no such _binary operation_. But as I said, I don't
understand their concern. (Except the related concern about some
programming languages, not Python, where the remainder does not
behave well with respect to division.)

I've never come across this, and frankly I find it implausible that
*actual* mathematicians would say that. Likely you are
misunderstanding a technical argument about remainder being a
relation rather than a bijunction. The argument would go something
like this:

(For 'bijunction', read 'function'.)

I'm not misunderstanding any argument. There was no argument. There
was a blanket pronouncement that _in mathematics_ mod is not a binary
operator. I should learn to challenge such pronouncements and ask what
the problem is. Maybe next time.

But you are right that I don't know how actual mathematicians these
people are. I'm not a mathematician. I don't know where to draw the
line.

A Finnish actual mathematician stated a similar prejudice towards mod
as a binary operator in a Finnish group. I asked him what is wrong
with Knuth's definition (remainder after flooring division), and I
think he conceded that it's not wrong. Number theorists just choose to
work with congruence relations. I have no problem with that.

He had experience with students who confused congruences modulo some
modulus with a binary operation, and mixed up their notations because
of that. That is a reason to be suspicious, but it is a confusion on
the part of the students. Graham, Knuth, Patashnik contrast the two
concepts explicitly, no confusion there.

And I know that there are many ways to define division and remainder
so that x div y + x rem y = x. Boute's paper cited in [1] advocates a
different one and discusses others.

[1] <http://en.wikipedia.org/wiki/Modulo_operation>

But I think the argument "there are several such functions, therefore,
_in mathematics_, there is no such function" is its own caricature.
"Remainder is not uniquely defined. For example, the division of -42
by -5 can be written as either:

9*-5 + 3 = -42
8*-5 + -2 = -42

so the remainder is either 3 or -2. Hence remainder is not a bijection
(1:1 function)."

Is someone saying that _division_ is not defined because -42 div -5 is
somehow both 9 and 8? Hm, yes, I see that someone might. The two
operations, div and rem, need to be defined together.

(There is no way to make remainder a bijection. You mean it is not a
function if it is looked at in a particular way.)

[The square root was relevant but I snipped it.]
Similarly, we can sensibly define the remainder or modulus operator
to consistently return a non-negative remainder, or to do what
Python does, which is to return a remainder with the same sign as
the divisor: ....
There may be practical or logical reasons for preferring one over
the other, but either choice would make remainder a bijection. One
might even define two separate functions/operators, one for each
behaviour.

Scheme is adopting flooring division, ceiling-ing division, rounding
division, truncating division, centering division, and the Euclidean
division advocated by Boute, and the corresponding remainders. There
is no better way to bring home to a programmer the points that there
are different ways to define these, and they come as div _and_ rem.
 
Jussi Piitulainen

Nick said:
They are probably arguing that it's uniquely defined only on ZxN and
that there are different conventions to extend it to ZxZ (the
programming languages problem that you allude to above - although I
don't know what you mean by "does not behave well wrt division").

I think Boute [1] says Standard Pascal or some such language failed to
have x div y + x rem y = x, but I can't check the reference now. That
at least was what I had in mind. Having x rem y but leaving it
underspecified is another such problem: then it is unspecified whether
the equation holds.

If you choose one convention and stick to it, it becomes a
well-defined binary operation.

That's what I'd like to think.
 
Paul Rudin

Steven D'Aprano said:
The existence of two potential answers for the remainder is certainly
correct, but the conclusion that remainder is not a binary operation
doesn't follow. It is a binary relation.

This depends on your definition of "operation". Normally an operation is
a function, rather than just a relation.
 
Eelco

[quoting Jussi Piitulainen:]
They recognize modular arithmetic but for some reason insist that
there is no such _binary operation_. But as I said, I don't
understand their concern. (Except the related concern about some
programming languages, not Python, where the remainder does not
behave well with respect to division.)
I've never come across this, and frankly I find it implausible that
*actual* mathematicians would say that. Likely you are
misunderstanding a technical argument about remainder being a
relation rather than a bijunction. The argument would go something
like this:

(For 'bijunction', read 'function'.)

I'm not misunderstanding any argument. There was no argument. There
was a blanket pronouncement that _in mathematics_ mod is not a binary
operator. I should learn to challenge such pronouncements and ask what
the problem is. Maybe next time.

But you are right that I don't know how actual mathematicians these
people are. I'm not a mathematician. I don't know where to draw the
line.

A Finnish actual mathematician stated a similar prejudice towards mod
as a binary operator in a Finnish group. I asked him what is wrong
with Knuth's definition (remainder after flooring division), and I
think he conceded that it's not wrong. Number theorists just choose to
work with congruence relations. I have no problem with that.

He had experience with students who confused congruences modulo some
modulus with a binary operation, and mixed up their notations because
of that. That is a reason to be suspicious, but it is a confusion on
the part of the students. Graham, Knuth, Patashnik contrast the two
concepts explicitly, no confusion there.

And I know that there are many ways to define division and remainder
so that x div y + x rem y = x. Boute's paper cited in [1] advocates a
different one and discusses others.

[1] <http://en.wikipedia.org/wiki/Modulo_operation>

But I think the argument "there are several such functions, therefore,
_in mathematics_, there is no such function" is its own caricature.

Indeed. Obtaining a well defined function is just a matter of picking
a convention and sticking with it.

Arguably, the most elegant thing to do is to define integer division
and remainder as a single operation; which is not only the logical
thing to do mathematically, but might work really well
programmatically too.

The semantics of Python don't really allow for this though. One could
have:

d, r = a // b

But it wouldn't work that well in composite expressions; selecting the
right tuple index would be messy, and a more verbose form would be
preferred. However, performance-wise it's also clearly the best
solution, as one often needs both output arguments, and computing them
simultaneously is most efficient.

At least numpy should have something like:
d, r = np.integer_division(a, b)

And something similar in the math module for scalars.
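
As an aside, a sketch using only the standard built-in divmod, which
already returns both values in a single call, though it lives in
builtins rather than the math module:

q, r = divmod(-42, -5)
print(q, r)   # 8 -2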

Is someone saying that _division_ is not defined because -42 div -5 is
somehow both 9 and 8? Hm, yes, I see that someone might. The two
operations, div and rem, need to be defined together.

(There is no way to make remainder a bijection. You mean it is not a
function if it is looked at in a particular way.)

Surjection is the word you are looking for.

That is, if one buys the philosophy of modernists like Bourbaki in
believing there is much to be gained by such pedantry.
 
rusi

Is someone saying that _division_ is not defined because -42 div -5 is
somehow both 9 and 8? Hm, yes, I see that someone might. The two
operations, div and rem, need to be defined together.
-----------------------------
Haskell defines a quot-rem pair and a div-mod pair as follows:
(from http://www.haskell.org/onlinereport/basic.html)

(x `quot` y)*y + (x `rem` y) == x
(x `div` y)*y + (x `mod` y) == x

`quot` is integer division truncated toward zero, while the result of
`div` is truncated toward negative infinity.
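
The same two pairs can be mimicked in Python (a sketch; quot and rem are
helper names here, not builtins): // and % already form the flooring
div/mod pair, and a truncating pair can be built on top of it:

def quot(x, y):
    # integer division truncated toward zero, like Haskell's `quot`
    q = x // y
    if q < 0 and q * y != x:   # floor and truncation only differ for negative, non-exact quotients
        q += 1
    return q

def rem(x, y):
    # remainder matching quot, so that quot(x, y)*y + rem(x, y) == x
    return x - quot(x, y) * y

print(-7 // 2, -7 % 2)          # -4 1  (flooring pair: `div` / `mod`)
print(quot(-7, 2), rem(-7, 2))  # -3 -1 (truncating pair: `quot` / `rem`)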
 
Chris Angelico

`quot` is integer division truncated toward zero, while the result of
`div` is truncated toward negative infinity.

All these problems just because of negative numbers. They ought never
to have been invented.

At least nobody rounds toward positive infinity... oh wait, that's legal too.

ChrisA
 
Arnaud Delobelle

The philosophy was just a side hobby to the study of actual
mathematics; and you are right, studying their works is the best way
to get to know them. Speaking from that vantage point, I can say with
certainty that the vast majority of mathematicians do not have a
coherent philosophy, and they adhere to some loosely defined form of
platonism. Indeed that is absurd in a way. Even though you may trust
these people to be perfectly functioning deduction machines, you
really shouldn't expect them to give sensible answers to the question
of which axioms are sensible to adopt. They don't have a reasoned
answer to this; they will by and large defer to authority.

Please come down from your vantage point for a few moments and
consider how insulting your remarks are to people who have devoted
most of their intellectual energy to the study of mathematics. So
you've studied a bit of mathematics and a bit of philosophy? Good
start, keep working at it.

You think that every mathematician should be preoccupied with what
axioms to adopt, and why? Mathematics is a very large field of study
and yes, some mathematicians are concerned with these issues (they are
called logicians) but for most it isn't really about axioms.
Mathematics is bigger than the axioms that we use to formalise it.
Most mathematicians do not need to care about what precise
axiomatisation underlies the mathematics that they practise because
they are thinking on a much higher level. Just like we do not worry
about what machine language instruction actually performs each step of
the Python program we are writing.

You say that mathematicians defer to authority, but do you really
think that thousands of years of evolution and refinement in
mathematics are to be discarded lightly? I think not. It's good to
have original ideas, to pursue them and to believe in them, but it
would be foolish to think that they are superior to knowledge which
has been accumulated over so many generations.

You claim that mathematicians have a poor understanding of philosophy.
It may be so for many of them, but how is this a problem? It doesn't
prevent them from having a deep understanding of their field of
mathematics. Do philosophers have a good understanding of
mathematics?

Cheers,
 
Jussi Piitulainen

Eelco said:
Indeed. Obtaining a well defined function is just a matter of
picking a convention and sticking with it.

Arguably, the most elegant thing to do is to define integer division
and remainder as a single operation; which is not only the logical
thing to do mathematically, but might work really well
programmatically too.

The semantics of Python don't really allow for this though. One could
have:

d, r = a // b

But it wouldn't work that well in composite expressions; selecting the
right tuple index would be messy, and a more verbose form would be
preferred. However, performance-wise it's also clearly the best
solution, as one often needs both output arguments, and computing them
simultaneously is most efficient.

The current Scheme draft does this. For each rounding method, it
provides an operation that provides both the quotient and the
remainder, an operation that provides the quotient, and an operation
that provides the remainder. The both-values operation is more awkward
to compose, as you rightly say.

It's just a matter of naming them all. Python has a good default
integer division as the pair of operators // and %. Python also
supports the returning of several values from functions as tuples. It
can be done.
Surjection is the word you are looking for

Um, no, I mean function. The allegedly alleged problem is that there
may be two (or more) different values for f(x,y), which makes f not a
_function_ (and the notation f(x,y) maybe inappropriate).

Surjectivity is as much beside the point as bijectivity, but I think
we have surjectivity for rem: Z * Z -> Z if we use a definition that
produces both positive and negative remainders, or rem: Z * Z -> N if
we have non-negative remainders (and include 0 in N, which is another
bone of contention). We may or may not want to exclude 0 as the
modulus, or divisor if you like. It is at least a special case.

It's injectivity that fails: 9 % 4 == 6 % 5 == 3 % 2, while Python
quite sensibly has (9, 4) != (6, 5) != (3, 2). (How I love the
chaining of the comparisons.)
That is, if one buys the philosophy of modernists like Bourbaki in
believing there is much to be gained by such pedantry.

I think something is gained. Not sure I would call it philosophy.
 
Steven D'Aprano

Steven said:
On Mon, 12 Dec 2011 09:29:11 -0800, Eelco wrote:

[quoting Jussi Piitulainen:]
They recognize modular arithmetic but for some reason insist that
there is no such _binary operation_. But as I said, I don't
understand their concern. (Except the related concern about some
programming languages, not Python, where the remainder does not
behave well with respect to division.)

I've never come across this, and frankly I find it implausible that
*actual* mathematicians would say that. Likely you are misunderstanding
a technical argument about remainder being a relation rather than a
bijunction. The argument would go something like this:

(For 'bijunction', read 'function'.)

Oops, you're right of course. It's been about 20 years since I've needed
to care about the precise difference between a bijection and a function,
and I made a mistake. And then to add to my shame, I also misspelt
bijection.

I'm not misunderstanding any argument. There was no argument. There was
a blanket pronouncement that _in mathematics_ mod is not a binary
operator. I should learn to challenge such pronouncements and ask what
the problem is. Maybe next time.

So this was *one* person making that claim?

I understand that, in general, mathematicians don't have much need for a
remainder function in the same way programmers do -- modulo arithmetic is
far more important. But there's a world of difference between saying "In
mathematics, extracting the remainder is not important enough to be given
a special symbol and treated as an operator" and saying "remainder is not
a binary operator". The first is reasonable; the second is not.

But you are right that I don't know how actual mathematicians these
people are. I'm not a mathematician. I don't know where to draw the
line.

A Finnish actual mathematician stated a similar prejudice towards mod as
a binary operator in a Finnish group. I asked him what is wrong with
Knuth's definition (remainder after flooring division), and I think he
conceded that it's not wrong. Number theorists just choose to work with
congruence relations. I have no problem with that.

Agreed.

[...]
(There is no way to make remainder a bijection. You mean it is not a
function if it is looked at in a particular way.)

You're right, of course -- remainder cannot be 1:1. I don't know what I
was thinking.
 
Eelco

Please come down from your vantage point for a few moments and
consider how insulting your remarks are to people who have devoted
most of their intellectual energy to the study of mathematics.  So
you've studied a bit of mathematics and a bit of philosophy?  Good
start, keep working at it.

Thanks, I intend to.
You think that every mathematician should be preoccupied with what
axioms to adopt, and why?

Of course I don't. If you wish to restrict your attention to the
exploration of the consequences of axioms others throw at you, that is
a perfectly fine specialization. Most mathematicians do exactly that,
and that's fine. But it puts them in about as poor a position to
judge what is, or shouldn't be, defined as the average plumber.
Compounding the problem is not just that they do not wish to concern
themselves with the inductive aspect of mathematics, but that they would
like to pretend it does not exist at all. For instance, if you point out
to them that a 19th century mathematician used very different axioms
than a 20th century one (and point out that they were both fine
mathematicians who attained results universally celebrated), they will
typically respond emotionally; get angry or at least annoyed. According
to their pseudo-Platonist philosophy, mathematics should not have an
inductive side, axioms are set in stone and not a human affair, and the
way they answer the question of where knowledge about the 'correct'
mathematical axioms comes from is by an implicit or explicit appeal to
authority. They don't explain how it is that they can see 'beyond the
Platonic cave' to find the 'real underlying truth'; they quietly
assume somebody else has figured it out in the past, and leave it at
that.
You say that mathematicians defer to authority, but do you really
think that thousands of years of evolution and refinement in
mathematics are to be discarded lightly?  I think not.  It's good to
have original ideas, to pursue them and to believe in them, but it
would be foolish to think that they are superior to knowledge which
has been accumulated over so many generations.

For what it's worth: insofar as my views can be pigeonholed, I'm with
the classicists (pre-20th century), which indeed has a long history.
Modernists in turn discard large swaths of that. Note that it's largely
an academic debate though; everybody agrees that 1+1=2. But there are
some practical consequences; if I were the designated science-Tsar,
all transfinite analysts would be out on the street together with the
homeopaths, for instance.
You claim that mathematicians have a poor understanding of philosophy.
It may be so for many of them, but how is this a problem? It doesn't
prevent them from having a deep understanding of their field of
mathematics.  Do philosophers have a good understanding of
mathematics?

As a rule of thumb: absolutely not, no. I don't think I can think of
any philosopher who turned his attention to mathematics and ever
wrote anything interesting. All the interesting writers had their
boots on mathematical ground: Quine, Brouwer, Weyl, and earlier
renaissance men like Gauss and his contemporaries.

The fragmentation of disciplines is in fact a major problem in my
opinion though. Most physicists take their mathematics from the ivory
math tower, and the mathematicians shudder at the idea of listening
back to see whether what they cooked up is actually anything but
mental masturbation, in the meanwhile cranking out more gibberish
about alephs. If any well-reasoned philosophy enters into the mix, it's
usually in the spare time of one of the physicists, but it is
assuredly not coming out of the philosophy department. There is
something quite wrong with that state of affairs.
 
Steven D'Aprano

Arguably, the most elegant thing to do is to define integer division and
remainder as a single operation; which is not only the logical thing to
do mathematically, but might work really well programmatically too.

The semantics of Python don't really allow for this though. One could
have:

d, r = a // b

That would be:

>>> divmod(a, b)
(3, 2)


But it wouldn't work that well in composite expressions; selecting the
right tuple index would be messy, and a more verbose form would be
preferred. However, performance-wise it's also clearly the best
solution, as one often needs both output arguments, and computing them
simultaneously is most efficient.

Premature optimization.
 
Eelco

Um, no, I mean function. The allegedly alleged problem is that there
may be two (or more) different values for f(x,y), which makes f not a
_function_ (and the notation f(x,y) maybe inappropriate).

Surjectivity is as much beside the point as bijectivity, but I think
we have surjectivity for rem: Z * Z -> Z if we use a definition that
produces both positive and negative remainders, or rem: Z * Z -> N if
we have non-negative remainders (and include 0 in N, which is another
bone of contention). We may or may not want to exclude 0 as the
modulus, or divisor if you like. It is at least a special case.

It's injectivity that fails: 9 % 4 == 6 % 5 == 3 % 2, while Python
quite sensibly has (9, 4) != (6, 5) != (3, 2). (How I love the
chaining of the comparisons.)

My reply was more to the statement you quoted than to yours; sorry for
the confusion. Yes, we have surjectivity and not injectivity, that's
all I was trying to say.

I think something is gained. Not sure I would call it philosophy.

Agreed; it's more the notion that one stands to gain much real
knowledge by writing voluminous books about these matters that irks me,
but I guess that's more a matter of taste than philosophy.
 
Jussi Piitulainen

rusi said:
-----------------------------
Haskell defines a quot-rem pair and a div-mod pair as follows:
(from http://www.haskell.org/onlinereport/basic.html)

(x `quot` y)*y + (x `rem` y) == x
(x `div` y)*y + (x `mod` y) == x

`quot` is integer division truncated toward zero, while the result of
`div` is truncated toward negative infinity.

Exactly what I mean. (I gave an incorrect equation but meant this.)
 
Jussi Piitulainen

Steven said:
So this was *one* person making that claim?

I've seen it a few times from a few different posters, all on Usenet
or whatever this thing is nowadays called. I think I was careful to
say _some_ mathematicians, but not careful to check that any of them
were actually mathematicians speaking as mathematicians.

The context seems to be a cultural divide between maths and cs. Too
much common ground yet very different interests?
I understand that, in general, mathematicians don't have much need
for a remainder function in the same way programmers do -- modulo
arithmetic is far more important. But there's a world of difference
between saying "In mathematics, extracting the remainder is not
important enough to be given a special symbol and treated as an
operator" and saying "remainder is not a binary operator". The first
is reasonable; the second is not.

Yes.
 
Eelco

That would be:

>>> divmod(a, b)
(3, 2)

Cool; if only it were in the math module I'd be totally happy.

Premature optimization.

We are talking language design here, not language use. Whether or not
this is premature is a decision that should be left to the user, if at
all possible, which in this case it very well is; just provide
multiple functions to cover all the use cases (only return the quotient,
only return the remainder, or both).
 
