Verbose and flexible args and kwargs syntax


Ian Kelly

Cool; if only it were in the math module I'd be totally happy.

Probably it's not in math because it's not a thin wrapper around a C
math library function, which is how the module was conceived. There
are already some exceptions in the math module, but I think they are
all newer than divmod.
 

Arnaud Delobelle

Thanks, I intend to.


Of course I don't. If you wish to restrict your attention to the
exploration of the consequences of axioms others throw at you, that is
a perfectly fine specialization. Most mathematicians do exactly that,
and that's fine. But that puts them in about as ill a position to
judge what is, or shouldn't be, defined as the average plumber.

You are completely mistaken. Whatever the axiomatisation of the
mathematics that we do, we can still do the same mathematics. We
don't even need an axiomatic basis to do mathematics. In fact, the
formalisation of mathematics has always come after the mathematics
was well established. Euclid, Dedekind, Peano, Zermelo, Fraenkel,
didn't create axiomatic systems out of nothing. They axiomatised
pre-existing theories.

Axiomatising a theory is just one way of exploring it.
Compounding the problem, not only do they not wish to concern
themselves with the inductive aspect of mathematics, they would like
to pretend it does not exist at all. For instance, if you point out to
them that a 19th-century mathematician used very different axioms than a
20th-century one (and point out they were both fine mathematicians
who attained results universally celebrated), they will typically
respond emotionally; get angry or at least annoyed. According to their
pseudo-Platonist philosophy, mathematics should not have an inductive
side, axioms are set in stone and not a human affair, and the way they
answer the question as to where knowledge about the 'correct'
mathematical axioms comes from is by an implicit or explicit appeal to
authority. They don't explain how it is that they can see 'beyond the
platonic cave' to find the 'real underlying truth', they quietly
assume somebody else has figured it out in the past, and leave it at
that.

Again, you are completely mis-representing the situation. In my
experience, most mathematicians (I'm not talking about undergraduate
students here) do not see the axioms as the root of the mathematics
that they do. Formal systems are just one way to explore mathematics.
Of course they can in some cases be very useful and enlightening.

As for inductive reasoning, I really can't understand your point. Of
course mathematicians use inductive reasoning all the time. Where do
you think the Riemann Hypothesis comes from? Or Fermat's last theorem?
Do you think that mathematicians prove results before they even think
about them? On the other hand, a result needs to be proved to be
accepted by the mathematical community, and inductive reasoning is not
valid in proofs. That's in the nature of mathematics.
For what it's worth, insofar as my views can be pigeonholed, I'm with
the classicists (pre-20th century), which indeed has a long history.
Modernists in turn discard large swaths of that. Note that it's largely
an academic debate though; everybody agrees that 1+1=2. But there are
some practical consequences; if I were the designated science-Tsar,
all transfinite analysts would be out on the street together with the
homeopaths, for instance.

It's telling that on the one hand you criticise mathematicians for not
questioning the "axioms which are thrown at them", on the other hand
you feel able to discard a perfectly fine piece of mathematics, that
of the study of transfinite numbers, because it doesn't fit nicely
with traditional views. The fact is that at the end of the 19th
century mathematics had reached a crisis point.
As a rule of thumb: absolutely not, no. I dont think I can think of
any philosopher who turned his attention to mathematics that ever
wrote anything interesting. All the interesting writers had their
boots on mathematical ground; Quine, Brouwer, Weyl and the earlier
renaissance men like Gauss and contemporaries.

The fragmentation of disciplines is in fact a major problem in my
opinion though. Most physicists take their mathematics from the ivory
math tower, and the mathematicians shudder at the idea of listening
back to see which of what they cooked up is actually anything but
mental masturbation, in the meanwhile cranking out more gibberish
about alephs.

Only a minority of mathematicians have an interest in "alephs", as you
call them. IMHO, the mathematics they do is perfectly valid. The
exploration of the continuum hypothesis has led to the creation of
very powerful mathematical techniques and gives an insight into the
very foundations of mathematics. Again, on the one hand you criticise
mathematicians for not questioning the axioms they work with, but
those who investigate the way these axioms interact you accuse of
"mental masturbation".
 

Eelco

'Kind of' off-topic, but what the hell :).

You are completely mistaken.  Whatever the axiomatisation of the
mathematics that we do, we can still do the same mathematics.  We
don't even need an axiomatic basis to do mathematics.  In fact, the
formalisation of mathematics has always come after the mathematics
was well established. Euclid, Dedekind, Peano, Zermelo, Fraenkel,
didn't create axiomatic systems out of nothing.  They axiomatised
pre-existing theories.

Axiomatising a theory is just one way of exploring it.

Yes, axiomatization is to some extent a side-show. We know what it is
that we want mathematics to be, and we try to find the axioms that
lead to those conclusions. Not qualitatively different from any other
form of induction (of the epistemological rather than mathematical
kind). Still, different axioms or meta-mathematics give subtly
different results, not to mention are as different to work with as
assembler and haskell. There are no alephs if you start from a
constructive basis, for instance.

I'm not sure what 'Axiomatising a theory is just one way of exploring
it' means. One does not axiomatize a single theory; that would be
trivial (A is true because that's what I define A to be). One
constructs a single set of axioms from which a nontrivial set of
theorems follow.

The way I'd put it is that axiomatization is about being explicit in
what it is that you assume, trying to minimize that, and being
systematic about what conclusions that forces you to embrace.

Could you be more precise as to how I am 'completely mistaken'? I
acknowledge that my views are outside the mainstream, so it's no news
to me that many would think so, but it would be nice to know what I'm
arguing against in this thread precisely.
Again, you are completely mis-representing the situation.  In my
experience, most mathematicians (I'm not talking about undergraduate
students here) do not see the axioms as the root of the mathematics
that they do.  Formal systems are just one way to explore mathematics.
 Of course they can in some cases be very useful and enlightening.

It's your word versus mine, I suppose.
As for inductive reasoning, I really can't understand your point.  Of
course mathematicians use inductive reasoning all the time.  Where do
you think the Riemann Hypothesis comes from? Or Fermat's last theorem?
 Do you think that mathematicians prove results before they even think
about them?  On the other hand, a result needs to be proved to be
accepted by the mathematical community, and inductive reasoning is not
valid in proofs.  That's in the nature of mathematics.

We mean something different by the term, it seems. What you describe, I
would call intuition. Which is indeed very important in mathematics,
and indeed no substitute for deduction. By induction, I mean the
process of reducing particular facts/observations/theorems to a more
compact body of theory/axioms that imply the same. In a way, it's the
inverse of deduction (seeing which body of conclusions follows from a
given set of axioms).

It's telling that on the one hand you criticise mathematicians for not
questioning the "axioms which are thrown at them", on the other hand
you feel able to discard a perfectly fine piece of mathematics, that
of the study of transfinite numbers, because it doesn't fit nicely
with traditional views.  The fact is that at the end of the 19th
century mathematics had reached a crisis point.

It is a shame you both fail to specify by what metric it is a
perfectly fine piece of mathematics, and yet more egregiously, put
words into my mouth as to what I think is wrong with it. That makes
for slow and painful debating.

My objection to transfinite analysis is that it is not scientific, in
the sense that I judge most parts of mathematics to be. I am not
questioning its deductive validity; that is something mathematicians can
generally be trusted with. My contention is that the axioms that give
rise to transfinite analysis are of the same 'validity' as any random
set of axioms you could pull from a random number generator. All sets
of axioms have implications, but we don't study all possible sets of
axioms. Studying a random set of axioms leads to an arbitrary number
of nonsensical results; nonsensical in the everyday usage of the
word, and nonsensical in a philosophical sense, as not relating to any
sense-impressions, or synthetic propositions.

Transfinite analysis does not give any results of any relevance that
I'm aware of, but I'd love to be proven wrong. The fact that we do get
this cancerous outgrowth of implications called transfinite analysis
is a hint that these axioms are borked; not a beautiful view on a
world of truth beyond our senses that only mathematics can give us
(again, in my minority opinion). I'd love to debate you as to where
exactly I suspect things went wrong, but it's a lengthy story, and it's
really not the right place I suppose; nor the right time, I have to
cook.
Only a minority of mathematicians have an interest in "alephs", as you
call them.

I know.
IMHO, the mathematics they do is perfectly valid.  The
exploration of the continuum hypothesis has led to the creation of
very powerful mathematical techniques and gives an insight into the
very foundations of mathematics.

Like I said, I don't question its deductive validity. As for providing
insight into the deductive process; probably, but so do Sudokus, and
I don't see them being state-sponsored across the globe. Any self-
created puzzle will do for that purpose; and I suspect the same time
spent on real puzzles has the same effect, plus more.
Again, on the one hand you criticise
mathematicians for not questioning the axioms they work with, but
those who investigate the way these axioms interact you accuse of
"mental masturbation".

In my, admittedly outsider, view of things, transfinite analysts are
rather sad deduction machines. I'd be delighted if you could show me
one that has done some kind of reflection as to why they so fervently
keep chasing the ghosts that the likes of Hilbert and Russell conjured
up for them, or that has even bothered to expose themselves to the
mockery that the likes of Gauss would have showered upon them (or
Feynman, for a more recent but imperfect analog).
 

Terry Reedy

Arguably, the most elegant thing to do is to define integer division
and remainder as a single operation;

It actually is, as quotient and remainder are calculated together. The
microprocessors I know of expose this (as does Python). 'a divmod b'
puts the quotient in one register and the remainder in another. If you
ask for just one of the two values, both are calculated and one is
grabbed while the other is returned.
which is not only the logical
thing to do mathematically, but might work really well
programmatically too.

The semantics of Python don't really allow for this, though. One could
have:

d, r = a // b
(3, 1)

With CPython, int.__divmod__ lightly wraps and exposes the processor
operation.
But it wouldn't work that well in composite expressions; selecting the
right tuple index would be messy and a more verbose form would be
preferred.

That is why we have divmod().

In both cases, I believe CPython calls int.__divmod__ (or the lower
level equivalent) to calculate both values, and one is returned while
the other is ignored. It is the same when one does long division by hand.
However, performance-wise it's also clearly the best
solution, as one often needs both output arguments and computing them
simultaneously is most efficient.

As indicated above, there is really no choice but to calculate both at
once. If one needs both a//b and a%b, one should explicitly call divmod
once and save (name) both values, instead of calling it implicitly twice
and tossing half the answer each time.
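Terry's point about both values coming from a single division is easy to verify in the interpreter; a minimal illustration:

```python
a, b = 10, 3

# divmod() returns quotient and remainder from a single division,
# mirroring what the hardware divide instruction produces.
q, r = divmod(a, b)
print(q, r)  # 3 1

# The same values obtained via the separate operators:
assert (q, r) == (a // b, a % b)

# The invariant tying the three operations together:
assert a == q * b + r
```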
 

rusi


Eelco

<deja-vu>
We keep having these debates -- so I wonder how off-topic it is...
And so do famous CSists: http://research.microsoft.com/en-us/um/people/gurevich/opera/123.pdf
</deja-vu>

Well, you are right, there are some deep links here. My view of what
is wrong with mainstream mathematics is its strange interpretation of
the semantics of classical logic. (And I don't think any other schools
get it quite right either; I think finitists may avoid the mistakes of
others, but are rightfully accused of being needlessly restrictive,
for instance.)

This is best illustrated by means of the principle of explosion. It
rests on assuming a contradiction, and then assigning rather peculiar
semantics to it. What is typically left unstated are the semantics
of symbol lookup, but apparently it is implicitly understood one can
pick whatever value upon encountering a contradicting symbol. There is
no well defined rule for the lookup of a twice-defined symbol. Of
course the sane thing to do, to a mind grown up around computer
languages, upon encountering a twice defined symbol, is not to
continue to generate deductions from both branches, but to throw an
exception and interrupt the specific line of reasoning that depends on
this contradicting symbol right then and there.

Conceptually, we can see something is wrong with these undefined
semantics right away. A logical system that allows you to draw
conclusions as to where the pope shits from assertions about natural
numbers could not more obviously be broken.

If you don't have this broken way of dealing with contradictions, one
does not have to do one of many silly and arbitrary things to make
infinity work, such as making a choice between one-to-one
correspondence and subset relations for determining the cardinality of
a set; one can simply admit the concept of infinity, while useful, is
not consistent, keep the contradiction well handled instead of having
it explode in your face (or explode into the field of transfinite
analysis; a consequence of 'dealing' with these issues by rejecting the
intuitively obviously true relation between subset relations and
cardinality), and continue reasoning with the branches of your
argument that you are interested in.

In other words, what logic needs is a better exception-handling
system, which completes the circle with programming languages quite
nicely. :)
 

Robert Kern

So this was *one* person making that claim?

I understand that, in general, mathematicians don't have much need for a
remainder function in the same way programmers do -- modulo arithmetic is
far more important. But there's a world of difference between saying "In
mathematics, extracting the remainder is not important enough to be given
a special symbol and treated as an operator" and saying "remainder is not
a binary operator". The first is reasonable; the second is not.

The professional mathematicians that I know personally don't say that "remainder
is not a binary operator". They *do* say that "modulo is not an operator" in
mathematics just because they have reserved that word and the corresponding
notation to define the congruence relations. So for example, the following two
statements are equivalent:

42 = 2 mod 5
2 = 42 mod 5

The "mod 5" notation modifies the entire equation (or perhaps the = sign if you
like to think about it like that), not the term it is immediately next to.
Python's % operator is a binary operator that binds to a particular term, not
the whole equation. The following two are not equivalent statements:

42 == 2 % 5
2 == 42 % 5
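Evaluating both expressions in Python makes the asymmetry concrete:

```python
# % binds to the term it follows, not to the whole equation:
print(42 == 2 % 5)   # False: 2 % 5 is 2, and 42 != 2
print(2 == 42 % 5)   # True:  42 % 5 is 2
```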

It's mostly kvetching on their part that programming language designers
misunderstood the notation and applied the name to something that is confusingly
almost, but not quite, the same thing. They aren't saying that you couldn't
*define* such an operator; they would just prefer that we didn't abuse the name.
But really, it's their fault for using notation that looks like an operator.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 

rusi

In other words, what logic needs is a better exception-handling
system, which completes the circle with programming languages quite
nicely. :)

Cute... but dangerously recursive (if taken literally)
Remember that logic is the foundation of programming language
semantics.
And your idea (suggests) that programming language semantics be made
(part of) the foundation of logic.

Of course I assume you are not being very literal.
Still the dangers of unnoticed circularity are often... well
unnoticed :)

eg. McCarthy gave the semantics of lisp in lisp -- a lisp interpreter
in lisp is about a page of code.

It probably was a decade before someone realized that the same
semantics would 'work' for lazy or applicative (eager) order
evaluation.

This then begs the question what exactly it means for that semantics
to 'work'...
 

Eelco

The professional mathematicians that I know personally don't say that "remainder
is not a binary operator". They *do* say that "modulo is not an operator" in
mathematics just because they have reserved that word and the corresponding
notation to define the congruence relations. So for example, the following two
statements are equivalent:

   42 = 2 mod 5
   2 = 42 mod 5

The "mod 5" notation modifies the entire equation (or perhaps the = sign if you
like to think about it like that), not the term it is immediately next to.
Python's % operator is a binary operator that binds to a particular term, not
the whole equation. The following two are not equivalent statements:

   42 == 2 % 5
   2 == 42 % 5

It's mostly kvetching on their part that programming language designers
misunderstood the notation and applied the name to something that is confusingly
almost, but not quite, the same thing. They aren't saying that you couldn't
*define* such an operator; they would just prefer that we didn't abuse the name.
But really, it's their fault for using notation that looks like an operator.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco

Thanks Robert, I think you cut right through the confusion there.

To tie it back in with Python language design; all the more reason not
to opt for pseudo-backwards compatibility. If Python wants a remainder
function, call it 'remainder'. Not 'rem', not 'mod', and certainly not
'%'. It's the more Pythonic way: a self-describing name, rather than
poorly defined or poorly understood cryptic notation.
 

rusi

It might make more sense to programmers if you think of it as written:

42 = 2, mod 5
2 = 42, mod 5

ChrisA

For the record I should say that the guy who taught me abstract
algebra, said about as much:
He said that the notation
a == b mod n
should be written as
a ==n b
(read the == as 3 horizontal lines and the n as a subscript)
 

Eelco

Cute... but dangerously recursive (if taken literally)
Remember that logic is the foundation of programming language
semantics.
And your idea (suggests) that programming language semantics be made
(part of) the foundation of logic.

Of course I assume you are not being very literal.
Still the dangers of unnoticed circularity are often... well
unnoticed :)

Well, logic as a language has semantics, one way or the other. This
circularity is a general theme in epistemology, and one that fits well
with the view of deduction-induction as a closed-loop cycle. Knowledge
does not flow from axioms to theorems; axioms without an encompassing
context are meaningless symbols. It's a body of knowledge as a whole
that should be put to the test; the language and the things we express
in it are inseparable. (The not-quite-famous-enough Quine in a
nutshell.)

The thing is that our semantics of logic are quite primitive; cooked
up in a time when people spent far less time thinking about these
things, and had a far narrower base of experience to draw ideas
from. They didn't have the luxury of already having grown up studying a
dozen formal languages before embarking on creating their own. In
other words, the semantics of logic is a legacy piece of crap, but an
insanely firmly entrenched one.

I mean, there are many sensible ways of defining the semantics of
conflicting symbols, but you'll find on studying these things that the
guys who (often implicitly) laid down these rules didn't even seem to
have consciously thought about them. Not because they were stupid; far
from it, but for similar reasons as to why the x86 architecture wasn't
conceived of the day after the invention of the transistor.
 

Jussi Piitulainen

rusi said:
For the record I should say that the guy who taught me abstract
algebra, said about as much:
He said that the notation
a == b mod n
should be written as
a ==n b
(read the == as 3 horizontal lines and the n as a subscript)

I think the modulus is usually given in parentheses and preferably
some whitespace: in text, a == b (mod n), using == for the triple bar,
and in a display:

a == b (mod n).

I think even a == b == c (mod n), without repeating the modulus every
time. (A subscript sounds good if the modulus is simple. Perhaps it
often is.)

That way it does not even look like a binary operator. I think Graham,
Knuth, and Patashnik play it nicely in their book Concrete
Mathematics, where they have both mods: the congruence relation, and
the binary operator. The book is targeted for computer scientists.

As if mathematicians didn't use the exact same notations for different
purposes, even in the same context, and often with no problems
whatsoever as long as all parties happen to know what they are talking
about. Often the uses are analogous, but at least the two main uses of
(x,y) differ wildly. (So Knuth uses (x .. y) for the interval, but he
is a programmer.)
 

Terry Reedy

Better, using ASCII text, would be
42 =mod5 2
where =mod is a parameterized equivalence relation that is coarser than
= (which is =mod-infinity). divmod(a, inf) = 0, a.
=mod1 is the coarsest relation in that it makes every number
equivalent. divmod(a, 1) = a, 0.
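The two limiting cases can be checked in CPython (using math.inf to stand in for the infinite modulus):

```python
import math

a = 5
# Modulus infinity: the quotient vanishes and the remainder is a itself.
print(divmod(a, math.inf))  # (0.0, 5.0)
# Modulus 1: every integer divides evenly, so the remainder is always 0.
print(divmod(a, 1))         # (5, 0)
```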
For the record I should say that the guy who taught me abstract
algebra, said about as much:
He said that the notation
a == b mod n
should be written as
a ==n b
(read the == as 3 horizontal lines and the n as a subscript)

The 3 horizontal line symbol is often used for equivalence relations
other than =.
 

alex23

Eelco said:
To tie it back in with Python language design; all the more reason not
to opt for pseudo-backwards compatibility. If python wants a remainder
function, call it 'remainder'. Not 'rem', not 'mod', and certainly not
'%'.

Good luck with the PEP.
It's the more Pythonic way: a self-describing name, rather than
poorly defined or poorly understood cryptic notation.

"Although practicality beats purity."

I'm still utterly agog that anyone finds the operator % confusing.
 

MRAB

Python has "def", "del", "int", "str", "len", and so on. "rem" or "mod"
(Ada has both, I believe) would be in keeping with the language.
Good luck with the PEP.


"Although practicality beats purity."

I'm still utterly agog that anyone finds the operator % confusing.

In financial circles it could be an operator for calculating
percentages, eg. "5 % x" would be 5 percent of x.

It's an oddity, but an established one. :)
 

Chris Angelico

In financial circles it could be an operator for calculating
percentages, eg. "5 % x" would be 5 percent of x.

It's an oddity, but an established one. :)

And I would be most sorry to see % renamed to mod in Python.

"Hello, %s! My favourite number is %d." mod ("Fred",42) # This just
looks wrong.
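Both readings of % coexist in Python today; a short illustration:

```python
# As the remainder operator on numbers:
print(17 % 5)  # 2

# As the string-interpolation operator:
print("Hello, %s! My favourite number is %d." % ("Fred", 42))
# Hello, Fred! My favourite number is 42.
```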

ChrisA
 

alex23

And I would be most sorry to see % renamed to mod in Python.

"Hello, %s! My favourite number is %d." mod ("Fred",42)   # This just
looks wrong.

Finally we can give this operator a more fitting name - I propose
'inject' - and put an end to this insane desire to leverage off pre-
existing knowledge of other languages.

Furthermore, I suggest that no two languages should ever have
identical semantics, just to avoid potential confusion.

New concepts for all!
 

Eelco

Finally we can give this operator a more fitting name - I propose
'inject' - and put an end to this insane desire to leverage off pre-
existing knowledge of other languages.

Furthermore, I suggest that no two languages should ever have
identical semantics, just to avoid potential confusion.

New concepts for all!

Don't get me started on that one. It's that I never work with strings...

'Leverage off pre-existing knowledge'... I would hardly call the
particular names of functions the knowledge about a language.

The only argument that bears any weight with me is backwards
compatibility with itself. Pseudo-backwards compatibility with other
languages, I couldn't care less for.
 
