learnpython.org - an online interactive Python tutorial


Tim Chase

No, the int 1 should be cast to a string, and the result
should be the string '11'.

Oh, come on...clearly if you're adding mixed types, there's
significance to the difference, so the result should obviously be
the complex number

(1+1j)

Or maybe it should be the tuple

(1,1)

Or did I mean the list

[1,1]

It's all so obvious... :)

I didn't mind the auto-promotion (as much) in VB6 when I had an
explicit concat operator

1 & "1" ' returns "11"

vs

1 + "1" ' returns 2
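For what it's worth, Python 3 sidesteps the guessing game entirely; a small sketch of both intents spelled out explicitly:

```python
# Python refuses to guess what mixed-type "+" means:
try:
    1 + "1"
except TypeError as e:
    print("TypeError:", e)

# The programmer spells out the intent instead:
print(str(1) + "1")    # concatenation: '11'
print(1 + int("1"))    # addition: 2
```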

-tkc
 

Dennis Lee Bieber

I like what you have done. Was it deliberate that your site teaches
python 2.x code rather than 3.x?

Pardon? Just who were you responding to? I'm pretty sure /I/ don't
have a site teaching Python code (though I am still running 2.5.2; I'm too
lazy to spend a week finding out if all the 3rd-party modules I have
loaded have been updated).
 

harrismh777

Chris said:
Wow, someone else who knows REXX and OS/2! REXX was the first bignum
language I met, and it was really cool after working in BASIC and
80x86 assembly to suddenly be able to work with arbitrary-precision
numbers!

Yes, my "big num" research stuff was initially done in REXX, on VM/CMS.
I later ported my libraries over to OS/2 and continued with that well
into the '90s, when I discovered Unix and 'bc'. Many folks are not
aware that 'bc' also has arbitrary precision floating point math and a
standard math library. It is much faster than REXX because the libraries
are written in C and, unlike REXX's, are not implemented in the
interpreter. The syntax of 'bc' is C-like, which is its only
down-side for new students who have never had any language training.
REXX was a great business language, particularly when CMS Pipelines was
introduced.

Just for fun, and a trip down memory lane, you can take a look at some
of my ancient REXX code... this code is a REXX math library designed to
be used from the command line, written in REXX. The primary scientific
functions were implemented, as well as some code to calculate PI...
most of the algorithms can be ported to 'bc' easily, but the 'bc'
algorithms will run much faster, of course.

Back in the day, REXX was the 'new BASIC'

.... now I use Python.
 

Steven D'Aprano

Heiko said:
The difference between strong typing and weak typing is best described
by:

Python 2.6.5 (r265:79063, Jun 12 2010, 17:07:01)
[GCC 4.3.4 20090804 (release) 1] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 + '1'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
Yes. And you have managed to point out a serious flaw in the overall
logic and consistency of Python, IMHO.

Strings should auto-type-promote to numbers if appropriate.

If that's a "serious" flaw, it's a flaw shared by the vast majority of
programming languages.

As for the question of "consistency", I would argue the opposite: that
auto-promoting strings to numbers arguably is useful, but that is what is
inconsistent, not the failure to do so.

Consider...

"one" + 1

Should this also promote "one" to integer 1? If so, what about "uno" and
"un" and "ein" and "один"?

If not, why privilege one *string* representation of the number 1 over
other *string* representations of the number 1?

How about this?

"[1, 2, 3]" + [4]

Should that auto-promote to a list as well? If not, why not? Why does
your argument in favour of auto-promotion to int also not apply to auto-
promotion to list?

What about this?

"[2, 40, 10, 3]".sort()

Should that auto-promote to list? Should the result of sorting be
[2, 3, 10, 40] or ['10', '2', '3', '40']?

What about:

"[2, 4, 1, 3]".index("[")

Should that call the *string* index method, or the *list* index method?
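In Python the ambiguity never arises, because the types stay apart and the conversion is explicit; ast.literal_eval is the usual safe way to turn such a string into the list it depicts (a sketch):

```python
import ast

s = "[2, 40, 10, 3]"

# As a string it is unambiguous: .index is the str method.
print(s.index("["))                 # 0

# To treat it as a list, convert explicitly first.
lst = ast.literal_eval(s)           # safely parse the list literal
lst.sort()
print(lst)                          # [2, 3, 10, 40]

# Sorting the pieces *as strings* gives the other answer:
print(sorted(s.strip("[]").split(", ")))   # ['10', '2', '3', '40']
```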


If you want to argue that auto-promoting of strings to numbers is
convenient for lazy programmers who can't be bothered keeping the
distinction between strings and numbers straight, or for those who can't
face the extra typing required to call int() but don't mind the
inefficiency of the language repeatedly converting numbers to and from
strings in the background, then I'd agree that the "convenience" argument
is an argument in favour of weak-typing. (Not necessarily a *good*
argument, but it's an argument.)

But I hope it is clear that "consistency" is not an argument in favour of
weak-typing. As far as I know, no language applies weak-typing broadly to
all types, and if a language did, it would be fraught with problems and
traps.


[...]
My feelings about this are strongly influenced by my experiences with
the REXX language on IBM's SAA systems--- OS/2 and VM/CMS. In REXX
everything is a string... everything.

This is much like my experience with Apple's Hypertalk, where the only
data structure is a string. I'm very fond of Hypertalk, but it is hardly
designed with machine efficiency in mind. If you think Python is slow
now, imagine how slow it would be if every expression had to be converted
from a number back into a string, and vice versa, after every operation:

x = str(int("1") + int("2"))
y = str(int("9")/int("3"))
z = str(int(x) - int(y))
flag = str(int(z) == int("0"))

only implicitly, by the interpreter.
 

Chris Angelico

This is much like my experience with Apple's Hypertalk, where the only
data structure is a string. I'm very fond of Hypertalk, but it is hardly
designed with machine efficiency in mind. If you think Python is slow
now, imagine how slow it would be if every expression had to be converted
from a number back into a string, and vice versa, after every operation:

x = str(int("1") + int("2"))
y = str(int("9")/int("3"))
z = str(int(x) - int(y))
flag = str(int(z) == int("0"))

only implicitly, by the interpreter.

Except that it wouldn't bother with a native integer implementation,
would it? With a string-is-bignum system, it could simply do the
arithmetic on the string itself, with no conversions at all.
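As a rough sketch of the idea (my own illustration; no claim that any REXX implementation actually works this way), schoolbook addition can be carried out directly on decimal strings:

```python
def add_decimal_strings(a, b):
    """Schoolbook addition on non-negative decimal strings,
    digit by digit from the right, carrying as needed."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal_strings("999", "1"))                       # 1000
print(add_decimal_strings("12345678901234567890", "987654321"))
```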

Re harrismh's code: For that sort of work, I used and still use the
REXXTry program that comes with OS/2 (written, I believe, by Mike
Cowlishaw), with a modified input routine that gives readline-style
capabilities. Dragging this vaguely back on topic, the end result is
rather similar in feel to IDLE or Hilfe (Pike's interactive
interpreter).

Chris Angelico
 

Cameron Simpson

[...]
| Yes, my "big num" research stuff was initially done in REXX, on
| VM/CMS. I later ported my libraries over to OS/2 and continued with
| that well into the '90s, when I discovered Unix and 'bc'. Many
| folks are not aware that 'bc' also has arbitrary precision floating
| point math and a standard math library.

Floating point math? I thought, historically at least, that bc is built
on dc (arbitrary precision integer math, reverse polish syntax) and that
consequently bc uses fixed point math rather than floating point.

Cheers,
--
Cameron Simpson <[email protected]> DoD#743
http://www.cskk.ezoshosting.com/cs/

From sci.physics:
(e-mail address removed):
The only problem is, how do you send a message from Earth to Mars
instantly? Does anyone have any ideas about where we can start?
John Baez <[email protected]>:
Just use a coordinate system in which the point at which the message is
received has the same t coordinate as the point at which the message was sent.
 

harrismh777

Cameron said:
| folks are not aware that 'bc' also has arbitrary precision floating
| point math and a standard math library.

Floating point math? I thought, historically at least, that bc is built
on dc (arbitrary precision integer math, reverse polish syntax) and that
consequently bc uses fixed point math rather than floating point.

My bad... I don't mean under-the-covers... I mean that the user may
calculate arbitrary precision floating arithmetic ... bc keeps track of
the decimal point and displays the number of digits the user specifies;
arbitrary precision calculator. (loose language, sorry)

On a *nix system (Mac OS X, Linux), run this to get 1000+ digits of PI:

time echo "scale = 1010; 16 * a(1/5) - 4 * a(1/239)" |bc -lq



scale sets the precision; -l loads the math library (which provides a(), the arctangent) and -q suppresses the startup banner.
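For comparison, the same Machin formula can be sketched in Python using plain scaled integers (my translation of the bc one-liner, not code from the thread):

```python
def arctan_inv_scaled(x, one):
    """arctan(1/x) * one with integer arithmetic only:
    arctan(1/x) = 1/x - 1/(3*x**3) + 1/(5*x**5) - ..."""
    total = term = one // x
    n, x2, sign = 1, x * x, 1
    while term:
        term //= x2
        n += 2
        sign = -sign
        total += sign * term // n
    return total

def machin_pi_digits(digits):
    """pi * 10**digits as an integer, via 16*a(1/5) - 4*a(1/239)."""
    one = 10 ** (digits + 10)        # 10 guard digits for rounding slop
    pi = 16 * arctan_inv_scaled(5, one) - 4 * arctan_inv_scaled(239, one)
    return pi // 10 ** 10            # drop the guard digits

print(machin_pi_digits(50))          # 31415926535897932384...
```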
 

harrismh777

Steven said:
If that's a "serious" flaw, it's a flaw shared by the vast majority of
programming languages.

Yes, agreed.
As for the question of "consistency", I would argue the opposite: that
auto-promoting strings to numbers arguably is useful, but that is what is
inconsistent, not the failure to do so.

I see your point... but I'll push back just a little, in a minute...
Consider...

"one" + 1

"[1, 2, 3]" + [4]

"[2, 40, 10, 3]".sort()

Yes, maybe problems all. Again, I see the point of your argument,
and I do not necessarily disagree...

I've been giving this some more thought. From the keyboard, all I am
able to enter are character strings (not numbers). Presumably these are
UTF-8 strings in python3. If I enter the character string 57 then
python converts my character string input and returns a reference to
object 57. If I enter the keyboard character string 34567 then the
python interpreter converts my character string input into an object
(creates a new object) (34567) returning a reference to that object (a
type int). If I enter the keyboard character string 3.14 the python
interpreter creates a float object (converts my string into a float) and
returns a reference to the object. In any event, keyboard character
strings that just happen to be numbers, are converted into int or float
objects by the interpreter, from the keyboard, automatically.
My idea for consistency is this: since the interpreter converts int
to float, and float to imaginary (when needed), then it does (at least
in a limited way) type promoting. So, it is consistent with the input
methods generally to promote numeric string input to int --> float -->
imaginary, as needed, therefore it is also consistent to promote a
numeric string (that which I used to call a REXX number) into an int -->
float --> imaginary, as implied by the statement(s). I am not asking for
weak typing generally... nor am I thinking that all promotions are a
good thing... just numeric string to int --> float --> imaginary when it
makes sense. In any event, the programmer should be able to override the
behavior explicitly. On the other hand, I do see your point regarding
performance. Otherwise,...

... type promotions really would not cause any more confusion for
programmers who now know that int will be converted to float and use
that knowledge conveniently....

I do believe that explicit is better than implicit (generally)...
but not in this case. I am noticing a pattern in some of the responses,
and that pattern is that some folks would find this appealing if not
overtly convenient. The question IS, which I am not able to answer at
this point, whether the convenience would actually cause other
difficulties that would be worse...?


kind regards,
m harris
 

Steven D'Aprano

Except that it wouldn't bother with a native integer implementation,
would it? With a string-is-bignum system, it could simply do the
arithmetic on the string itself, with no conversions at all.

I can assure you that Hypertalk had no BigNum system. This was in the
days of Apple Mac when a standard int was 16 bits, although Hypertalk
used 32 bit signed long ints.

But a text-string based bignum would be quite inefficient. Consider the
relatively small number 256**100. Written out in a string it requires 241
digits; at one byte per digit that's 241 bytes. Python stores longints
in base 256, which requires 100 bytes (plus some overhead because it's an
object):
>>> import sys
>>> sys.getsizeof(256**100)
122
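The exact byte counts vary by build (the 122 above is from a 32-bit Python), but the comparison is easy to reproduce on any modern Python 3:

```python
import sys

n = 256 ** 100                 # == 2**800, a "relatively small" bignum
print(len(str(n)))             # 241 decimal digits -> 241 bytes as text
print(sys.getsizeof(n))        # the int object itself is much smaller
```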

I suppose an implementation might choose to trade off memory for time,
skipping string -> bignum conversions at the cost of doubling the
memory requirements. But even if I grant you bignums, you have to do the
same for floats. Re-implementing the entire floating point library is not
a trivial task, especially if you want to support arbitrary precision
floats.
 

Chris Angelico

I suppose an implementation might choose to trade off memory for time,
skipping string -> bignum conversions at the cost of doubling the
memory requirements. But even if I grant you bignums, you have to do the
same for floats. Re-implementing the entire floating point library is not
a trivial task, especially if you want to support arbitrary precision
floats.

Or just arbitrary precision decimal strings, which aren't "floats" at
all. But to be honest, I've never looked at any implementation of REXX
(and the open source implementations do seem to be inferior to IBM's
OS/2 implementation, I think - haven't done any empirical testing
though), so I can't say how it's done. But it seems to my small
understanding that it'd be simpler to just work with the base 10
string than to convert it to base 256 or base 2**32 or something, just
because you skip the conversions. Obviously this makes REXX a poor
choice for heavy HEAVY computation, but it's potentially faster for
things that involve a little computation and a lot of string
manipulation (which is where REXX does well).

Chris Angelico
 

Steven D'Aprano

I've been giving this some more thought. From the keyboard, all I am
able to enter are character strings (not numbers). Presumably these are
UTF-8 strings in python3. If I enter the character string 57 then
python converts my character string input and returns a reference to
object 57.

That depends on where you enter it. If you enter it in between a pair of
quotation marks, no conversion is done.

But I'll grant you that there are many places where the user/programmer
communicates to the Python interpreter, and can generally only do so via
characters or bytes. But that's hardly unique to numbers -- a dict is a
rich compound data structure, but you can only create dicts by entering
them as characters too. That's what we call syntax.

But the point is, once you have entered such a dict into the Python
virtual machine:

d = {'spam': lambda x: x**2 + 3*x - 5, 42: ['a', 'b', 23, object(), []],
17: (35, 19, True), 'x': {'cheese': 'cheddar'},
'y': {'animal': 'aardvark'}, 'z': None}

Python no longer needs to convert it or its components back and forth
between a string and the in-memory data structure. Converting to or from
strings tends to be done in only a very few places:

* compiling from source code, including eval and exec;
* the interactive interpreter;
* some, but not all, serialization formats (e.g. JSON and YAML);
* printing the object repr;
* explicitly converting to string;

which is a big win for both speed and memory. You'll note that every
single one of those is a special case of Input/Output.


[...]
My idea for consistency is this: since the interpreter converts int
to float, and float to imaginary (when needed), then it does (at least
in a limited way) type promoting.

You are conflating lexing/parsing/compiling code with executing code.
Just because you need to have a plumber install your water pipes when
building a house, doesn't make it either practical or desirable to call a
plumber in every time you want to turn a tap on.

I will grant that there are situations where a more implicit type
conversion may be useful, or at least convenient. Perl does what you
want, promoting strings to ints (and vice versa?) depending on what you
try to do with them. Hypertalk, part of Apple's long defunct but not
forgotten Hypercard, was deliberately designed to help non-programmers
program. And I've sometimes experimented with config file formats that do
similar things. So it's not that I think that weak typing in the REXX/
Hypertalk/Perl sense is always wrong, only that it's wrong for Python.

On the other hand, both Flash and Javascript also do weak typing, and the
results in practice can be confusing and fraught with problems:

http://nedbatchelder.com/blog/200708/two_weak_typing_problems.html

And I think this quote from Peter Wone is amusing:

"Weak typing such as is used in COM Variants was an early attempt to
solve this problem, but it is fraught with peril and frankly causes more
trouble than it's worth. Even Visual Basic programmers, who will put up
with all sorts of rubbish, correctly pegged this as a bad idea and
backronymed Microsoft's ETC (Extended Type Conversion) to Evil Type Cast."

http://stackoverflow.com/questions/597664/when-should-weak-types-be-discouraged


It seems to me that weak typing is a Do What I Mean function, and DWIM is
a notoriously bad anti-pattern that causes far more trouble than it is
worth. I'm even a little suspicious of numeric coercions between integer
and float. (But only a little.)
 

Dave Angel

My bad... I don't mean under-the-covers... I mean that the user may
calculate arbitrary precision floating arithmetic ... bc keeps track of
the decimal point and displays the number of digits the user specifies;
arbitrary precision calculator. (loose language, sorry)

On a *nix system, Mac OSx, Linux, run this to get 1000+ digits of PI:

time echo "scale = 1010; 16 * a(1/5) - 4 * a(1/239)" |bc -lq



scale sets the precision, -lq loads the math library arctan() quiet.

Wouldn't it be shorter to say:

time echo "scale = 1010; 4 * a(1)" |bc -lq

DaveA
 

jmfauth

I've been giving this some more thought. From the keyboard, all I am
able to enter are character strings (not numbers). Presumably these are
UTF-8 strings in python3.  If I enter ...


In Python 3, input() returns a str, a sequence/table/array of
Unicode code point(s). No more, no less.

Similar to Python 2 where raw_input() returns a sequence/table/array
of byte(s). No more, no less.

jmf
 

harrismh777

Wouldn't it be shorter to say:

time echo "scale = 1010; 4 * a(1)" |bc -lq

Well, you can check it out by doing the math... (it's fun...)

.... you will notice that 'time' is called first, which on *nix systems
clocks the processing, breaking out the system and user times... right?

.... so try these 10,000-digit comparisons on your *nix system:

time echo "scale = 10010; 16 * a(1/5) - 4 * a(1/239)" |bc -lq

time echo "scale = 10010; 4 * a(1)" |bc -lq

(those will take a few minutes, if your processor is running 2-3 GHz...)

.... then try these 100,000-digit runs:

time echo "scale = 100010; 16 * a(1/5) - 4 * a(1/239)" |bc -lq

time echo "scale = 100010; 4 * a(1)" |bc -lq

(Those will take some time, probably less than 20 - 30 minutes... )


After your time comparisons, tell me whether you want to use a(1)*4 ??

The formula I'm using here is John Machin's; Leonhard Euler used
arctan formulas like it for paper 'n pencil arithmetic... because the
arctan(n) infinite series converges much more quickly (by orders of
magnitude) for values of (n) < 1. (the smaller the (n) the better)
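The convergence claim is easy to quantify: each term of the arctan(1/x) series shrinks by a factor of x**2, so every term buys about 2*log10(x) more decimal digits. A back-of-envelope sketch:

```python
import math

def terms_for(x, digits):
    """Taylor-series terms of arctan(1/x) needed for ~digits decimal
    digits: term n shrinks like x**-(2n+1), i.e. 2*log10(x) digits/term."""
    return math.ceil(digits / (2 * math.log10(x)))

print(terms_for(5, 1000))    # 716 terms for a(1/5)
print(terms_for(239, 1000))  # 211 terms for a(1/239)
# a(1) is the worst case: its terms are 1/(2n+1), so on the order of
# 10**1000 terms would be needed -- hence nobody computes pi with 4*a(1).
```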

We can make the entire function run even faster by using smp and
splitting the 'a(1/5)' and 'a(1/239)' across two cores, having the
arctan series' running in parallel. This is important if your 'big num'
is going to be hundreds of thousands or millions of places. On my baby
Beowulf cluster I have played a bit with running this on two separate
systems and then bringing the result together in the end... fun for
playtime... interesting to see how many digits (in how much time) can be
achieved *without* using a super-computer....

You can also try these formulas for comparison's sake:

PI = 20 * a(1/7) + 8 * a(3/79)
or
PI = 8 * a(1/3) + 4 * a(1/7)
or
PI = 24 * a(1/8) + 8 * a(1/57) + 4 * a(1/239)
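All four identities can be spot-checked in double precision with math.atan (a quick sanity sketch, not a substitute for the bc runs):

```python
from math import atan, pi, isclose

# Each identity evaluated in ordinary double precision:
candidates = [
    16 * atan(1 / 5) - 4 * atan(1 / 239),                    # Machin
    20 * atan(1 / 7) + 8 * atan(3 / 79),                     # Euler
    8 * atan(1 / 3) + 4 * atan(1 / 7),
    24 * atan(1 / 8) + 8 * atan(1 / 57) + 4 * atan(1 / 239),
]
for value in candidates:
    print(value, isclose(value, pi))   # each agrees with pi
```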



Happy Easter, and have a slice of pie on me.... :)





kind regards,
m harris
 

harrismh777

Steven said:
It seems to me that weak typing is a Do What I Mean function, and DWIM is
a notoriously bad anti-pattern that causes far more trouble than it is
worth. I'm even a little suspicious of numeric coercions between integer
and float. (But only a little.)

I'm wondering about that as well... (a little)... I mean, maybe the way
to be really consistent (especially with the Zen of Python: explicit is
better than implicit) is that int --> float --> complex (imaginary)
promotion should not occur either!

I think folks would baulk at that though... big-time. :)


So, bottom line here... if my students want to get numbers into their
programs in 3.x then the correct way to handle the input() would be:

n = int(input("enter num > "))


... and then let the interpreter throw an exception if the input
cannot be type cast to int?
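Rather than letting the exception propagate, the conversion can also be wrapped in a retry loop. A minimal sketch (the input_fn parameter is my addition, purely to keep the helper testable):

```python
def read_int(prompt, input_fn=input):
    """Prompt until the text can be converted with int()."""
    while True:
        text = input_fn(prompt)
        try:
            return int(text)
        except ValueError:
            print("%r is not an integer, try again" % text)

# n = read_int("enter num > ")
```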


kind regards,
m harris
 

Terry Reedy

I'm wondering about that as well... (a little)... I mean, maybe the way
to be really consistent (especially with the Zen of Python: explicit is
better than implicit) is that int --> float --> complex (imaginary)
promotion should not occur either!

I think folks would baulk at that though... big-time. :)

Guido regards the number classes as subtypes of abstract number.
Given a==d, and b==e, he believes that after
c = a op b
f = d op e
then c == f should be true (in so far as possible).
This is why he wanted to change int division.
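Python 3's true division delivers exactly that equivalence; a quick check (floor division, //, remains the explicit integer spelling):

```python
a, b = 1, 2          # ints
d, e = 1.0, 2.0      # numerically equal floats: a == d, b == e
c = a / b            # true division in Python 3
f = d / e
print(c == f)        # True: both are 0.5
print(a // b)        # 0 -- floor division stays explicitly integral
```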

In other words, he wants Python math to pretty much imitate calculators,
on the basis that this is what most users expect and want.

This goes along with Python's general
polymorphism/genericity/duck-typing philosophy. It is no different from
the fact that one can write generic algorithms that give equivalent
answers for equivalent inputs of ordered collections or indexable sequences.
So, bottom line here... if my students want to get numbers into their
programs in 3.x then the correct way to handle the input() would be:

n = int(input("enter num > "))
Yes.

... and then let the interpreter throw an exception if the input cannot
be type cast to int?

Converted (not cast) to int or float or complex or anything else other
than str.
 

Gregory Ewing

harrismh777 said:
maybe the way
to be really consistent (especially with the Zen of Python, explicit is
better than implicit) that int --> float --> complex (imaginary) should
not occur either !

Applying parts of the Zen selectively can be dangerous.
Practicality also beats purity. I've used a language
where there was no automatic promotion from ints to
floats, and it was a pain in the backside.
 
