PyWart: The problem with "print"

Rick Johnson

This comment shows me that you don't understand the difference between
names, objects, and variables. May sound like a minor quibble, but
there are actually major differences between binding names to objects
(which is what python does) and variables (which is what languages like
C have). It's very clear Rick does not have an understanding of this
either.

Just because someone does not "prefer" this or that aspect of Python does not mean they don't understand it. I understand "implicit conversion to Boolean" just fine; however, I don't like it. Actually, I hate it! I think it's foolish. It was invented by people who would rather save a few keystrokes at the cost of writing cryptic code. There are many good reasons for saving keystrokes; implicit conversion to Boolean is NOT one of them.
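To be concrete, the construct at issue is the bare truth test, as opposed to an explicit comparison:

py> lst = []
py> if lst:              # implicit conversion to Boolean
...     print('non-empty')
...
py> if len(lst) > 0:     # the explicit spelling
...     print('non-empty')
...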

I make the same argument for "return". Some languages allow implicit return values from functions/methods. When writing a function in Ruby, the return statement is optional:

## START RUBY CODE ##

def foo
  "implicit return"
end

rb> puts foo
implicit return

## END RUBY CODE ##

This is one area where Python shines!

In Python, if you fail to use the return statement, then Python will return None, NOT some value that just happens to be the last line executed in the function -- Ruby breaks the law of least astonishment.
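Observe:

py> def foo():
...     "implicit last expression, but no return"
...
py> print(foo())
None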

The point is we must balance our selfish need to save keystrokes against the need of a reader to QUICKLY understand what he is reading. Python's explicit return statement satisfies both requirements, whereas the optional return value of Ruby does not. We don't want to be overly implicit, or overly explicit (as in the nightmare of "public static void foo", which is overkill, at least for a language as high-level as Python).
 
Devin Jeanpierre

In Python, if you fail to use the return statement, then Python will return None, NOT some value that just happens to be the last line executed in the function -- Ruby breaks the law of least astonishment.

Ruby comes from a tradition where this behavior is not astonishing.
Languages do not exist in a vacuum.

-- Devin
 
Rick Johnson

[...]
I cannot name a single modern programming language that does NOT have
some kind of implicit boolification.

Congrats: Again you join the ranks of most children who make excuses for their foolish actions along the lines of:

"Hey, they did it first!"

Well, the lemmings get what they deserve, I suppose.
 
rusi

Say that on a Haskell list, and they'll take it as a challenge. :)

Yes, all programming communities have blind-spots. The Haskell
community's is that Haskell is safe and safe means that errors are
caught at compile-time.

Unfortunately* the halting problem stands. When generalized to Rice's
theorem, it says that only trivial properties of programs are
algorithmically decidable:
http://mathworld.wolfram.com/RicesTheorem.html

And so the semantic correctness of a program -- a non-trivial property
-- is not decidable.

In short the Haskell dream is a pipe-dream**.

Discussed in more detail here: http://blog.languager.org/2012/08/functional-programming-philosophical.html.

Nevertheless I need to say: if a programmer comes to Python from
Haskell, he will end up being a better programmer than one coming from
C or Java or…


* actually fortunately considering that for most of us programming is
our job :)

** Haskellers would point out that there is agda which grows out of
Haskell and in agda one can encode arbitrary properties of types.
 
Grant Edwards

Would you say that doubling the testing period is a good thing or a
bad thing?

It could be a neutral thing (ignoring the costs involved).

I once read an article claiming that as you test (and fix) any
large, complex piece of software, you asymptotically approach a
certain fixed "minimum number of bugs" that's determined by the
system's overall architecture, design and implementation. Once you get
sufficiently close to that minimum, additional testing doesn't reduce
the number/severity of bugs -- it just "moves them around" by creating
additional bugs at the same rate you are eliminating old ones. When
you get to that point, the only way to significantly improve the
situation is to toss the whole thing out and start over with a better
system architecture and/or development model.

After having maintained a few largish pieces of software for well over
a decade, I'm fairly convinced that's true -- especially if you also
consider post-deployment maintenance (since at that point you're
usually also trying to add features at the same time you're fixing
bugs).
 
Mark Janssen

Whatever benefit there is in declaring the type of a function is lost due
to the inability to duck-type or program to an interface. There's no type
that says "any object with a 'next' method", for example. And having to
declare local variables is a PITA with little benefit.

Give me a language with type inference, and a nice, easy way to keep duck-
typing, and I'll reconsider. But until then, I don't believe the benefit
of static types comes even close to paying for the extra effort.

Okay, I'm going to straighten out you foo(l)s once and for all.

Python has seduced us all into lazy typing. That's what it is.
Manual type checking is obviously inferior to compiler type-checking.
This is what I was trying to tell you all with the post of re-vamping
the Object model.

Python, and I along with it, went towards this idea of a grand god
Object that is the father of everything, but it turned out to be the
wrong direction. Refer to my post on OOPv2.

The fact is, that none of us is close enough to God and the
programming art isn't evolved enough to try to accomplish some grand
generic object at the top of the ObjectModel. It just isn't. We were
better off closer to the machine. Automatic conversion from int to
long was good enough.
--
MarkJ
Tacoma, Washington

P.S. See also PythonThreeThousand on wikiwikiweb
<http://c2.com/cgi/wiki?WikiWikiWeb>
 
Rick Johnson

What prevents bugs is the skill of the people writing the code, not the
compiler. Compile-time static type checking is merely a tool, which has
costs and benefits. It is ludicrous to think that any one single tool, or
the lack of that tool, will make all the difference between working code
and non-working code.

Yes, just as ludicrous as thinking that dynamic languages have abolished the evil practice of "type checking".
Static type-checking is no better, or worse, for "critical code" than
dynamic type-checking. One language chooses to deal with some errors at
compile-time, others deal with them at run-time.

Wow, talk about ignoring the elephant in the room! I don't feel I need static typed languages for EVERY problem; however, I'm not foolish enough to believe that "compile-time type checking" and "run-time type checking" are even comparable. Oversimplification?
Either way, the
programmer has to deal with them in some way.
A static type system forces you to deal with a limited subset of errors,
"type errors", in one way only: by removing any execution paths in the
software which would assign data of type X to a variable of type Y. For
reasons of machine efficiency, that is often a good way to deal with such
errors. But a dynamic type system makes different trade-offs.
And of course, type errors are such a vanishingly small subset of all the
possible errors that might be made that, frankly, the difference in code
quality between those with static typing and those without is essentially
indistinguishable. There's no evidence that code written in static typed
languages is less buggy than code written in dynamic languages.

WOW! Your skill with the implicit Ad hominem is approaching guru status.

First you cleverly convert the base argument of this ongoing discussion from: "implicit conversion to boolean is bad", into the hot button topic of: "static vs dynamic typing".

In this manner you can ratchet up the emotion of your supporters by employing the same political "sleight of hand" used by countless political hacks: "Liberal vs Republican" ring a bell? When there are only two choices, the sheeple can be easily manipulated by the football game. Especially when the opposing sides have the same end-goal in mind. It's all a game of diversions, you idiots!

Then you go and make a blanket statement that "appears" to weigh the differences of the two styles fairly, when in fact what you've done is to falsely invalidate the opposition's entire argument based on a complete overstatement (not to mention that you stopped short of explaining the "trade-offs" in detail):
And of course, type errors are such a vanishingly
small subset of all the possible errors that might be
made that, frankly, the difference in code quality
between those with static typing and those without is
essentially indistinguishable.

Nice!

Well, then. You've slayed the enemy. If type errors are as rare as you claim, then by golly these crazy people who use static languages are really just fools. I mean, how could they not be? If they were as intelligent as YOU then they would see the truth!

Your attempts at sleight of hand are rather amusing. The whole point of this "implicit conversion to Boolean" discussion hinges on the fact that dynamic languages are okay for small to medium problems (or for prototyping larger problems). But they cannot be depended on for mission-critical code. And mission-critical does not only encompass manned flights to Mars; it could be any code that places your reputation on the line.

I don't want Python to be a static typed language. Neither do I believe that duck typing is wrong for Python. HOWEVER, what I DO believe is that dynamic languages can be unreliable if we do not:

1. Choose to be "explicit enough"

I've explained this in detail ad nauseam.

2. Type check where type checking is due.

The second covers type checking objects that enter into new namespaces. That would cover all function/method arguments (at a minimum).

We don't need to type check EVERY single object (but of course the programmer could IF he wanted to). If I declare a variable[1] that points to a list in a block of code, then reference that variable[1] in the same block of code, I see no reason to type check that variable[1] first. That is overkill, and that is why people are turned off by static languages.

However, if I declare the same variable[1] and then pass that variable[1] into a function, the function should do a type check on all the arguments to guarantee that these arguments are in fact what they should be. This is how you strike a balance between explicit and implicit. This is how you inject sanity into your code bases.
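For example (the function and its check here are mine, purely illustrative), the kind of boundary check I'm describing might look like:

py> def average(values):
...     # guard the function boundary: verify the argument's type
...     if not isinstance(values, list):
...         raise TypeError('expected a list, got %s' % type(values).__name__)
...     return sum(values) / len(values)
...
py> average([1, 2, 3])
2.0
py> average('abc')
Traceback (most recent call last):
  ...
TypeError: expected a list, got str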


[1]: Stop your whining already about "Python does not have variables". The word variable has both a technical meaning (which is another example of transforming a word improperly) and a general meaning. When used in a Python context, the word "variable" takes on its general meaning. Feel free to consult a dictionary if you're still confused.
 
Russ P.

*shrug*

I don't terribly miss type declarations. Function argument declarations
are a bit simpler in Pascal, compared to Python:

Function Add(A, B : Integer) : Integer;
Begin
  Add := A + B;
End;

versus

def add(a, b):
    if not (isinstance(a, int) and isinstance(b, int)):
        raise TypeError
    return a + b

Scala also has isInstanceOf[Type], which allows you to do this sort of thing, but of course it would be considered terrible style in Scala.

but not that much simpler. And while Python can trivially support
multiple types, Pascal cannot. (Some other static typed languages may.)

Whatever benefit there is in declaring the type of a function is lost due
to the inability to duck-type or program to an interface. There's no type
that says "any object with a 'next' method", for example. And having to
declare local variables is a PITA with little benefit.

Give me a language with type inference, and a nice, easy way to keep duck-
typing, and I'll reconsider. But until then, I don't believe the benefit
of static types comes even close to paying for the extra effort.

Scala has type inference. For example, you can write

val x = 1

and the compiler figures out that x is an integer. Scala also has something called structural typing, which I think is more or less equivalent to "duck typing," although I don't think it is used very often.

Are you ready to try Scala yet? :cool:
 
Rick Johnson

The second covers type checking objects that enter into new
namespaces. That would cover all function/method arguments
(at a minimum).

Yeah, before anyone starts complaining about this, I meant to say "scope". Now you can focus on the *real* argument instead of spending all your time pointing out minutiae.
 
Devin Jeanpierre

Super OT divergence because I am a loser nerd:

Yes, all programming communities have blind-spots. The Haskell
community's is that Haskell is safe and safe means that errors are
caught at compile-time.

I don't think Haskell people believe this with the thoroughness you
describe. There are certainly Haskell programmers who are aware of
basic theory of computation.
Unfortunately* the halting problem stands. When generalized to Rice
theorem it says that only trivial properties of programs are
algorithmically decidable:
http://mathworld.wolfram.com/RicesTheorem.html

And so the semantic correctness of a program -- a non-trivial property
-- is not decidable.

Just because a problem is NP-complete or undecidable, doesn't mean
there aren't techniques that give the benefits you want (decidability,
poly-time) for a related problem. Programmers often only or mostly
care about that related problem, so it isn't the end of the line just
when we hit this stumbling block.

As far as undecidability goes, one possibility is to accept a subset
of desired programs. For example, restrict the language so that it is
not Turing complete, and there is no problem.

Another resolution to the problem of undecidability is to accept a
_superset_ of the collection you want. This permits some programs
without the property we want, but it's often acceptable anyway.

A third approach is to attach proofs, and only accept a program with
an attached and correct proof of said property. This is a huge
concept, vital to understanding complexity theory. It may be
undecidable to find a proof, but once it is found, it is decidable to
check the proof against the program.

Haskell takes something akin to the second approach, and allows errors
to exist which would require "too much work" to eliminate at compile
time. (Although the type system is a literal case of the first
resolution). Python, by contrast, often values flexibility over
correctness, regardless of how easy it might be to check an error at
compile time. The two languages have different philosophies, and that
is something to respect. The reduction to Rice's theorem does not
respect the trade-off that Haskell is making, it ignores it. It may be
a "pipe dream" to get everything ever, but that's not to say that the
entire approach is invalid and that we should ignore how Haskell
informs the PL discourse.

For some reason both the Python and Haskell communities feel the other
is foolish and ignorant, dismissing their opinions as unimportant
babbling. I wish that would stop.

-- Devin
 
alex23

Okay, I'm going straighten out you foo(l)s once and for all.

Gosh, really?! THANKS.
Python has seduced us all into lazy typing.  That's what it is.

Bulshytt. If you have no idea what polymorphism is, you shouldn't even
be participating in this conversation.
The fact is, that none of us is close enough to God

More bulshytt.
 
Dan Stromberg

Congrats: Again you join the ranks of most children who make excuses for
their foolish actions along the lines of:

"Hey, they did it first!"

Well, the lemmings get what they deserve i suppose.

Lemmings don't really jump off cliffs. The Disney film documenting it was
staged by a film crew who'd -heard- Lemmings did, and forced the little
guys over a cliff in the name of saving time.

http://en.wikipedia.org/wiki/Lemming
 
Mark Janssen

Python has seduced us all into lazy typing. That's what it is.
Bulshytt. If you have no idea what polymorphism is, you shouldn't even
be participating in this conversation.

I am aware of what it means, but Python doesn't really have it
(although it may evolve to it with annotations). But then these
debates were over a decade ago.

MarkJ
Tacoma, Washington
 
alex23

I am aware of what it means, but Python doesn't really have it

You really need to stop commenting when you clearly have no
understanding of what you're talking about.
 
Steven D'Aprano

I am aware of what it means, but Python doesn't really have it (although
it may evolve to it with annotations).

No polymorphism huh?


py> len([1, 2, 3]) # len works on lists
3
py> len((1, 2)) # and on tuples
2
py> len({}) # and on dicts
0
py> len('I pity the fool') # and on strings
15
py> len(b'\x23') # and on bytes
1
py> len(set(range(2))) # and on sets
2
py> len(frozenset(range(4))) # and on frozensets
4
py> len(range(1000)) # and on range objects
1000


Looks pretty polymorphic to me.
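
And it's not limited to the built-ins: any class that defines __len__ gets the same treatment:

py> class Herd:
...     def __len__(self):
...         return 42
...
py> len(Herd())
42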
 
rusi

Just because a problem is NP-complete or undecidable, doesn't mean
there aren't techniques that give the benefits you want (decidability,
poly-time) for a related problem. Programmers often only or mostly
care about that related problem, so it isn't the end of the line just
when we hit this stumbling block.

As far as undecidability goes, one possibility is to accept a subset
of desired programs. For example, restrict the language so that it is
not Turing complete, and there is no problem.

Another resolution to the problem of undecidability is to accept a
_superset_ of the collection you want. This permits some programs
without the property we want, but it's often acceptable anyway.

A third approach is to attach proofs, and only accept a program with
an attached and correct proof of said property. This is a huge
concept, vital to understanding complexity theory. It may be
undecidable to find a proof, but once it is found, it is decidable to
check the proof against the program.

Haskell takes something akin to the second approach, and allows errors
to exist which would require "too much work" to eliminate at compile
time. (Although the type system is a literal case of the first
resolution). Python, by contrast, often values flexibility over
correctness, regardless of how easy it might be to check an error at
compile time. The two languages have different philosophies, and that
is something to respect. The reduction to Rice's theorem does not
respect the trade-off that Haskell is making, it ignores it. It may be
a "pipe dream" to get everything ever, but that's not to say that the
entire approach is invalid and that we should ignore how Haskell
informs the PL discourse.

Nice 3-point summary. Could serve as a good antidote to some of the
cargo-culting that goes on under Haskell.
To make it very clear: in any science, when there are few people, they
probably understand the science. When the numbers explode, cargo-cult
science happens. This does not change the fact that a few do still
understand. Haskell is no exception. See below.
I don't think Haskell people believe this with the thoroughness you
describe. There are certainly haskell programmers that are aware of
basic theory of computation.

Of course! Here's cmccann from Haskell weekly news of May 31: [On
reimplementing cryptography in pure Haskell] writing in Haskell lets
you use type safety to ensure that all the security holes you create
are subtle instead of obvious.

Which shows, as parody, exactly what I am talking about: all errors
cannot be removed algorithmically/mechanically.

And here's Bob Harper -- father of SML -- pointing out well-known and
less well-known safety problems with Haskell:
http://existentialtype.wordpress.com/2012/08/14/haskell-is-exceptionally-unsafe/


----------
Super OT divergence because I am a loser nerd:

Uh? Not sure I understand…
OT: OK: How can you do programming if you don't understand it?
I guess in a world where the majority do it without understanding, someone
who understands (more) will be called a 'nerd'?
For some reason both the Python and Haskell communities feel the other
is foolish and ignorant, dismissing their opinions as unimportant
babbling. I wish that would stop.

Dunno whether you are addressing me specifically or python folks
generally.
If me, please remember my post ended with
If a programmer comes to python from Haskell, he will end up being a better programmer than one
coming from C or Java or…

If addressed generally, I heartily agree. My proposed course:
https://moocfellowship.org/submissi...rogramming-languaging-with-haskell-and-python
is in this direction. That is, it attempts to create a new generation
of programmers who will be able to use Haskell's theory-power to pack
an extra punch into batteries-included Python.

More details: http://blog.languager.org/2013/05/dance-of-functional-programming.html
 
Mark Janssen

I am aware of what it means, but Python doesn't really have it (although
it may evolve to it with annotations).

No polymorphism huh?


py> len([1, 2, 3]) # len works on lists
3
py> len((1, 2)) # and on tuples
2
py> len({}) # and on dicts
0
py> len('I pity the fool') # and on strings
15
py> len(b'\x23') # and on bytes
1
py> len(set(range(2))) # and on sets
2
py> len(frozenset(range(4))) # and on frozensets
4
py> len(range(1000)) # and on range objects
1000

Okay, wow, it looks like we need to define some new computer science
terms here.

You are making an "outside view of a function" (until a better term is
found). So that give you one possible view of polymorphism. However,
*within* a class that I would write, you would not see polymorphism
like you have in C++, where it is within the *function closure*
itself. Instead you would see many if/then combinations to define
the behavior given several input types. I would call this simulated
polymorphism.
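
That is, dispatch written out by hand, something like this (toy classes, just to illustrate):

py> class Dog: pass
py> class Cat: pass
py> def speak(critter):
...     # behavior selected by explicit type tests
...     if isinstance(critter, Dog):
...         return 'woof'
...     elif isinstance(critter, Cat):
...         return 'meow'
...     else:
...         raise TypeError('unknown critter')
...
py> speak(Cat())
'meow'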

But don't quote me on this because I have to review my 20 years of CS
and see if it matches what the field says -- if the field has settled
on a definition. If not, I go with the C++ definition, and there it
is very different from Python.

But then, you weren't going to quote me anyway, right?
 
rusi

No polymorphism huh?
py> len([1, 2, 3])  # len works on lists
3
py> len((1, 2))  # and on tuples
2
py> len({})  # and on dicts
0
py> len('I pity the fool')  # and on strings
15
py> len(b'\x23')  # and on bytes
1
py> len(set(range(2)))  # and on sets
2
py> len(frozenset(range(4)))  # and on frozensets
4
py> len(range(1000))  # and on range objects
1000

Okay, wow, it looks like we need to define some new computer science
terms here.

Fairly definitive terms have existed since 1985:
http://lucacardelli.name/Papers/OnUnderstanding.A4.pdf
You are making an "outside view of a function" (until a better term is
found).  So that gives you one possible view of polymorphism.  However,
*within* a class that I would write, you would not see polymorphism
like you have in C++,  where it is within the *function closure*
itself.   Instead you would see many if/then combinations to define
the behavior given several input types.  I would call this simulated
polymorphism.

Cardelli and Wegner cited above call this ad-hoc polymorphism.
What you are calling polymorphism, they call universal polymorphism.

See sect 1.3 for a summary diagram
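
In Python terms (a toy sketch): the ad-hoc flavour picks behaviour with explicit type tests, while the universal flavour lets each class carry its own behaviour and dispatches through a common method:

py> class Dog:
...     def speak(self): return 'woof'
...
py> class Cat:
...     def speak(self): return 'meow'
...
py> [animal.speak() for animal in (Dog(), Cat())]
['woof', 'meow']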
 
