Prothon gets Major Facelift in Version 0.1.0 [Prothon]


gabor

Does any Python user have a story of regret of using an object's attribute
when they should not have and encapsulation would have saved their butts?

not exactly...

i don't need restriction, only a hint...

if i look thru some java code, i immediately see what variables/methods
i should/can access... it's not the same for python...

for example at work where i use java, there are classes that have
10-15 methods, but only 4 public ones... it immediately gets simpler to
understand/use the class


gabor
 

Bruno Desthuilliers

Ryan said:
On Sat, 22 May 2004 22:53:01 -0700, simo wrote:

(snip)
Proper encapsulation is needed before the C++ brigade will take
P[y/ro]thon seriously as an OO language, oh and a machinecode compiler
;-)
(snip)

I don't understand why everybody seems to want a machinecode compiler. It
won't make a high-level, dynamically typed language run any faster.

Err... You might want to check some Common Lisp implementations.
--SegPhault
Note that here it should read 'sigfault' !-)

Bruno
 

Ville Vainio

Ryan> check out ruby. It is very similar to python, but one of the
Ryan> many benefits it has over python, is ability to distinguish
Ryan> 'real' private, public, and protected
Ryan> variables/methods. Ruby does not allow multiple inheritance,
Ryan> and

Python has private methods too:

class C:
    def stuff(self):
        _privatestuff(self, 2)
        print "val is", self.val

def _privatestuff(self, arg):
    self.val = arg

Though I would expect the people who read/use my code to understand
that if I prepended a _ to an attribute name, they would have the brains
not to use it as part of the "official" API. If they absolutely need
to use something private (it happens - the world has seen several badly
designed APIs), they can. At least they don't need to resort to
various horrible hacks to do it.
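
(As an aside, a minimal sketch of the two spellings Python itself offers for this, the single-underscore hint and double-underscore name mangling; the Account class and its attributes are invented purely for the example.)

class Account:
    def __init__(self, balance):
        self._balance = balance    # single underscore: "internal, please don't touch"
        self.__audit = []          # double underscore: mangled to _Account__audit

    def deposit(self, amount):
        self._balance += amount
        self.__audit.append(amount)

acct = Account(10)
acct.deposit(5)
print acct._balance           # still reachable -- the underscore is only a hint
print acct._Account__audit    # mangling raises the bar but does not forbid access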

I tend to think that emphasizing private/protected access restrictions
is an artifact of not understanding modern programming realities and
dynamics properly. School teaches access labels as an essential
feature of OO (encapsulation), and it takes some real world experience
implementing something of lasting value *fast* to unlearn it.

Ryan> it supports a very powerful mixin system- it's OO mechanisms
Ryan> and syntax generally seem better than pythons.

Not to mention that it's much closer to Smalltalk. And it's much
better than Perl too. Python OO is a hack! It's an add-on, not
built-in like in Ruby!

etc, ad infinitum.

Inter-newsgroup advocacy efforts are rather pointless and mostly serve
to create yet more flamewars. Ruby and Python are within the same 10%
productivity-wise (which one is winning depends on the programmer),
but Python is massively more mature and popular (e.g. going to be
shipping with Nokia S60 smartphones RSN hopefully [1]). You do the
math.

[1] http://www.guardian.co.uk/online/story/0,3605,1182803,00.html

Ryan> I don't understand why everybody seems to want a machinecode
Ryan> compiler. It won't make a high-level, dynamically typed
Ryan> language run any faster.

It could make the language run much faster.
 

Mark Hahn

gabor said:
butts?

not exactly...

i don't need restriction, only a hint...

if i look thru some java code, i immediately see what variables/methods
i should/can access... it's not the same for python...

for example at work where i use java, there are classes that have
10-15 methods, but only 4 public ones... it immediately gets simpler to
understand/use the class

It sounds to me like you want a better documentation solution, not an
encapsulation solution. Now that is something I agree with 100% and have
near the top of the list in Prothon. I personally think docs are a weak
point in Python.
 

Mark Hahn

gabor said:
yes, you can implement the needed mechanism in every language, but it's
not always fun....

We'll just have to make sure it's fun in Prothon. Maybe you could
elaborate a little more on what you think makes encapsulation documentation
fun and what doesn't.

I myself think that putting underbars in front of every var like _var is not
fun. Do you agree?

I also don't think declaring all vars as in "private var" is fun either, do
you? If I did, I'd be using C++.

So to me, documenting the public vars makes the most sense. They need some
explaining anyway. So now the only question is, what is a fun and painless
way to document public vars. It would be nice if it had these properties:

1) There should be some reward or lack of punishment for actually doing the
documentation. I was thinking that Prothon could have some cool doc tool
that programmers would want to use that would choke and refuse to finish
without proper doc definitions. Maybe the interpreter itself could even
give warnings.

2) The doc syntax should be painless, friendly, and intelligent enough so
human-added stuff is minimal.

Any ideas in this area would be greatly welcomed. Implementing a half-baked
scheme would be as good as no scheme because it wouldn't be used.
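
(By way of illustration of point 1, a minimal Python sketch of such a doc-checking tool: it walks a module's public names and warns about missing docstrings. The function name and the warn-only policy are just assumptions for the example.)

import inspect

def check_public_docs(module):
    """Warn about public functions and classes in a module that lack a docstring."""
    missing = []
    for name, obj in vars(module).items():
        if name.startswith("_"):
            continue                      # private by convention: skip it
        if inspect.isfunction(obj) or inspect.isclass(obj):
            if not inspect.getdoc(obj):
                missing.append(name)
    for name in missing:
        print "warning: public name %r has no docstring" % name
    return not missing                    # True means everything public is documented

A stricter variant could raise an exception instead of warning, which is roughly the "choke and refuse to finish" behaviour described in point 1.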
 

gabor

It sounds to me like you want a better documentation solution, not an
encapsulation solution. Now that is something I agree with 100% and have
near the top of the list in Prothon. I personally think docs are a weak
point in Python.

documentation is fine.... the more the better...

but the argument that
more-docs-should-be-enough-because-you-can-document-which-functions-are-private
reminds me a little of the
but-you-can-write-object-oriented-code-in-assembler....

or that you-can-write-object-oriented-code-in-c...

yes, you can implement the needed mechanism in every language, but it's
not always fun....
:)

gabor
 

Josiah Carlson

See http://prothon.org.

You use inconsistent descriptions of generators here:
http://prothon.org/tutorial/tutorial11.htm#gen

First you say that all generators must use 'gen' rather than 'def', but
in your example you mix the two...

gen evenOdds(max):
    def evens(max):
        (body contains a yield)

Based on that same portion of your tutorial, it is not clear that your
example...

gen evenOdds(max):
    def evens(max):
        num = 0
        while num < max:
            yield num
            num += 2

    def odds(max):
        num = 1
        while num < max:
            yield num
            num += 2

    evens(max)
    odds(max)

for i in evenOdds(10):
    print i, # prints 0 2 4 6 8 1 3 5 7 9

Actually should produce what you say it should.

Perhaps it is my background in Python that says unless you manually
iterate through evens(max) and odds(max), yielding values as you go along...

    for i in evens(max): yield i
    for i in odds(max): yield i

...or combine and return the iterator with something like...

    return itertools.chain(evens(max), odds(max))

..., you will get nothing when that generator function is evaluated.
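
(For comparison, a minimal runnable Python version of the same generator, written the way Python requires it, with the nested generators chained explicitly. This is Python rather than Prothon, and maxn is used instead of max only to avoid shadowing the builtin.)

import itertools

def evenOdds(maxn):
    def evens(maxn):
        num = 0
        while num < maxn:
            yield num
            num += 2

    def odds(maxn):
        num = 1
        while num < maxn:
            yield num
            num += 2

    # Merely calling evens(maxn) and odds(maxn) would create and discard the
    # iterators; in Python they have to be iterated over or chained explicitly.
    return itertools.chain(evens(maxn), odds(maxn))

for i in evenOdds(10):
    print i,   # prints 0 2 4 6 8 1 3 5 7 9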

I believe that either your documentation or example needs to be fixed.

- Josiah
 

Mark Hahn

Josiah said:
You use inconsistent descriptions of generators here:
http://prothon.org/tutorial/tutorial11.htm#gen

First you say that all generators must use 'gen' rather than 'def',
but
in your example you mix the two...

gen evenOdds(max):
    def evens(max):
        (body contains a yield)

Based on that same portion of your tutorial, it is not clear that your
example...

gen evenOdds(max):
    def evens(max):
        num = 0
        while num < max:
            yield num
            num += 2

    def odds(max):
        num = 1
        while num < max:
            yield num
            num += 2

    evens(max)
    odds(max)

for i in evenOdds(10):
    print i, # prints 0 2 4 6 8 1 3 5 7 9

Actually should produce what you say it should.

The code is correct and tested. My programming skills are much better than
my tutorial writing skills :)
Perhaps it is my background in Python that says unless you manually
iterate through evens(max) and odds(max), yielding values as you go
along...

    for i in evens(max): yield i
    for i in odds(max): yield i

...or combine and return the iterator with something like...

    return itertools.chain(evens(max), odds(max))

..., you will get nothing when that generator function is evaluated.

I believe that either your documentation or example needs to be fixed.

I'm sure that my tutorial could be clearer, but in my defense I do say in
that section: "only the one outermost "function" should use the "gen"
keyword".

The way it works is that the "gen" keyword is just a special flag to tell
the interpreter to stop rolling up the execution frame stack when a yield
keyword is encountered. This is what allows the functions to be nested. The
yield keyword can be in any function whether it uses the gen keyword or the
def keyword.

I do think you are confusing it with Python, which cannot nest functions
with yield statements as Prothon can.
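
(To illustrate the Python behaviour being contrasted here, a minimal sketch in plain Python rather than Prothon: Python does allow generator functions to be nested, but a bare call to the inner one just creates and discards an iterator, so its values never reach the caller. The names below are invented for the example.)

def outer(maxn):
    def evens(maxn):
        num = 0
        while num < maxn:
            yield num
            num += 2

    evens(maxn)            # creates a generator object and silently discards it
    yield "sentinel"       # outer needs at least one yield of its own to be a generator

print list(outer(10))      # ['sentinel'] -- none of the even numbers are produced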
 

Jacek Generowicz

Don't confuse encapsulation with access restriction.
I don't understand why everybody seems to want a machinecode
compiler.

Actually, my impression is that most (at least many) around here don't
want one.
It won't make a high-level, dynamically typed language run any
faster.

Your claim is false.

Proof by counterexample:


CL-USER 1 > (defun fib (n)
              (if (< n 2)
                  1
                  (+ (fib (- n 1)) (fib (- n 2)))))
FIB

CL-USER 2 > (time (fib 35))
Timing the evaluation of (FIB 35)

user time = 66.330
system time = 0.000
Elapsed time = 0:01:06
Allocation = 5488 bytes standard / 328476797 bytes conses
0 Page faults
Calls to %EVAL 8388522
14930352

CL-USER 3 > (compile 'fib)
FIB
NIL
NIL

CL-USER 4 > (time (fib 35))
Timing the evaluation of (FIB 35)

user time = 1.000
system time = 0.000
Elapsed time = 0:00:01
Allocation = 1216 bytes standard / 2783 bytes conses
0 Page faults
14930352

Looks like compiling this particular high-level dynamically typed
language makes it run considerably faster. Let's repeat the exercise
for 3 more implementations of this particular language I just happen
to have lying around on my machine, and compare it to Python's
performance on the equivalent program:
>>> def fib(n):
...     if n<2: return 1
...     return fib(n-1) + fib(n-2)
...
14930352
20.425565958023071


Here are the results gathered in a table:


Name        Interpreted            Compiled

LispWorks   66 s                   1.0 s
Clisp       41 s                   9.5 s
CMUCL       Got bored waiting      1.5 s
SBCL        Compiles everything    1.6 s
Python      Compiles everything    20 s


So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?
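
(For reference, a minimal sketch of how the CPython figure above could be reproduced; the exact script used isn't shown, so the timing harness here is only an assumed equivalent built on time.time().)

import time

def fib(n):
    if n < 2: return 1
    return fib(n-1) + fib(n-2)

t = time.time()
print fib(35)            # 14930352
print time.time() - t    # about 20 s on the machine used for the table above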
 

Jacek Generowicz

Jacek Generowicz said:
Name        Interpreted            Compiled

LispWorks   66 s                   1.0 s
Clisp       41 s                   9.5 s
CMUCL       Got bored waiting      1.5 s
SBCL        Compiles everything    1.6 s
Python      Compiles everything    20 s


So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?

Just for fun, I threw all the declarations that came to my head at the
Lisp function, making it look thus:

(defun fib (n)
  (declare (fixnum n))
  (declare (optimize (safety 0) (speed 3) (debug 0)
                     (space 0) (compilation-speed 0)))
  (if (< n 2)
      1
      (the fixnum
           (+ (the fixnum (fib (- n 1)))
              (the fixnum (fib (- n 2)))))))

I also tried a C version:

int fib(int n) {
    if (n<2) {
        return 1;
    }
    return fib(n-1) + fib(n-2);
}

int main() {
    return fib(35);
}

Here's the table with the results for the above added in:


Name        Interpreted            Compiled   With declarations

LispWorks   66 s                   1.0 s      1.6 s
Clisp       41 s                   9.5 s      9.5 s
CMUCL       Got bored waiting      1.5 s      0.45 s
SBCL        Compiles everything    1.6 s      0.49 s
Python      Compiles everything    20 s
gcc         No interactivity       0.29 s


(I also tried it on Allegro, via their telnet prompt (telnet
prompt.franz.com). The uncompiled version went beyond the CPU limit
they give you; the compiled version without declarations was 400ms;
with declarations was 200ms. Of course, we don't know how their
processor compares to mine.)
 

Duncan Booth

Name        Interpreted            Compiled   With declarations

LispWorks   66 s                   1.0 s      1.6 s
Clisp       41 s                   9.5 s      9.5 s
CMUCL       Got bored waiting      1.5 s      0.45 s
SBCL        Compiles everything    1.6 s      0.49 s
Python      Compiles everything    20 s
gcc         No interactivity       0.29 s

Using Psyco speeds things up somewhat. On my machine this test in Python
without Psyco takes 14.31s; adding a call to psyco.full() reduces this to
0.51s.
 

Jacek Generowicz

Duncan Booth said:
Using Psyco speeds things up somewhat. On my machine this test in Python
without Psyco takes 14.31s; adding a call to psyco.full() reduces this to
0.51s.

Good call. How daft of me not to include it.

Here's the table with the psyco result on the same machine as the rest.


Name        Interpreted            Compiled   With declarations   Psyco

LispWorks   66 s                   1.0 s      1.6 s
Clisp       41 s                   9.5 s      9.5 s
CMUCL       Got bored waiting      1.5 s      0.45 s
SBCL        Compiles everything    1.6 s      0.49 s
Python      Compiles everything    20 s                           0.64 s
gcc         No interactivity       0.29 s


Could we now just all agree, once and for all, that compiling dynamic
languages to native binary really can give significant speedups?

(No, of course we can't ... oh well :)
 

Valentino Volonghi aka Dialtone

Jacek Generowicz said:
>>> def fib(n):
...     if n<2: return 1
...     return fib(n-1) + fib(n-2)
...
14930352
20.425565958023071


Here are the results gathered in a table:


Name        Interpreted            Compiled

LispWorks   66 s                   1.0 s
Clisp       41 s                   9.5 s
CMUCL       Got bored waiting      1.5 s
SBCL        Compiles everything    1.6 s
Python      Compiles everything    20 s


So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?

Using this code:

import psyco
psyco.full()

def fib(n):
    if n<2: return 1
    return fib(n-1) + fib(n-2)

import time
a=time.time()
fib(35)
print time.time()-a

I get 0.617813110352

The C version that you posted, using gcc 3.3.3 with -O3 option:

real 0m0.227s
user 0m0.208s
sys 0m0.001s
 

Robin Becker

Jacek Generowicz wrote:

.........
Looks like compiling this particular high-level dynamically typed
language makes it run considerably faster. Let's repeat the exercise
for 3 more implementations of this particular language I just happen
to have lying around on my machine, and compare it to Python's
performance on the equivalent program:



>>> def fib(n):
...     if n<2: return 1
...     return fib(n-1) + fib(n-2)
...
14930352
20.425565958023071


Here are the results gathered in a table:


Name        Interpreted            Compiled

LispWorks   66 s                   1.0 s
Clisp       41 s                   9.5 s
CMUCL       Got bored waiting      1.5 s
SBCL        Compiles everything    1.6 s
Python      Compiles everything    20 s


So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?

I tried a modification in Prothon and was surprised at how bad it was. My
Windows box started to thrash with fib(35) so I reduced it to fib(25).

#fib.py
def fib(n):
    if n<2: return 1
    return fib(n-1) + fib(n-2)
print fib(25)


C:\Prothon\pr\test>timethis \Prothon\prothon fib.py

TimeThis : Command Line : \Prothon\prothon fib.py
TimeThis : Start Time : Tue May 25 19:41:32 2004

121393

TimeThis : Command Line : \Prothon\prothon fib.py
TimeThis : Start Time : Tue May 25 19:41:32 2004
TimeThis : End Time : Tue May 25 19:42:24 2004
TimeThis : Elapsed Time : 00:00:52.235

compare with

C:\Prothon\pr\test>timethis python fib.py

TimeThis : Command Line : python fib.py
TimeThis : Start Time : Tue May 25 19:43:02 2004

121393

TimeThis : Command Line : python fib.py
TimeThis : Start Time : Tue May 25 19:43:02 2004
TimeThis : End Time : Tue May 25 19:43:05 2004
TimeThis : Elapsed Time : 00:00:02.673

In fact the Python time for fib(35) was about 31.1 seconds (i.e. less than
Prothon for fib(25)), so something is spectacularly amiss with Prothon.
 

Mark Hahn

Robin Becker said:
In fact the Python time for fib(35) was about 31.1 seconds (i.e. less than
Prothon for fib(25)), so something is spectacularly amiss with Prothon.

Yes, and that something is that Prothon is pre-alpha and full of debug code.
Take a look at the interpreter loop in interp.c and the reason will be
obvious immediately. We are not going to be addressing efficiency until
after the language is designed in July.

One step at a time...
 

Robin Becker

Mark said:
Yes, and that something is that Prothon is pre-alpha and full of debug code.
Take a look at the interpreter loop in interp.c and the reason will be
obvious immediately. We are not going to be addressing efficiency until
after the language is designed in July.

One step at a time...

Wasn't criticising. I expected some degradation from an early version,
but this seems too much. From the memory usage I would guess that
perhaps the frames aren't being released.
 

Mark Hahn

Robin Becker said:
Wasn't criticising. I expected some degradation from an early version,
but this seems too much. From the memory usage I would guess that
perhaps the frames aren't being released.

Oh, I didn't realize you were talking about memory. We do have serious
memory and object leaks. Maybe the garbage collector isn't working right.
That area is also waiting for July as we are considering it part of
performance.
 

Michael Hudson

Jacek Generowicz said:
Could we now just all agree, once and for all, that compiling
dynamic languages to native binary really can give significant
speedups?

Has anyone really been arguing that? Oh dear. What *I* at least have
been trying to argue against is the idea that *any* compilation to
native code *must* result in a significant speed up...

Cheers,
mwh
 

JanC

gabor said:
i only want to WRITE that the variable is private once...
i don't want to deal with _prefixed variables in my whole code...

With the _prefix it's clear in /every/ part of the code that something is
intended for local use...
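
(A side note by way of illustration: for module-level names Python does let you state the public surface once, via __all__. It only affects "from module import *" and tools that choose to respect it, it is not an access restriction, and the module below is invented for the example.)

# mymodule.py -- a hypothetical example module
__all__ = ["connect"]         # stated once: only 'connect' is the public API

def connect(host):
    return _open_socket(host)

def _open_socket(host):       # internal helper, skipped by "from mymodule import *"
    return "socket to %s" % host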
 

Jacek Generowicz

Michael Hudson said:
Has anyone really been arguing that?

Ryan Paul said:
I don't understand why everybody seems to want a machinecode compiler. It
won't make a high-level, dynamically typed language run any faster.

I guess there are different ways of interpreting the plethora of
contributions such as the above. Maybe I'm misunderstanding them.
 
