Towards faster Python implementations - theory

sturlamolden

Tim said:
But the relevant bit of your last paragraph is at the start:
"We should...".

Sorry, bad choice of words.

Tim said:
... see it faster. That's great. But unless people
put their money where their mouths are, I don't ...

I know, I know. But that doesn't stop me from envying what the Lisp
community has achieved.

Python still sucks when we use it for scientific simulations,
testing CPU-bound algorithms, etc. Sure, it is only 150-200 times
slower than C for these tasks, but that can amount to the difference
between one day and half a year of CPU time. As strange as it may
seem, though, it can still be advantageous to use Python: it may be
less expensive to run the simulation in parallel on 200 CPUs than to
rewrite the code in C.
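The kind of CPU-bound workload sturlamolden has in mind can be sketched with a pure-Python inner loop (an illustrative example, not code from the thread); it is exactly this interpreter-dispatched arithmetic that compiled languages run orders of magnitude faster:

```python
import timeit

def pairwise_energy(xs):
    """O(n^2) pure-Python inner loop: the kind of CPU-bound code
    where interpreter overhead dominates the actual arithmetic."""
    total = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = xs[i] - xs[j]
            total += d * d
    return total

xs = [float(i) for i in range(300)]
t = timeit.timeit(lambda: pairwise_energy(xs), number=10)
print(f"10 runs over 300 points: {t:.3f}s")
```

Every iteration here pays for bytecode dispatch, boxed floats, and dynamic method lookup; a C compiler would reduce the inner loop to a few machine instructions.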
 
sturlamolden

Unfortunately, native machine code depends on the machine, or at least the
machine being emulated by the hardware. Fortunately or not, the dominance
of the x86 model makes this less of a problem.

CMUCL and SBCL depend on the dominance of the x86 architecture.

GCL uses the GCC backend, which supports a wide range of
architectures.

Building a compiler backend is not needed for a Python JIT; one can
accept the GPL license and use GCC as a backend.

Or one could translate between Python and Lisp on the fly, and use a
compiled Lisp (CMUCL, SBCL, Franz, GCL) as runtime backend.
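The on-the-fly translation idea can be sketched in a few lines with the stdlib `ast` module: walk a Python expression tree and emit an s-expression a Lisp compiler could consume. This is a toy illustration only; a real translator (CLPython, discussed later in the thread, takes this approach) must also model Python's object, scoping, and exception semantics:

```python
import ast

# Map Python AST operator node types to Lisp operator symbols.
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_sexpr(node):
    """Translate a Python arithmetic expression AST into an
    s-expression string (toy subset: binops, names, constants)."""
    if isinstance(node, ast.Expression):
        return to_sexpr(node.body)
    if isinstance(node, ast.BinOp):
        op = OPS[type(node.op)]
        return f"({op} {to_sexpr(node.left)} {to_sexpr(node.right)})"
    if isinstance(node, ast.Constant):
        return repr(node.value)
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(type(node).__name__)

print(to_sexpr(ast.parse("(a + 2) * b", mode="eval")))  # (* (+ a 2) b)
```

The resulting form could then be handed to a Lisp implementation's `compile`, which is essentially what "use a compiled Lisp as runtime backend" amounts to.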
 
John Nagle

Tim said:
I doubt if anyone disputes the gist of what you're
saying[*], viz. that Python could be made faster by using
techniques (a), (b), or (c), which have been successful elsewhere,
or at least that it's worth investigating.

But the relevant bit of your last paragraph is at the start:
"We should...". Unless someone or someones has the time,
inclination, money, backing, wherewithal etc. to implement
this or any other measure of speeding-up, it's all
pie-in-the-sky. Useful, maybe, as discussion of what
options are viable, but a project of this magnitude
doesn't just happen in some developer's lunchbreak.

Focusing may help. Between Jython, PyPy, and Shed Skin,
enough effort has been put in to produce something better than
CPython, but none of those efforts has resulted in something more
usable than CPython.

There's a "commercial grade Python" from ActiveState, but
that's CPython in a cardboard box, I think.

Another problem is that if the language is defined as
"whatever gets put in CPython", that discourages other
implementations. The language needs to be standards-based.

John Nagle
 
Paul Boddie

Another problem is that if the language is defined as
"whatever gets put in CPython", that discourages other
implementations. The language needs to be standards-based.

Indeed. This was suggested by one of the speakers at last year's
EuroPython, with reference to the various proposals to remove map,
reduce, lambda, and so on from the language. The opinion was that if
Python implementations change, leaving users either on unsupported
releases or continuously migrating their code to features they don't
find as intuitive or appropriate, some people would rather move their
code to a language which is standardised and which can remain
agreeable for the foreseeable future.

Paul
 
Terry Reedy

| I know, I know. But that doesn't stop me from envying what the Lisp
| community has achieved.

But do not let your envy stop you from noticing and appreciating what the
Python community has achieved.

| Python still sucks if we are using it for scientific simulations,

Not if you use extensions compiled from C or Fortran. Doing so is not
cheating, any more than using the C-coded methods of the builtin types.
Leveraging existing code and compilers was part of Python's design.

With the Numeric extensions, produced by people at the US nuke labs,
scientific simulations were, I think, Python's first killer app.
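NumPy, the successor to those Numeric extensions, still works exactly this way: the element-wise arithmetic below runs in compiled C loops, with the Python layer only dispatching the calls. A minimal sketch, assuming NumPy is installed:

```python
import numpy as np

# Evaluate 3x^2 + 2x + 1 over a million points. Each operation
# below is a single pass through a compiled C loop; no Python
# bytecode executes per element.
x = np.linspace(0.0, 1.0, 1_000_000)
y = 3.0 * x * x + 2.0 * x + 1.0

print(float(y[0]), float(y[-1]))  # 1.0 6.0
```

The equivalent pure-Python loop over a list would pay interpreter overhead a million times; here it is paid once per whole-array operation.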

| Sure it is only 150-200 times slower than C for these tasks,

As a general statement, nonsense. A LinPack inversion of a 10k x 10k
matrix takes the same time whether called from Python or from a C
program. The minuscule extra overhead of Python is more than made up
for by the ability to call LinPack and other functions interactively.

The extended buffer protocol, championed by Travis Oliphant and slated for
3.0, will make cooperation between extensions much easier.

Terry Jan Reedy
 
Robert Brown

sturlamolden said:
CMUCL and SBCL depend on the dominance of the x86 architecture.

CMUCL and SBCL run on a variety of architectures, including x86, 64-bit x86,
PowerPC, Sparc, Alpha, and Mips. See

http://www.sbcl.org/platform-table.html

for platform support information.

sturlamolden also said:
Or one could translate between Python and Lisp on the fly, and use a
compiled Lisp (CMUCL, SBCL, Franz, GCL) as runtime backend.

This has been done by Willem Broekema. CLPython is a Python implementation
that translates Python source into Common Lisp at read time. Under the
covers, the Lisp is compiled into machine code and then run. See

http://trac.common-lisp.net/clpython/

Currently, CLPython uses some non-standard Allegro Common Lisp features, so
it does not run on all the free implementations of ANSI Common Lisp. The
implementation is interesting, in part because it shows how expensive and
complex some Python primitives are.
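One concrete example of that expense: even a plain `a + b` in Python is a dynamic-dispatch protocol (try the left operand's `__add__`, fall back to the right operand's `__radd__` on `NotImplemented`), all of which a Python compiler must reproduce. A minimal illustration:

```python
class Num:
    """Tiny numeric wrapper showing the dispatch a compiler must model."""
    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        if isinstance(other, Num):
            return Num(self.v + other.v)
        return NotImplemented  # defer to the other operand

    def __radd__(self, other):
        # Reached when the left operand's __add__ gave up,
        # e.g. int.__add__(10, Num(2)) returns NotImplemented.
        if isinstance(other, (int, float)):
            return Num(other + self.v)
        return NotImplemented

print((Num(1) + Num(2)).v)  # 3
print((10 + Num(2)).v)      # 12  (via __radd__)
```

A Lisp compiler adding two fixnums emits a machine add; a faithful Python compiler must emit (or prove it can skip) this whole type-dispatch dance, which is much of what CLPython's generated code reveals.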
 
