Help me use my Dual Core CPU!


Simon Wittber

I've just bought a new notebook, which has a dual core CPU.

I write cross-platform games in Python, and I'd really like to be able
to use this second core (on my machine, and on users' machines) for
any new games I might write.

I know threads won't help (in CPython at least), so I'm investigating
other types of concurrency which I might be able to use. I really like
the PyLinda approach; however, I need to be able to pass around all
the simple Python types, which PyLinda won't help me with. Also,
PyLinda is more focused on distributed computing; I really only want
to have 2 processes cooperating (or 4, if I had 4 CPUs/cores, etc.).

Is there any cross-platform way to share Python objects across
processes? (I've found POSH, but it's old and doesn't appear to be
maintained.) I could implement my own object space using shared
memory, but from what I can see, this is not available on Win32.

Are there any other concurrency options I've not discovered yet?


-Sw.
 

Paul Rubin

Simon Wittber said:
Are there any other concurrency options I've not discovered yet?

I've been wondering about the various Python MPI bindings out there,
and whether they might make sense for general-purpose concurrency,
given that they were designed mostly for parallel numerical
computation. Better MPI libraries should at least be able to
communicate through shared memory; I don't know whether PyLinda does
that.
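
For instance, with the mpi4py bindings (one option among several; the
API below is mpi4py's as I understand it, and pyMPI's differs a bit),
shipping ordinary picklable Python objects between two ranks looks
roughly like this untested sketch:

    # run with e.g.: mpiexec -n 2 python mpi_demo.py
    # the lowercase send/recv calls pickle arbitrary Python objects
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        work = {"level": 3, "entities": [1, 2, 3]}  # any picklable object
        comm.send(work, dest=1, tag=0)
        result = comm.recv(source=1, tag=1)
        print "rank 0 got:", result
    else:
        work = comm.recv(source=0, tag=0)
        comm.send(len(work["entities"]), dest=0, tag=1)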
 

Michael

Simon said:
I've just bought a new notebook, which has a dual core CPU.

I write cross-platform games in Python, and I'd really like to be able
to use this second core (on my machine, and on users' machines) for
any new games I might write.

I know threads won't help (in CPython at least), so I'm investigating
other types of concurrency which I might be able to use. I really like
the PyLinda approach; however, I need to be able to pass around all
the simple Python types, which PyLinda won't help me with. Also,
PyLinda is more focused on distributed computing; I really only want
to have 2 processes cooperating (or 4, if I had 4 CPUs/cores, etc.).

Is there any cross-platform way to share Python objects across
processes? (I've found POSH, but it's old and doesn't appear to be
maintained.) I could implement my own object space using shared
memory, but from what I can see, this is not available on Win32.

Are there any other concurrency options I've not discovered yet?

We *haven't* implemented process-based components for Kamaelia yet;
however, if a process component base class (which is something we want
to do) were created, that might serve as a possibility. If you're
interested in this, I could chat with you about the ideas we've had
about how this would work (#kamaelia on freenode, or here). (We are
planning on doing this when we manage to get sufficient round tuits.)

It's probably worth noting that we wouldn't be thinking of using
shared objects, but of piping data between the two processes, with the
various options including pickling to memory-mapped files (though
there are security issues there, aside from anything else...).
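
A bare-bones sketch of the piping idea (untested; worker.py here is a
hypothetical script that unpickles a request from stdin and pickles a
reply to stdout):

    # parent.py -- ship picklable objects to a child process over its pipes
    import pickle, subprocess, sys

    child = subprocess.Popen([sys.executable, "worker.py"],
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    pickle.dump({"cmd": "update", "dt": 0.02}, child.stdin)
    child.stdin.close()                  # signal end of input
    reply = pickle.load(child.stdout)    # blocks until the worker replies
    print "worker said:", reply

    # worker.py would be roughly:
    #   import pickle, sys
    #   request = pickle.load(sys.stdin)
    #   pickle.dump({"ok": True, "echo": request["cmd"]}, sys.stdout)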

Also, Paul Boddie posted a module for parallel systems a while back as well
which might be useful (at least for ideas):
* http://cheeseshop.python.org/pypi/parallel

I'd be interested in helping out BTW :)


Michael.
 

John Henry

I don't know what CPython is, but I have developed a Python
application under Windows that utilizes the dual-core CPU when it's
present.

I don't know that I can say for sure that "threads won't help". Have
you done some testing before turning to other approaches, to see if it
indeed won't help?
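
A quick test along these lines would settle it (a sketch; on CPython
the GIL normally keeps the threaded run from beating the serial one on
CPU-bound work):

    # gil_test.py -- time a CPU-bound job serially vs. with two threads
    import threading, time

    def burn():
        n = 0
        for i in xrange(5000000):
            n += i

    start = time.time()
    burn(); burn()
    print "serial:   %.2fs" % (time.time() - start)

    start = time.time()
    threads = [threading.Thread(target=burn) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print "threaded: %.2fs" % (time.time() - start)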
 

Brian L. Troutwine

John said:
I don't know what CPython is, but I have developed a Python
application under Windows that utilizes the dual-core CPU when it's
present.

It's the default Python implementation, the one you find at
python.org. It happens to be written in C. Other Python
implementations include IronPython, PyPy and Jython, written in .NET,
Python itself and Java, respectively.
 

Simon Wittber

Michael said:
Also, Paul Boddie posted a module for parallel systems a while back as well
which might be useful (at least for ideas):
* http://cheeseshop.python.org/pypi/parallel

I've checked this out; it looks like a good idea which I could build
on further.

I've just noticed that os.fork is not available on Win32. Ouch.

Does that mean there is _no_ way for a single Python program to use
multiple CPU/core systems on Windows (other than writing an extension
module in C, which is completely unacceptable for me!)?

-Sw.
 

Simon Wittber

Paul said:
Use the subprocess module.

I can't see how subprocess.Popen can replace a fork. Using a manually
started process is not really viable, as it does not automatically
share pre-built (read-only) data between the processes. If it can, I'd
really like to know how...

Yikes. This is a bummer. The conclusion seems to be that I cannot use
any common cross-platform, true concurrency strategies in my games. On
top of that, I can't really use any form of concurrency on Win32.

Let's hope we get some super-fast, SMP-friendly backends for PyPy
sooner rather than later!

-Sw.
 

Paul Rubin

Simon Wittber said:
I can't see how subprocess.Popen can replace a fork. Using a manually
started process is not really viable, as it does not automatically
share pre-built (read-only) data between the processes. If it can, I'd
really like to know how...

Either with sockets or mmap.
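
For the mmap route, Win32 even lets unrelated processes share a region
by name, which covers the case you thought was missing (untested
sketch; the tag name is made up):

    # shared.py -- named shared memory between two Win32 processes
    import mmap

    SIZE = 1024
    # On Windows, fileno -1 plus a tagname creates/opens a named shared
    # region; a second process making the same call sees the same bytes.
    shm = mmap.mmap(-1, SIZE, tagname="MyPySharedRegion")

    # writer process:
    shm.seek(0)
    shm.write("hello from the other process\0")

    # a reader process would do:
    #   shm.seek(0)
    #   data = shm.read(SIZE).split("\0")[0]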
 

Paul Boddie

Simon said:
I've checked this out; it looks like a good idea which I could build
on further.

Feel free to expand on what I've done. The tricky part was making some
kind of communications mechanism with asynchronous properties, and
although I suppose I could have used asyncore, Medusa or Twisted, I was
interested in the exercise of learning more about the poll system call
(and without adding dependencies which dwarf the pprocess module
itself).
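
The core of the poll usage is small, something along these lines
(rough sketch, Unix-only, which is part of why Win32 misses out):

    # poll_pipe.py -- wait on several pipes without blocking on any one
    import os, select

    r1, w1 = os.pipe()
    r2, w2 = os.pipe()

    poller = select.poll()
    poller.register(r1, select.POLLIN)
    poller.register(r2, select.POLLIN)

    os.write(w2, "ready")                 # pretend a child wrote this

    # poll() reports only the descriptors that actually have data
    for fd, event in poller.poll(1000):   # timeout in milliseconds
        print "fd %d readable: %r" % (fd, os.read(fd, 64))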
I've just noticed that os.fork is not available on Win32. Ouch.

Sorry! My motivation was to support operating systems that I personally
care about and which also have solutions for transparent process
migration, which I believe could be used in conjunction with the
pprocess module for better-than-thread parallelisation. In other words,
you get processes running independently on potentially many CPUs with
only the communications as overhead, and with access to shared data
either through the communications channels or via global variables
which are read only due to the nature of the fork semantics. This
doesn't necessarily play well with threaded-style programs wanting to
modify huge numbers of shared objects, but I'd argue that the benefits
of parallelisation (for performance) are somewhat reduced in such
programs anyway.
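
Concretely, the fork pattern is just this (POSIX-only sketch; the
child gets a copy-on-write view of everything built before the fork,
so large read-only data costs nothing extra):

    # fork_demo.py -- POSIX only; Win32 has no os.fork
    import os

    # expensive, read-only data built once, before forking
    world = [i * i for i in xrange(1000000)]

    pid = os.fork()
    if pid == 0:
        # child: sees `world` via copy-on-write, nothing is pickled
        print "child sum:", sum(world)
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print "parent done"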
Does that mean there is _no_ way for a single Python program to use
multiple CPU/core systems on Windows (other than writing an extension
module in C, which is completely unacceptable for me!)?

Rumour has it that recent versions of Windows provide fork-like
semantics through a system call. Your mission is to integrate this
transparently into the standard library's os.fork function. ;-)

Paul
 

Robin Becker

Simon said:
I can't see how subprocess.Popen can replace a fork. Using a manually
started process is not really viable, as it does not automatically
share pre-built (read-only) data between the processes. If it can, I'd
really like to know how...

Yikes. This is a bummer. The conclusion seems to be that I cannot use
any common cross-platform, true concurrency strategies in my games. On
top of that, I can't really use any form of concurrency on Win32.

Let's hope we get some super-fast, SMP-friendly backends for PyPy
sooner rather than later!

-Sw.
Nobody seems to have mentioned POSH (http://poshmodule.sourceforge.net),
which almost used to work. I assume it's busted for later Pythons, and
the author says it's just a demonstration.

AnandTech demoed an 8-core Mac Pro machine and were unable to "max out
the CPUs". Python needs some kind of multi-CPU magic pretty quickly,
or we'll all end up using Erlang :)

As for subprocess, I don't see it as much use unless we can at least
determine the number of CPUs and also set the CPU affinity; easy with
Occam, maybe not in Python.
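
Counting CPUs is at least doable with a little platform sniffing
(affinity is another matter); a sketch:

    # cpucount.py -- best-effort CPU count on Win32 and POSIX
    import os

    def cpu_count():
        if hasattr(os, "sysconf"):        # POSIX
            try:
                return os.sysconf("SC_NPROCESSORS_ONLN")
            except (ValueError, OSError):
                pass
        # Win32 publishes the count in the environment
        return int(os.environ.get("NUMBER_OF_PROCESSORS", 1))

    print "CPUs:", cpu_count()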
 

Paul Rubin

Robin Becker said:
Nobody seems to have mentioned POSH (http://poshmodule.sourceforge.net),
which almost used to work. I assume it's busted for later Pythons, and
the author says it's just a demonstration.

Yeah, it's been mentioned.
AnandTech demoed an 8-core Mac Pro machine and were unable to "max out
the CPUs".

You mean with POSH??! And I see a 4-core machine on Apple's site, but
not an 8-core.
Python needs some kind of multi-CPU magic pretty quickly, or we'll all
end up using Erlang :)

Heh, yeah ;).
As for subprocess, I don't see it as much use unless we can at least
determine the number of CPUs and also set the CPU affinity; easy with
Occam, maybe not in Python.

I haven't looked at Occam. I've been sort of interested in Alice (a
concurrent ML dialect), though apparently the current version is
interpreted. Erlang is of somewhat less interest to me because it's
dynamically typed, like Python. Not that dynamic typing is bad, but
I'm already familiar with Python (and previously Lisp), so I'd like to
try out one of the type-inferenced languages in order to get a feel
for the difference. I'd also like to start using something with a
serious optimizing compiler (MLton, OCaml), but right now none of
these support concurrency.

I guess I should check into GHC:

http://www.haskell.org/haskellwiki/GHC/Concurrency
 

Simon Wittber

Paul said:
Sorry! My motivation was to support operating systems that I personally
care about and which also have solutions for transparent process
migration, which I believe could be used in conjunction with the
pprocess module for better-than-thread parallelisation.

Ah, don't be sorry! I don't care about Win32 a whole lot either; it's
just that most of my target market uses Win32...
Rumour has it that recent versions of Windows provide fork-like
semantics through a system call. Your mission is to integrate this
transparently into the standard library's os.fork function. ;-)

I'm not sure I'm up to this kind of low level stuff, though if the itch
really starts to _itch_, I might have a crack at scratching it. :)


-Sw
 

Robin Becker

Paul said:
Yeah, it's been mentioned.


You mean with POSH??! And I see a 4-core machine on Apple's site, but
not an 8-core.

No, I think they just tried to run a lot of processes at once, and
they got the 8 cores by substituting two quad cores for the two dual
cores.
Heh, yeah ;).


I haven't looked at Occam. I've been sort of interested in Alice

I used Occam back in the eighties with IBM PCs and these 4-transputer
plugin cards. One of my bosses was a Scottish MP, heavily into
macroeconomic modelling (and also an Inmos supporter). I seem to
remember doing chaotic Gauss-Seidel with parallel equation-block
solving; completely pointless, as the politicos just ignored any
apparent results. Back of the envelope is good enough for war and
peace, it seems.

I suppose Alice isn't related to the "Alice machine", which was a
tagged pool processor of some kind. I recall it being delivered just
when Prolog and the like were going out of fashion, and it never got
faster than a Z80 on a hot day.
 

Paul Rubin

Robin Becker said:
No, I think they just tried to run a lot of processes at once, and
they got the 8 cores by substituting two quad cores for the two dual
cores.

Huh?! There are no quad-core x86 CPUs as far as I know ;).
I used Occam back in the eighties with IBM PCs and these 4-transputer
plugin cards. One of my bosses was a Scottish MP, heavily into
macroeconomic modelling (and also an Inmos supporter). I seem to
remember doing chaotic Gauss-Seidel with parallel equation-block
solving; completely pointless, as the politicos just ignored any
apparent results. Back of the envelope is good enough for war and
peace, it seems.

Heh :). OK, yeah, I remember Occam now; it used CSP (communicating
sequential processes) for concurrency, if I recall, sort of like
Erlang?
I suppose Alice isn't related to the "Alice machine", which was a
tagged pool processor of some kind. I recall it being delivered just
when Prolog and the like were going out of fashion, and it never got
faster than a Z80 on a hot day.

No, I don't think so. It's Standard ML with some concurrency extensions
and a really nice toolkit:

http://www.ps.uni-sb.de/alice/

Actually I'm not sure now whether it supports real multiprocessor
concurrency. It looks cool anyway.
 
M

mystilleef

I use D-Bus (from Python), and I recommend it. I don't know how
cross-platform it is. However, it supports message passing of most
built-in Python types (strings, ints, lists, dictionaries, etc.)
across processes. You can mimic clean Erlang-like concurrency with it.
It is the future of IPC on the Unix desktop. Given Python's crippled
threading implementation, it can play a role in making your Python
applications scalable with regard to concurrency. I am recommending
D-Bus because I have used it and I know it works; I didn't just read
this off a newsgroup or mailing list.

http://www.freedesktop.org/wiki/Software/dbus
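
For a flavour of it, the client side with the dbus-python bindings
looks roughly like this (a sketch; the bus name, object path and
method are made up, and the worker process would need to export a
dbus.service.Object and run a main loop):

    # dbus_client.py -- call a method on another process via the session bus
    import dbus

    bus = dbus.SessionBus()
    # 'org.example.GameWorker' and '/Worker' are hypothetical names a
    # worker process would have registered on the bus
    worker = bus.get_object("org.example.GameWorker", "/Worker")

    # plain Python types (ints, strings, lists, dicts) cross the bus
    result = worker.Simulate([1, 2, 3], {"gravity": 9.8},
                             dbus_interface="org.example.GameWorker")
    print "worker returned:", result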
 

Wolfgang Keller

Are there any other concurrency options I've not discovered yet?

PyMPI?

IronPython?

Sincerely,

Wolfgang Keller
 
