Poor python and/or Zope performance on Sparc

J

joa2212

Hello everybody,

I'm posting this message because I'm quite frustrated.

We just bought a piece of software from a small vendor. In the
beginning he hosted our application on a small server at his office; I
think it was a Fujitsu-Siemens x86 running Debian Linux. The
performance of the DSL line was very poor, so we decided to buy our
own machine and host the application ourselves.

The application is based on the Zope application server (2.8.8-final
with Python 2.3.6) along with many other packages like Ghostview,
Postgres, FreeType, the Python Imaging Library, etc. I once saw the
application running at my vendor's office and it ran very well.

So I thought that, once the bottleneck of the poor DSL line was gone,
the software would run as fast as it did at his office. But I was
wrong: the performance is more than poor.

We have a Sun T1000 with 8 cores and 8 GB of RAM. First I installed
Solaris 10, because I know that OS better than Debian Linux. Result:
poor performance. After that I decided to migrate to Debian:

root@carmanager > uname -a
Linux carmanager 2.6.18-5-sparc64-smp #1 SMP Wed Oct 3 04:16:38 UTC
2007 sparc64 GNU/Linux

Result: almost even worse. The application is not scaling at all.
Every request you start sits on one CPU and devours it at about
90-100% load, while the other 31 CPUs shown in "mpstat" idle at
0% load.

If anybody needs further information about installed packages, I'll
post it here.

Any hints are appreciated!
Thanks, Joe.

PS: Fortunately the Sun hasn't actually been bought yet; it's a
"try & buy" from my local dealer. So if there are hints like "buy a
new machine because Sun is crap", I will _not_ refuse to obey.
 
G

George Sakkis

Result: almost even worse. The application is not scaling at all.
Every request you start sits on one CPU and devours it at about
90-100% load, while the other 31 CPUs shown in "mpstat" idle at
0% load.

You are probably not aware of Python's Global Interpreter Lock:
http://docs.python.org/api/threads.html.
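
To make the effect concrete, here's a minimal sketch (not your application's code, just an illustrative CPU-bound loop): under the GIL, two Python threads splitting a pure-Python workload take roughly as long as one thread doing all of it, because only one thread can execute Python bytecode at a time.

```python
import threading
import time

def busy(n):
    # pure-Python CPU-bound loop; the GIL serializes this work
    while n:
        n -= 1

N = 10_000_000

# one thread doing all the work
t0 = time.perf_counter()
busy(N)
serial = time.perf_counter() - t0

# two threads splitting the same work -- no speedup under the GIL
t0 = time.perf_counter()
threads = [threading.Thread(target=busy, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - t0

print(f"serial: {serial:.2f}s  two threads: {threaded:.2f}s")
```

On a machine like the T1000, whose single-thread performance is weak, that serialization hurts far more than on a fast x86 core.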

George
 
J

joa2212

You are probably not aware of Python's Global Interpreter Lock: http://docs.python.org/api/threads.html.

George

Hi George,

yes, that's right, I wasn't aware of this. If I understand you
correctly, the problem lies in how threads are implemented in the
software we are using. But tell me one thing: why does this software
run like crazy on an Intel machine with a single CPU (perhaps dual
core?) yet slow as a snail on Sparc? It's exactly the same source code
on both platforms. Of course, all relevant packages (Python + Zope)
were recompiled on the Sparc.

Sorry for my questions; I'm really not a software developer. I'm just
a little helpless, because my software vendor can't tell me anything
about my concerns.

Joe.
 
I

Ivan Voras

joa2212 said:
We have a Sun T1000 with 8 cores and 8 GB of RAM. First, I installed
Solaris 10 because I know this OS better than Debian Linux. Result:
poor Performance. After that I decided to migrate to debian:

Do you know the architecture of this machine? It's extremely streamlined
for data throughput (I/O) at the expense of computational ability. In
particular:

- it runs at a relatively low clock speed (1 GHz - 1.4 GHz)
- it's terrible at floating-point calculations because there is only
one FPU shared by all 32 logical processors

While it will be screamingly fast at serving static content, and pretty
decent for light database jobs, this is not an optimal platform for
dynamic web applications, especially in Python, since the language is
so dynamic and its interpreter doesn't support SMP. Since it's so
optimized for a certain purpose, you can fairly consider it a
special-purpose machine rather than a general-purpose one.

Even if you manage to get Zope to spawn parallel request handlers
(probably via something like FastCGI), if the web application is
CPU-intensive you won't be happy with its performance (for CPU-intensive
tasks you probably don't want to spawn more than 8 handlers, since
that's the number of physical cores in the CPU).



 
P

pythoncurious

Result: almost even worse. The application is not scaling at all.
Every request you start sits on one CPU and devours it at about
90-100% load, while the other 31 CPUs shown in "mpstat" idle at
0% load.

As others have already pointed out: performance is going to suffer if
you run something single-threaded on a T1000.

We've been using the T1000 for a while (though not for Python) and
it's actually really fast. The performance is way better than the 8
cores imply; in fact, it'll do not far from 32 times the work of a
single thread.

Of course, it all depends on what your application is like, but try
experimenting a bit with parallelism. Try varying from 4 up to 64
parallel worker processes to see what kind of effect you get.

I'm guessing that whatever caching of persistent data you can do will
help as well, as the disk might otherwise end up working really hard.
 
