Parallel FTP uploads and pool size

I have a Python script that uploads multiple files from the local machine to a remote server in parallel via FTP, using a process pool:

p = Pool(processes=x)
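For context, a minimal sketch of the kind of script described above, assuming each worker opens its own FTP connection (the host, credentials, and function names here are hypothetical, not taken from the original script):

```python
from ftplib import FTP
from multiprocessing import Pool

# Hypothetical connection details -- substitute your own.
HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"

def upload_one(path):
    """Upload a single file over its own FTP connection."""
    ftp = FTP(HOST)
    ftp.login(USER, PASSWORD)
    with open(path, "rb") as fh:
        ftp.storbinary("STOR " + path, fh)
    ftp.quit()
    return path

def multiupload(files, x):
    """Upload `files` in parallel using a pool of x worker processes."""
    pool = Pool(processes=x)
    try:
        return pool.map(upload_one, files)
    finally:
        pool.close()
        pool.join()
```

Each worker process holds one open FTP connection, so a pool of x workers means up to x simultaneous connections to the server.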

Now, as I increase the value of x, the overall upload time for all files drops, as expected. If I set x too high, however, an exception is thrown. The exact value at which this happens varies, but it is around 20:

Traceback (most recent call last):
  File "", line 59, in <module>
  File "", line 56, in multiupload
  File "/usr/lib64/python2.6/multiprocessing/", line 148, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib64/python2.6/multiprocessing/", line 422, in get
    raise self._value
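One thing worth noting about that traceback: `Pool.map` collects results via `map_async(...).get()`, and if any worker raised, `get()` re-raises that exception in the parent process (the `raise self._value` line). So the traceback ends inside multiprocessing, but the real error originated in a child, most likely from `ftplib`. A small demonstration with a stand-in worker (`flaky_upload` is hypothetical, just to show the re-raise behaviour):

```python
from multiprocessing import Pool

def flaky_upload(n):
    # Stand-in for the real upload worker: item 3 simulates a
    # failed FTP transfer by raising inside the child process.
    if n == 3:
        raise RuntimeError("simulated upload failure for item %d" % n)
    return n

if __name__ == "__main__":
    pool = Pool(processes=2)
    try:
        pool.map(flaky_upload, range(5))
    except RuntimeError as exc:
        # The child's exception is pickled, sent back, and re-raised
        # here by map_async(...).get() -- the `raise self._value` line.
        print("parent re-raised: %s" % exc)
    finally:
        pool.close()
        pool.join()
```

This also explains why the traceback alone doesn't say which FTP error occurred: only the exception object survives the trip back to the parent, not the child's stack.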

Now this is not a problem - 20 is more than enough - but I'm trying to understand the mechanisms involved, and why the exact number of processes at which this exception occurs seems to vary.

I guess it comes down to the current resources of the server itself... but any insight would be much appreciated!
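That guess is plausible: many FTP servers cap the number of simultaneous connections and reply with a temporary error such as "421 Too many connections" when the cap is hit, which `ftplib` raises as `ftplib.error_temp`. Since the cap is shared with every other client, the threshold would naturally vary over time. A hedged sketch of one way to cope, wrapping the per-file upload in a retry with backoff (`upload_with_retry` is an illustrative helper, not part of the original script):

```python
import time
import ftplib

def upload_with_retry(upload, path, attempts=3, delay=1.0):
    # Retry transient FTP failures (4xx replies such as
    # "421 Too many connections", raised as ftplib.error_temp),
    # pausing a little longer before each new attempt.
    for attempt in range(attempts):
        try:
            return upload(path)
        except ftplib.error_temp:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay * (attempt + 1))
```

With something like this in the worker, an over-sized pool degrades to slower uploads instead of crashing the whole `map` call.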

