parallel ftp uploads and pool size

Discussion in 'Python' started by ben@bbarker.co.uk, Jan 9, 2013.

  Guest

    Hello,

    I have a python script that uploads multiple files from the local machine to a remote server in parallel via FTP, using a process pool:

    p = Pool(processes=x)

    Now as I increase the value of x, the overall upload time for all files drops, as expected. If I set x too high, however, an exception is thrown. The exact value at which this happens varies, but it's around 20:
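    The relevant part of the script looks roughly like this (a reconstruction from the traceback below; the host, credentials, and file pattern are placeholders, not my real values):

    ```python
    # Sketch of the parallel uploader: one FTP connection per worker process.
    import glob
    from ftplib import FTP
    from multiprocessing import Pool

    files = glob.glob("/path/to/local/*.dat")  # placeholder file pattern

    def upload_function(path):
        # Each pool worker opens its own control connection to the server.
        ftp = FTP("ftp.example.com")            # placeholder host
        ftp.login("user", "password")           # placeholder credentials
        with open(path, "rb") as f:
            ftp.storbinary("STOR " + path.rsplit("/", 1)[-1], f)
        ftp.quit()

    class FTP_Upload(object):
        def multiupload(self, x=8):
            p = Pool(processes=x)               # x simultaneous connections
            p.map(upload_function, files)
            p.close()
            p.join()

    # FTP_Upload().multiupload()  # run against a real server
    ```
    
    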

    Traceback (most recent call last):
    File "uploadFTP.py", line 59, in <module>
    FTP_Upload().multiupload()
    File "uploadFTP.py", line 56, in multiupload
    p.map(upload_function,files)
    File "/usr/lib64/python2.6/multiprocessing/pool.py", line 148, in map
    return self.map_async(func, iterable, chunksize).get()
    File "/usr/lib64/python2.6/multiprocessing/pool.py", line 422, in get
    raise self._value
    EOFError

    Now this is not a problem - 20 is more than enough - but I'm trying to understand the mechanisms involved, and why the exact number of processes at which this exception occurs seems to vary.

    I guess it comes down to the current resources of the server itself...but any insight would be much appreciated!
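    For what it's worth, an EOFError raised out of ftplib generally means the server closed the control connection mid-conversation, which many FTP servers do once a client exceeds their simultaneous-connection limit; since that limit depends on server configuration and current load, the threshold would move around. A sketch of one workaround is to catch the error in the worker and retry with backoff (host and credentials are placeholders):

    ```python
    # Sketch: retry an upload when the server drops the connection.
    # EOFError / error_temp from ftplib typically indicate the server
    # closed or refused the connection under load.
    import time
    from ftplib import FTP, error_temp

    def upload_with_retry(path, attempts=3, delay=2):
        for attempt in range(attempts):
            try:
                ftp = FTP("ftp.example.com")       # placeholder host
                ftp.login("user", "password")      # placeholder credentials
                with open(path, "rb") as f:
                    ftp.storbinary("STOR " + path.rsplit("/", 1)[-1], f)
                ftp.quit()
                return True
            except (EOFError, error_temp):
                time.sleep(delay * (attempt + 1))  # back off, then retry
        return False                               # gave up after all attempts
    ```
    
    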
    Jan 9, 2013
    #1
