Extracting multiple zip files in a directory


Lorn

I've been working on this code somewhat successfully; however, I'm unable
to get it to iterate through all the zip files in the directory. As of
now it only extracts the first one it finds. If anyone could lend some
tips on how my iteration scheme should look, it would be hugely
appreciated. Here is what I have so far:

import zipfile, glob, os
from os.path import isfile

fname = filter(isfile, glob.glob('*.zip'))
for fname in fname:
    zf = zipfile.ZipFile(fname, 'r')
    for file in zf.namelist():
        newFile = open(file, "wb")
        newFile.write(zf.read(file))
        newFile.close()
    zf.close()
 

John Machin

I've been working on this code somewhat successfully; however, I'm unable
to get it to iterate through all the zip files in the directory. As of
now it only extracts the first one it finds. If anyone could lend some
tips on how my iteration scheme should look, it would be hugely
appreciated. Here is what I have so far:

import zipfile, glob, os
from os.path import isfile

fname = filter(isfile, glob.glob('*.zip'))
for fname in fname:

Here's your main problem. See replacement below.
    zf = zipfile.ZipFile(fname, 'r')
    for file in zf.namelist():
        newFile = open(file, "wb")
        newFile.write(zf.read(file))
        newFile.close()
    zf.close()

zipnames = filter(isfile, glob.glob('*.zip'))
for zipname in zipnames:
    zf = zipfile.ZipFile(zipname, 'r')
    for zfilename in zf.namelist():  # don't shadow the "file" builtin
        newFile = open(zfilename, "wb")
        newFile.write(zf.read(zfilename))
        newFile.close()
    zf.close()

Instead of filter, consider:

zipnames = [x for x in glob.glob('*.zip') if isfile(x)]

Cheers,
John
 

Lorn

Thanks John, this works great!

I was wondering what your reasoning is behind replacing "filter" with
the "x for x" statement?

Appreciate the help, thanks again.

Lorn
 

John Machin

I was wondering what your reasoning is behind replacing "filter" with
the "x for x" statement?

map, filter, and reduce tend to be deprecated in some quarters since
list comprehensions came in [and fiercely defended in other quarters].
I'm just evangelising :)
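For anyone comparing the two spellings, a small self-contained sketch of the equivalence (using a toy predicate rather than isfile, so it runs anywhere):

```python
# filter() and a list comprehension select the same elements;
# the comprehension just spells the predicate inline.
nums = [1, 2, 3, 4, 5, 6]

evens_filter = list(filter(lambda n: n % 2 == 0, nums))
evens_comp = [n for n in nums if n % 2 == 0]

assert evens_filter == evens_comp == [2, 4, 6]
```

Note that in Python 3, filter() returns a lazy iterator rather than a list, which is one more reason the comprehension is often the clearer choice when you actually want a list.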
 

Lorn

Ok, I probably should have seen this coming. Working with small zip
files is no problem with the above script. However, when opening a 120+
MB compressed file that uncompresses to over 1 GB, I unfortunately get
memory errors. Is this because Python is holding the extracted file in
memory, as opposed to spooling it to disk, before writing? Does anyone
know a way around this? As of now, I'm out of ideas :(
 
