creating size-limited tar files


andrea crotti

Simple problem: given a lot of data spread over many files/directories, I
need to create a tar archive split into chunks <= a given size.

The simplest way would be to compress the whole thing and then split.

At the moment the actual script which I'm replacing is doing a
"system('split..')", which is not that great, so I would like to do it
while compressing.

So I thought about (in pseudocode)


while remaining_files:
    tar_file.addfile(remaining_files.pop())
    if size(tar_file) >= limit:
        close(tar_file)
        tar_file = new_tar_file()

which might work, maybe, but how do I get the current size? There should be
tarinfo.size, but it doesn't exist on a TarFile opened in write mode, so
should I do a stat after each flush?
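
Fleshed out a bit, maybe something like this (untested sketch; split_tar and
LIMIT are names I'm making up here, and it writes a plain uncompressed tar so
that tell() on the file object I open myself gives the current archive size):

import tarfile

LIMIT = 100 * 1024 * 1024   # made-up chunk limit in bytes

def split_tar(paths, prefix, limit=LIMIT):
    """Pack paths into prefix-000.tar, prefix-001.tar, ..., starting a new
    archive once the current one reaches the limit (the check happens after
    adding, so a chunk can overshoot by one file plus tar overhead)."""
    index = 0
    out = open("%s-%03d.tar" % (prefix, index), "wb")
    tar = tarfile.open(fileobj=out, mode="w")
    for path in paths:
        tar.add(path)
        if out.tell() >= limit:          # bytes written to the current chunk
            tar.close()
            out.close()
            index += 1
            out = open("%s-%03d.tar" % (prefix, index), "wb")
            tar = tarfile.open(fileobj=out, mode="w")
    tar.close()
    out.close()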

Any other, better ideas?
thanks
 

Neil Cerutti

Simple problem: given a lot of data spread over many files/directories, I
need to create a tar archive split into chunks <= a given size.

The simplest way would be to compress the whole thing and then split.

At the moment the actual script which I'm replacing is doing a
"system('split..')", which is not that great, so I would like to do it
while compressing.

So I thought about (in pseudocode)

while remaining_files:
    tar_file.addfile(remaining_files.pop())
    if size(tar_file) >= limit:
        close(tar_file)
        tar_file = new_tar_file()

I have not used this module before, but what you seem to be
asking about is:

TarFile.gettarinfo().size

But your algorithm stops after the file is already too big.
 

Alexander Blinne

I don't know the best way to find the current size; I only have a
general remark.
This solution is not so good if you have to impose a hard limit on the
resulting file size. You could end up having a tar file of size "limit +
size of biggest file - 1 + overhead" in the worst case if the tar is at
limit - 1 and the next file is the biggest file. Of course that may be
acceptable in many cases or it may be acceptable to do something about
it by adjusting the limit.

My idea:
Assuming tar_file works on some object with a file-like interface, one
could implement a "transparent splitting file" class, which would have to
use some kind of buffering mechanism. It would represent a virtual big
file that is stored in many pieces of fixed size (except the last) and
would allow you to just add all files to one tar_file and have it split
up transparently by the underlying file object, something like:

tar_file = TarFile(SplittingFile(names='archiv.tar-%03d', chunksize=chunksize, mode='wb'))
while remaining_files:
    tar_file.addfile(remaining_files.pop())

and the splitting_file would automatically create chunks with size
chunksize and filenames archiv.tar-001, archiv.tar-002, ...

The same class could be used to put it back together; it may even
implement transparent seeking over a set of pieces of a big file. I
would like to have such a class around for general usage.
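
A rough, untested sketch of what I have in mind (the class name, the
chunk-naming template and the use of tarfile's streaming mode are just
illustrative):

import tarfile

class SplittingFile(object):
    """Write-only file-like object that spreads whatever is written to it
    over fixed-size chunk files named template % 1, template % 2, ..."""

    def __init__(self, template, chunksize):
        self.template = template          # e.g. 'archiv.tar-%03d'
        self.chunksize = chunksize
        self.index = 1
        self.written = 0                  # bytes in the current chunk
        self.chunk = open(template % self.index, 'wb')

    def write(self, data):
        while data:
            if self.written == self.chunksize:    # current chunk is full
                self.chunk.close()
                self.index += 1
                self.written = 0
                self.chunk = open(self.template % self.index, 'wb')
            room = self.chunksize - self.written
            self.chunk.write(data[:room])
            self.written += min(len(data), room)
            data = data[room:]

    def close(self):
        self.chunk.close()

out = SplittingFile('archiv.tar-%03d', chunksize=100 * 1024 * 1024)
tar = tarfile.open(fileobj=out, mode='w|')   # streaming mode: only write() is needed
tar.add('/path/to/data')
tar.close()
out.close()

Every chunk except the last would be exactly chunksize bytes, so
"cat archiv.tar-*" would give back one ordinary tar stream.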

greetings
 

Roy Smith

Alexander Blinne said:
I don't know the best way to find the current size; I only have a
general remark.
This solution is not so good if you have to impose a hard limit on the
resulting file size. You could end up having a tar file of size "limit +
size of biggest file - 1 + overhead" in the worst case if the tar is at
limit - 1 and the next file is the biggest file. Of course that may be
acceptable in many cases or it may be acceptable to do something about
it by adjusting the limit.

If you truly have a hard limit, one possible solution would be to use
tell() to checkpoint the growing archive after each addition. If adding
a new file unexpectedly causes you to exceed your hard limit, you can
seek() back to the previous spot and truncate the file there.

Whether this is worth the effort is an exercise left for the reader.
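
In code the checkpoint-and-rollback might look roughly like this (untested
sketch: pack_with_rollback is a made-up name, it assumes a plain uncompressed
tar, and after a rollback it finishes the chunk by hand because TarFile.close()
would use its now-stale internal offset):

import tarfile

BLOCKSIZE = 512
RECORDSIZE = 20 * BLOCKSIZE        # default tar blocking factor

def finish_archive(raw):
    """Write the end-of-archive marker (two zero blocks), pad the file to a
    whole number of records and close it."""
    raw.write(b"\0" * (2 * BLOCKSIZE))
    remainder = raw.tell() % RECORDSIZE
    if remainder:
        raw.write(b"\0" * (RECORDSIZE - remainder))
    raw.close()

def pack_with_rollback(paths, prefix, limit):
    index = 0
    raw = open("%s-%03d.tar" % (prefix, index), "wb")
    tar = tarfile.open(fileobj=raw, mode="w")
    for path in paths:
        checkpoint = raw.tell()
        tar.add(path)
        end = raw.tell() + 2 * BLOCKSIZE                      # end-of-archive marker
        end += (RECORDSIZE - end % RECORDSIZE) % RECORDSIZE   # plus record padding
        if end > limit:
            raw.seek(checkpoint)           # roll back the member we just added
            raw.truncate()
            finish_archive(raw)
            index += 1
            raw = open("%s-%03d.tar" % (prefix, index), "wb")
            tar = tarfile.open(fileobj=raw, mode="w")
            tar.add(path)                  # a single file bigger than the limit still overshoots
    finish_archive(raw)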
 

Andrea Crotti

If you truly have a hard limit, one possible solution would be to use
tell() to checkpoint the growing archive after each addition. If adding
a new file unexpectedly causes you to exceed your hard limit, you can
seek() back to the previous spot and truncate the file there.

Whether this is worth the effort is an exercise left for the reader.

So I'm not sure if it's a hard limit or not, but I'll check tomorrow.
But in general, for the size, I could also take the sizes of the individual
files and simply estimate the total, pushing in as many as should fit in
each tarfile.
With compression I might end up with a much smaller file, but it would be
much easier..
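
Something like this, for instance (untested sketch; tar_entry_size and
plan_chunks are names I'm making up, and it only counts regular files,
ignoring directories, long-name headers and record padding):

import os

BLOCK = 512   # tar writes everything in 512-byte blocks

def tar_entry_size(path):
    """Rough space a regular file takes in an uncompressed tar: one
    512-byte header plus the data padded up to a block boundary."""
    size = os.path.getsize(path)
    return BLOCK + ((size + BLOCK - 1) // BLOCK) * BLOCK

def plan_chunks(paths, limit):
    """Greedily group files so that each group's estimated tar size stays
    under limit; compression should only make the real chunks smaller."""
    chunks, current, used = [], [], 2 * BLOCK   # leave room for the end-of-archive blocks
    for path in paths:
        need = tar_entry_size(path)
        if current and used + need > limit:
            chunks.append(current)
            current, used = [], 2 * BLOCK
        current.append(path)
        used += need
    if current:
        chunks.append(current)
    return chunks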

But the other problem is that at the moment the people that get our
chunks reassemble the file with a simple:

cat file1.tar.gz file2.tar.gz > file.tar.gz

which I suppose is not going to work if I create 2 different tar files,
since it would recreate the header in all of them, right?
So either I also provide a script to reassemble everything, or I have to
split in a more "brutal" way..

Maybe after all doing the final split was not too bad; I'll first check
whether it's actually more expensive for the filesystem (which is very, very
slow) or not a big deal...
 

Oscar Benjamin

But the other problem is that at the moment the people that get our chunks
reassemble the file with a simple:

cat file1.tar.gz file2.tar.gz > file.tar.gz

which I suppose is not going to work if I create 2 different tar files,
since it would recreate the header in all of them, right?

Correct. But if you read the rest of Alexander's post you'll find a
suggestion that would work in this case and that can guarantee to give
files of the desired size.

You just need to define your own class that implements a write()
method and then distributes any data it receives to separate files.
You can then pass this as the fileobj argument to the tarfile.open
function:
http://docs.python.org/2/library/tarfile.html#tarfile.open


Oscar
 

andrea crotti

2012/11/7 Oscar Benjamin said:
Correct. But if you read the rest of Alexander's post you'll find a
suggestion that would work in this case and that can guarantee to give
files of the desired size.

You just need to define your own class that implements a write()
method and then distributes any data it receives to separate files.
You can then pass this as the fileobj argument to the tarfile.open
function:
http://docs.python.org/2/library/tarfile.html#tarfile.open


Oscar



Yes, yes, I saw the answer, but now I'm thinking that what I need is
simply this:
tar czpvf - /path/to/archive | split -d -b 100M - tardisk

Since it only needs to run on Linux it's probably way easier; my script
will then only need to create the list of files to tar..

The only doubt is whether this is more or less reliable than doing it in
Python: when can this fail with some bad broken pipe?
(the filesystem is not very good, as I said, and it's mounted over NFS)
 

andrea crotti

2012/11/8 andrea crotti said:
Yes, yes, I saw the answer, but now I'm thinking that what I need is
simply this:
tar czpvf - /path/to/archive | split -d -b 100M - tardisk

Since it only needs to run on Linux it's probably way easier; my script
will then only need to create the list of files to tar..

The only doubt is whether this is more or less reliable than doing it in
Python: when can this fail with some bad broken pipe?
(the filesystem is not very good, as I said, and it's mounted over NFS)

In the meantime I tried a couple of things, and using the pipe on
Linux actually works very nicely; it's even faster than plain tar for
some reason..

[andrea@andreacrotti isos]$ time tar czpvf - file1.avi file2.avi | split -d -b 1000M - inchunks
file1.avi
file2.avi

real 1m39.242s
user 1m14.415s
sys 0m7.140s

[andrea@andreacrotti isos]$ time tar czpvf total.tar.gz file1.avi file2.avi
file1.avi
file2.avi

real 1m41.190s
user 1m13.849s
sys 0m5.723s

[andrea@andreacrotti isos]$ time split -d -b 1000M total.tar.gz inchunks

real 0m55.282s
user 0m0.020s
sys 0m3.553s
 

andrea crotti

Anyway, in the meantime I implemented the tar-and-split as shown below.
It works very well and it's probably much faster, but the downside is that
I give away control to tar and split..

import logging
import subprocess
from glob import glob
from os import remove

logger = logging.getLogger(__name__)


def tar_and_split(inputfile, output, bytes_size=None):
    """Take the file containing the list of files to compress, the bytes
    desired for the split and the base name of the output file.
    """
    # cleanup first
    for fname in glob(output + "*"):
        logger.debug("Removing old file %s" % fname)
        remove(fname)

    out = '-' if bytes_size else (output + '.tar.gz')
    cmd = "tar czpf {} $(cat {})".format(out, inputfile)
    if bytes_size:
        cmd += " | split -b {} -d - {}".format(bytes_size, output)

    logger.info("Running command %s" % cmd)

    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if err:
        logger.error("Got error messages %s" % err)

    logger.info("Output %s" % out)

    if proc.returncode != 0:
        logger.error("Something failed running %s, need to re-run" % cmd)
        return False
 

andrea crotti

2012/11/9 andrea crotti said:
Anyway, in the meantime I implemented the tar-and-split as shown below.
It works very well and it's probably much faster, but the downside is that
I give away control to tar and split..

import logging
import subprocess
from glob import glob
from os import remove

logger = logging.getLogger(__name__)


def tar_and_split(inputfile, output, bytes_size=None):
    """Take the file containing the list of files to compress, the bytes
    desired for the split and the base name of the output file.
    """
    # cleanup first
    for fname in glob(output + "*"):
        logger.debug("Removing old file %s" % fname)
        remove(fname)

    out = '-' if bytes_size else (output + '.tar.gz')
    cmd = "tar czpf {} $(cat {})".format(out, inputfile)
    if bytes_size:
        cmd += " | split -b {} -d - {}".format(bytes_size, output)

    logger.info("Running command %s" % cmd)

    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if err:
        logger.error("Got error messages %s" % err)

    logger.info("Output %s" % out)

    if proc.returncode != 0:
        logger.error("Something failed running %s, need to re-run" % cmd)
        return False


There is another problem with this solution: if I run something like
this with Popen:

cmd = "tar {bigc} -czpf - --files-from {inputfile} | split -b {bytes_size} -d - {output}"

proc = subprocess.Popen(to_run, shell=True,  # to_run is cmd with the placeholders filled in
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

the proc.returncode will only be the one from "split", so I lose the
ability to check if tar failed..

A solution would be something like this:
{ ls -dlkfjdsl; echo $? > tar.status; } | split

but it's a bit ugly. I wonder if I can use subprocess PIPEs to do
the same thing: will it be as fast and work in the same way?
 

Ian Kelly

but it's a bit ugly. I wonder if I can use subprocess PIPEs to do
the same thing: will it be as fast and work in the same way?

It'll look something like this:
[code example not preserved in the archive]

Note that there's a subtle potential for deadlock here. During the
p1.communicate() call, if the p2 output buffer fills up, then it will
stop accepting input from p1 until p2.communicate() can be called, and
then if that buffer also fills up, p1 will hang. Additionally, if p2
needs to wait on the parent process for some reason, then you end up
effectively serializing the two processes.

Solution would be to poll all the open-ended pipes in a select() loop
instead of using communicate(), or perhaps make the two communicate
calls simultaneously in separate threads.
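
A rough, untested sketch of the threaded variant (the tar and split command
lines and file names here are just placeholders):

import threading
from subprocess import Popen, PIPE

p1 = Popen(["tar", "czpf", "-", "--files-from", "files.txt"],
           stdout=PIPE, stderr=PIPE)
p2 = Popen(["split", "-b", "100M", "-d", "-", "chunk"],
           stdin=p1.stdout, stderr=PIPE)
p1.stdout.close()   # drop the parent's copy so tar gets SIGPIPE if split dies
p1.stdout = None    # ...and so p1.communicate() doesn't try to read from it

errors = {}

def drain(name, proc):
    errors[name] = proc.communicate()[1]   # collect stderr until the process exits

threads = [threading.Thread(target=drain, args=("tar", p1)),
           threading.Thread(target=drain, args=("split", p2))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(p1.returncode, p2.returncode)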
 

Ian Kelly

It'll look something like this:

[code example not preserved in the archive]

Note that there's a subtle potential for deadlock here. During the
p1.communicate() call, if the p2 output buffer fills up, then it will
stop accepting input from p1 until p2.communicate() can be called, and
then if that buffer also fills up, p1 will hang. Additionally, if p2
needs to wait on the parent process for some reason, then you end up
effectively serializing the two processes.

Solution would be to poll all the open-ended pipes in a select() loop
instead of using communicate(), or perhaps make the two communicate
calls simultaneously in separate threads.

Sorry, the example I gave above is wrong. If you're calling
p1.communicate(), then you need to first remove the p1.stdout pipe
from the Popen object. Otherwise, the communicate() call will try to
read data from it and may "steal" input from p2. It should look more
like this:
 

Ian Kelly

Sorry, the example I gave above is wrong. If you're calling
p1.communicate(), then you need to first remove the p1.stdout pipe
from the Popen object. Otherwise, the communicate() call will try to
read data from it and may "steal" input from p2. It should look more
like this:

Per the docs, that third line should be "p1.stdout.close()". :p
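
So the corrected pattern, put together (a reconstruction rather than the lost
examples above, following the "Replacing shell pipeline" recipe in the
subprocess docs; the command lines and file names are placeholders):

from subprocess import Popen, PIPE

p1 = Popen(["tar", "czpf", "-", "--files-from", "files.txt"], stdout=PIPE)
p2 = Popen(["split", "-b", "100M", "-d", "-", "chunk"], stdin=p1.stdout)
p1.stdout.close()   # per the docs: lets tar receive SIGPIPE if split exits early
p2.communicate()    # wait for split to finish
p1.wait()           # now both exit statuses are available, unlike with shell=True
print(p1.returncode, p2.returncode)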
 

Kushal Kumaran

Ian Kelly said:
It'll look something like this:

[code example not preserved in the archive]

Note that there's a subtle potential for deadlock here. During the
p1.communicate() call, if the p2 output buffer fills up, then it will
stop accepting input from p1 until p2.communicate() can be called, and
then if that buffer also fills up, p1 will hang. Additionally, if p2
needs to wait on the parent process for some reason, then you end up
effectively serializing the two processes.

Solution would be to poll all the open-ended pipes in a select() loop
instead of using communicate(), or perhaps make the two communicate
calls simultaneously in separate threads.

Or, you could just change p1's stderr to an io.BytesIO instance.
Then call p2.communicate *first*.
 

Ian Kelly

Or, you could just change p1's stderr to an io.BytesIO instance.
Then call p2.communicate *first*.

This doesn't seem to work:

>>> import io, subprocess
>>> b = io.BytesIO()
>>> p = subprocess.Popen(["ls", "-l"], stdout=b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python3.2/subprocess.py", line 711, in __init__
    errread, errwrite) = self._get_handles(stdin, stdout, stderr)
  File "/usr/lib64/python3.2/subprocess.py", line 1112, in _get_handles
    c2pwrite = stdout.fileno()
io.UnsupportedOperation: fileno

I think stdout and stderr need to be actual file objects, not just
file-like objects.
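
A real temporary file, which does have a file descriptor, works where
BytesIO does not; for example (untested sketch):

import subprocess
import tempfile

with tempfile.TemporaryFile() as buf:
    subprocess.check_call(["ls", "-l"], stdout=buf)
    buf.seek(0)              # rewind to read back what ls wrote
    listing = buf.read()
print(len(listing))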
 

Kushal Kumaran

Ian Kelly said:
Or, you could just change p1's stderr to an io.BytesIO instance.
Then call p2.communicate *first*.

This doesn't seem to work:

>>> import io, subprocess
>>> b = io.BytesIO()
>>> p = subprocess.Popen(["ls", "-l"], stdout=b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python3.2/subprocess.py", line 711, in __init__
    errread, errwrite) = self._get_handles(stdin, stdout, stderr)
  File "/usr/lib64/python3.2/subprocess.py", line 1112, in _get_handles
    c2pwrite = stdout.fileno()
io.UnsupportedOperation: fileno

I think stdout and stderr need to be actual file objects, not just
file-like objects.

Well, well, I was wrong, clearly. I wonder if this is fixable.
 

andrea crotti

2012/11/14 Kushal Kumaran said:
Well, well, I was wrong, clearly. I wonder if this is fixable.

But wouldn't it be possible in theory to keep the pipe in memory?
That would be way faster, and since in theory I have enough RAM it
might be a great improvement..
 

andrea crotti

Ok this is all very nice, but:

[andrea@andreacrotti tar_baller]$ time python2 test_pipe.py > /dev/null

real 0m21.215s
user 0m0.750s
sys 0m1.703s

[andrea@andreacrotti tar_baller]$ time ls -lR /home/andrea | cat > /dev/null

real 0m0.986s
user 0m0.413s
sys 0m0.600s


where test_pipe.py is:
from subprocess import PIPE, Popen

# check if doing the pipe with subprocess and with the | is the same or not

pipe_file = open('pipefile', 'w')


p1 = Popen('ls -lR /home/andrea', shell=True, stdout=PIPE, stderr=PIPE)
p2 = Popen('cat', shell=True, stdin=p1.stdout, stdout=PIPE, stderr=PIPE)
p1.stdout.close()

print(p2.stdout.read())


So apparently it's way slower than doing it directly in the shell; is this normal?
 

Dave Angel

Ok this is all very nice, but:

[andrea@andreacrotti tar_baller]$ time python2 test_pipe.py > /dev/null

real 0m21.215s
user 0m0.750s
sys 0m1.703s

[andrea@andreacrotti tar_baller]$ time ls -lR /home/andrea | cat > /dev/null

real 0m0.986s
user 0m0.413s
sys 0m0.600s

<snip>


So apparently it's way slower than doing it directly in the shell; is this normal?

I'm not sure how this timing relates to the thread, but what it mainly
shows is that starting up the Python interpreter takes quite a while,
compared to not starting it up.
 

andrea crotti

2012/11/14 Dave Angel said:
Ok this is all very nice, but:

[andrea@andreacrotti tar_baller]$ time python2 test_pipe.py > /dev/null

real 0m21.215s
user 0m0.750s
sys 0m1.703s

[andrea@andreacrotti tar_baller]$ time ls -lR /home/andrea | cat > /dev/null

real 0m0.986s
user 0m0.413s
sys 0m0.600s

<snip>


So apparently it's way slower than doing it directly in the shell; is this normal?

I'm not sure how this timing relates to the thread, but what it mainly
shows is that starting up the Python interpreter takes quite a while,
compared to not starting it up.


Well, it's related because my program has to be as fast as possible, so I
thought that using Python pipes would be better, because I can easily get
the PID of the first process.

But if it's so slow then it's not worth it, and I don't think it's the
Python interpreter, because it's more or less consistently many times
slower even when I change the size of the input..
 
