Re: Copying very big PDF files


Richard Sandoz

Is it just me or is "NIO's capabilities" just a mythical perception?

(1) I increased the buffer size from a measly 1024 to 32768
(2) I added a third case (good old stream copy)
(3) I threw it in a loop for testing repetition (showing average times
of all three test cases)

I tried this with a 2.2K class file and found the "efficient"
transferTo method to yield the slowest result:
transferTo: 7.5
channel buffer copy: 6.5
stream buffer copy: 6.7

The results still varied too much to be conclusive.

I then ran a 4.3M PDF file and got results supporting my case
further (transferTo was slower by a factor of roughly 5!):
Time(81) : 713.58 : 122.667 : 165.938
transferTo: 713.58
channel buffer copy: 122.667
stream buffer copy: 165.938

I then ran a 49M zip file and came up with different results
(transferTo was still the slowest, but not by as large a multiple):
Time(9) : 14,651 : 11,162.556 : 12,906.556
transferTo: 14,651
channel buffer copy: 11,162.556
stream buffer copy: 12,906.556

<CODE>
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.File;
import java.text.NumberFormat;

class CopyFile {
    final static boolean DIRECT = true;
    final static int BUFFER_SIZE = 32768;

    public static void main(String[] args) throws IOException {
        if (args.length != 1) {
            System.out.println("Usage: java CopyFile src");
            System.exit(0);
        }
        NumberFormat pattern = NumberFormat.getInstance();
        pattern.setMaximumFractionDigits(3);

        double[] time = new double[3];
        for (int i = 0; i < 10000; i++) {
            for (int j = 0; j < 3; j++) {
                long start = System.currentTimeMillis();
                String dest = args[0] + "." + i + "." + j;

                switch (j) {
                case 0: {   // transferTo
                    FileChannel in = new FileInputStream(args[0]).getChannel();
                    FileChannel out = new FileOutputStream(dest).getChannel();
                    in.transferTo(0, in.size(), out);
                    out.close();
                    in.close();
                    break; }
                case 1: {   // channel buffer copy
                    FileChannel in = new FileInputStream(args[0]).getChannel();
                    FileChannel out = new FileOutputStream(dest).getChannel();
                    ByteBuffer buffer = DIRECT
                        ? ByteBuffer.allocateDirect(BUFFER_SIZE)
                        : ByteBuffer.allocate(BUFFER_SIZE);
                    while (in.read(buffer) > -1) {
                        buffer.flip();      // switch buffer from filling to draining
                        out.write(buffer);
                        buffer.clear();     // ready for the next read
                    }
                    out.close();
                    in.close();
                    break; }
                case 2: {   // good old stream copy
                    byte[] buf = new byte[BUFFER_SIZE];
                    int len;
                    FileInputStream in = new FileInputStream(args[0]);
                    FileOutputStream out = new FileOutputStream(dest);
                    while ((len = in.read(buf)) > 0)
                        out.write(buf, 0, len);
                    out.close();
                    in.close();
                    break; }
                }
                new File(dest).delete();

                long span = System.currentTimeMillis() - start;
                time[j] += span;
            }
            System.out.println("Time(" + (i + 1) + ") : "
                + pattern.format(time[0] / (i + 1)) + " : "
                + pattern.format(time[1] / (i + 1)) + " : "
                + pattern.format(time[2] / (i + 1)));
        }
    }
}
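One caveat worth noting about the test above: transferTo is documented to possibly transfer fewer bytes than requested, so a single call is not guaranteed to copy the whole file. A minimal sketch of a looping version (the class name TransferToCopy is my own, not from the thread):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

class TransferToCopy {
    /** Copies src to dst, looping because transferTo may move
        fewer bytes than requested in a single call. */
    static void copy(String src, String dst) throws IOException {
        try (FileChannel in = new FileInputStream(src).getChannel();
             FileChannel out = new FileOutputStream(dst).getChannel()) {
            long position = 0;
            long size = in.size();
            while (position < size) {
                // transferTo returns the number of bytes actually transferred
                position += in.transferTo(position, size - position, out);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Path dst = Files.createTempFile("dst", ".bin");
        byte[] data = {1, 2, 3, 4, 5};
        Files.write(src, data);
        copy(src.toString(), dst.toString());
        System.out.println(Arrays.equals(data, Files.readAllBytes(dst)));
    }
}
```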
 

Roedy Green

Richard Sandoz wrote:
Is it just me or is "NIO's capabilities" just a mythical perception?
[rest of quoted post snipped]

Try the simple-minded FileTransfer class that just uses a single byte
buffer and reads in chunks as a byte stream. One thing you have to be
careful about is not to keep allocating new giant buffers. Reuse your
old one; otherwise you stress the GC.
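Roedy's buffer-reuse point can be sketched like this (an illustrative helper of my own, not the FileTransfer class he mentions): the byte array is allocated once and every copy reuses it, so a benchmark loop doesn't churn the garbage collector.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Stream copier that allocates its byte buffer once and reuses it
    across copies, so repeated copying doesn't stress the GC. */
class ReusableBufferCopier {
    private final byte[] buf;   // allocated once, reused for every copy

    ReusableBufferCopier(int bufferSize) {
        buf = new byte[bufferSize];
    }

    void copy(String src, String dst) throws IOException {
        try (FileInputStream in = new FileInputStream(src);
             FileOutputStream out = new FileOutputStream(dst)) {
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ReusableBufferCopier copier = new ReusableBufferCopier(32768);
        Path src = Files.createTempFile("src", ".bin");
        Files.write(src, "same buffer, many copies".getBytes());
        // The same copier (and thus the same buffer) serves several copies.
        for (int i = 0; i < 3; i++) {
            copier.copy(src.toString(), src.toString() + "." + i);
        }
        System.out.println("done");
    }
}
```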
 

rkm

Try changing the order of your testing to see if the first
method is doing all the work of getting the system to load
its buffers. If true, then the 2nd and 3rd methods are
"drafting" off the work of the first method.

The reason I say this is that I wrote an NIO program that loads a
600M file for fast searching, and even though the program exits
between each run, subsequent runs always outperform the first. My
explanation is that the system still has some portions of the file
mapped to memory buffers, and the 2nd and later runs benefit from
that. Your test could be demonstrating the same thing.
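One way to control for that warm-cache effect, beyond swapping the order by hand, is to rotate which case runs first on each iteration so no single method always pays the cold-cache cost. A small sketch (the RotatingOrder class is my own illustration, not code from the thread):

```java
import java.util.Arrays;

/** Rotates the order in which the three benchmark cases run, so each
    case takes its turn going first and paying any cold-cache cost. */
class RotatingOrder {
    static int[] orderFor(int iteration) {
        int[] order = new int[3];
        for (int k = 0; k < 3; k++) {
            order[k] = (iteration + k) % 3;   // 0 -> {0,1,2}, 1 -> {1,2,0}, 2 -> {2,0,1}
        }
        return order;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println("iteration " + i + ": " + Arrays.toString(orderFor(i)));
        }
    }
}
```

In the benchmark's inner loop, `switch (j)` would become `switch (order[j])` with `int[] order = RotatingOrder.orderFor(i)`, leaving the timing bookkeeping indexed by the actual case number.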

Rick

Richard said:
Is it just me or is "NIO's capabilities" just a mythical perception?
[benchmark results and code snipped; quoted in full earlier in the thread]
Have you tried using NIO's capabilities? Something like:

// Program: FastCopyFile.java
...
If this doesn't work, perhaps you need to assign a larger heap limit
to the JVM with the -Xmx option?

Craig