reading from pipe, writing to a file

  • Thread starter enjoylife_95135

enjoylife_95135

Hi,
I am reading a giant chunk of data from a MySQL database, which I'd
like to write into a file so I can process it later.

I'd like to use File::Copy, but it doesn't work. However, I can use
good old print to write to the file though.

Can someone help?

Thanks,
EL
#!/usr/bin/perl
use File::Copy;

$file = "/tmp/file.$$";

open(FILE, "+>$file") || die("can't open file: $!");
open(PIPE, "/usr/bin/mysql -e \"use database; select * from filedb where attrib > 1146567;\" |") || die("can't open pipe: $!");

# This results in a zero byte file being created.
copy(PIPE, FILE);

# But this works well when uncommented.
# foreach my $line (<PIPE>) {
#     print FILE "$line";
# }
close(FILE);
close(PIPE);
exit;
 

John W. Krahn

I am reading a giant chunk of data from a MySQL database, which I'd
like to write into a file so I can process it later.

I'd like to use File::Copy, but it doesn't work. However, I can use
good old print to write to the file though.

Can someone help?

#!/usr/bin/perl
use File::Copy;

$file="/tmp/file.$$";

open(FILE, "+>$file") || die("can't open file: $!");
open(PIPE, "/usr/bin/mysql -e \"use database; select * from filedb where attrib > 1146567;\" |") || die("can't open pipe: $!");

# This results in a zero byte file being created.

That is because open(FILE, "+>$file") creates a zero byte file.

copy(PIPE, FILE);

perldoc File::Copy
[snip]
DESCRIPTION
The File::Copy module provides two basic functions, "copy" and "move",
which are useful for getting the contents of a file from one place to
another.

· The "copy" function takes two parameters: a file to copy from and a
file to copy to. Either argument may be a string, a FileHandle
reference or a FileHandle glob. Obviously, if the first argument is
a filehandle of some sort, it will be read from, and if it is a
file name it will be opened for reading. Likewise, the second
argument will be written to (and created if need be). Trying to
copy a file on top of itself is a fatal error.

Note that passing in files as handles instead of names may lead to
loss of information on some operating systems; it is recommended
that you use file names whenever possible. Files are opened in
binary mode where applicable. To get a consistent behaviour when
copying from a filehandle to a file, use "binmode" on the
filehandle.


So you need to use either a FileHandle reference:

copy( \*PIPE, \*FILE ) or die "Cannot copy '$file' $!";

Or a FileHandle glob:

copy( *PIPE, *FILE ) or die "Cannot copy '$file' $!";
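For what it's worth, lexical filehandles sidestep the glob question entirely, since they are already references. A minimal sketch of the same copy, substituting a trivial echo command for the mysql pipe (the command and filename are stand-ins, not the original setup):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;

# Stand-in for the mysql pipe: any command that writes to stdout
# behaves the same way. The filename is hypothetical.
my $file = "/tmp/copy_demo.$$";

open( my $pipe, '-|', 'echo', 'row1' ) or die "can't open pipe: $!";
open( my $out,  '>',  $file )          or die "can't open '$file': $!";

binmode $pipe;    # consistent behaviour, per the File::Copy docs

copy( $pipe, $out ) or die "Cannot copy to '$file': $!";

close $out;
close $pipe;
```

A lexical handle like $pipe can be passed to copy() as-is; the \*PIPE and *PIPE forms are only needed for bareword handles like the ones above.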



John
 

Ted Zlatanov

I am reading a giant chunk of data from a MySQL database, which I'd
like to write into a file so I can process it later.

I'd like to use File::Copy, but it doesn't work. However, I can use
good old print to write to the file though.

# But this works well when uncommented.
# foreach my $line (<PIPE>) {
# print FILE "$line";
# }
close (FILE);
close (PIPE);
exit;

I'm not sure why you don't want to use the simple commented loop.
It's fast and easy to understand, and it takes no extra memory...
Without $line it's even simpler and faster:

foreach (<PIPE>) {
    print FILE $_;
}

If you must use modules, look at the IO modules (IO::File, IO::Handle,
etc.) for an easy OO solution. If the data is binary, you may want to
use read/write and look at "perldoc -q binary" instead of the
line-oriented solution.
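A rough sketch of the IO::File version, again with a trivial echo standing in for the real mysql pipe (command and filename are placeholders):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::File;

my $file = "/tmp/io_demo.$$";    # hypothetical path

# IO::File->new accepts the same "command |" form as open.
my $in  = IO::File->new('echo hello |') or die "can't open pipe: $!";
my $out = IO::File->new( $file, '>' )   or die "can't open '$file': $!";

# Copy line by line through the object interface.
while ( defined( my $line = $in->getline ) ) {
    $out->print($line);
}

$in->close;
$out->close;
```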

Ted
 

enjoylife_95135

File::Copy took about 30% of the time versus looping to copy the file.
Awesome.

I'll check out the other modules Ted Z suggested as well.

Thanks to everyone for the help!
 

Uri Guttman

e9> File::Copy took about 30% of the time versus looping to copy the file.
e9> Awesome.

try File::Slurp too. read_file() can take a handle, so you could do this:

write_file( 'filename', read_file( $pipe ) ) ;

it could be faster than File::Copy as it is optimized for speed.
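Put together, uri's suggestion might look like this sketch, assuming File::Slurp is installed from CPAN (it is not a core module), and with an echo command standing in for the real pipe. Note that read_file() pulls the whole stream into memory before write_file() writes it out:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Slurp qw(read_file write_file);

# Hypothetical pipe; substitute the real mysql command.
open( my $pipe, '-|', 'echo', 'data' ) or die "can't open pipe: $!";

# read_file() slurps the entire stream into memory, then
# write_file() writes it to disk in one call.
write_file( "/tmp/slurp_demo.$$", read_file($pipe) );

close $pipe;
```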

uri
 

xhoster

Ted Zlatanov said:
I'm not sure why you don't want to use the simple commented loop.
It's fast and easy to understand, and it takes no extra memory...
Without $line it's even simpler and faster:

foreach (<PIPE>) {
    print FILE $_;
}

Actually, it takes a lot of extra memory. The entire contents of PIPE
are read into memory.

As he says the data set is huge, I'd guess this is undesirable.

print FILE $_ while (<PIPE>);
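Spelled out with lexical handles, the streaming version might look like this sketch (the echo command and filename are stand-ins for the real pipe and output path):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file = "/tmp/stream_demo.$$";    # hypothetical path

open( my $pipe, '-|', 'echo', 'line1' ) or die "can't open pipe: $!";
open( my $out,  '>',  $file )           or die "can't open '$file': $!";

# Only one line is held in memory at a time, no matter how much
# output the pipe produces.
print {$out} $_ while <$pipe>;

close $out;
close $pipe;
```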

Xho
 
T

Ted Zlatanov

Actually, it takes a lot of extra memory. The entire contents of PIPE
are read into memory.

As he says the data set is huge, I'd guess this is undesirable.

print FILE $_ while (<PIPE>);

You're right, I was thinking of a while() but wrote the foreach() loop
:)

Ted
 
