My memory-mapped file challenge for CD writing

iksrazal

Hi all,

My problem: I want to burn a 1.2 gig file to CD, which obviously only
holds 700 megs. Normally I would just use *nix split, but I have at
most 700 megs of free hard disk space, and split doubles the disk
space required. However, between swap and RAM I have about 1 gig, and
I'd probably have 700 megs of swap/memory free in runlevel 3. So I'm
thinking, why not just read the file via:

File file = new File("filename");

// Create a read-only memory-mapped file; map() takes long arguments,
// so no int cast on size() is needed (a cast would silently truncate
// files over 2 GB)
FileChannel roChannel = new RandomAccessFile(file, "r").getChannel();
ByteBuffer roBuf = roChannel.map(FileChannel.MapMode.READ_ONLY, 0,
        roChannel.size());

And redirect its contents to stdout?
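
Something like this full-file version is what I'm picturing - untested,
the filename is a placeholder, and I'm assuming Channels.newChannel(System.out)
behaves the way the javadocs suggest:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class MapToStdout {
    public static void main(String[] args) throws Exception {
        File file = new File("filename");  // placeholder name
        FileChannel roChannel = new RandomAccessFile(file, "r").getChannel();
        // 1.2 GB is still under the 2 GB-per-mapping limit, so the
        // whole file fits in a single map
        MappedByteBuffer roBuf =
                roChannel.map(FileChannel.MapMode.READ_ONLY, 0, roChannel.size());
        // Wrap System.out in a channel; its write() accepts the mapped buffer
        WritableByteChannel out = Channels.newChannel(System.out);
        while (roBuf.hasRemaining()) {
            out.write(roBuf);  // pages fault in lazily as they are written
        }
        roChannel.close();
    }
}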

But then I hit a few challenges. Please allow me to ask before trying,
as I could do some damage testing this blindly:

1) Would changing the first iteration to this work?
ByteBuffer roBuf = roChannel.map(FileChannel.MapMode.READ_ONLY, 0,
650000000);

2) Second iteration:

ByteBuffer roBuf = roChannel.map(FileChannel.MapMode.READ_ONLY,
650000000, (roChannel.size() - 650000000));

3) Then the one I'm not sure about: how would I get the ByteBuffer to
stdout, or pipe it to another program? I do the following to feed a big
file to a mysql shell (a sketch pulling all three questions together
follows below):

Runtime rt = Runtime.getRuntime();
Process fRuntimeProcess = rt.exec(cmd);
FileInputStream fis = new FileInputStream(_scriptpath);
FileChannel fChan = fis.getChannel();
fChan.transferTo(0, fChan.size(),
Channels.newChannel(fRuntimeProcess.getOutputStream()));

Yet that requires a FileChannel, not a ByteBuffer. Hmm.
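
My guess from reading the javadocs: map() takes an arbitrary (position,
size) pair, so (1) and (2) should both be legal, and Channels.newChannel()
around the process's OutputStream gives a WritableByteChannel whose write()
accepts any ByteBuffer, mapped or not - is that right? Here's the sketch I
promised above, pulling the three questions together (the chunk size and
the child command are placeholders, and a real run would also want to drain
the child's own output):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class ChunkedPipe {
    static final long CHUNK = 650000000L;  // split point - an assumption

    public static void main(String[] args) throws Exception {
        FileChannel roChannel = new RandomAccessFile("bigfile", "r").getChannel();
        long size = roChannel.size();
        for (long pos = 0; pos < size; pos += CHUNK) {
            long len = Math.min(CHUNK, size - pos);
            // Questions 1 and 2: map just this slice of the file
            MappedByteBuffer roBuf =
                    roChannel.map(FileChannel.MapMode.READ_ONLY, pos, len);
            // Question 3: placeholder command standing in for cdrecord
            Process p = Runtime.getRuntime().exec(
                    new String[] { "sh", "-c", "cat > /dev/null" });
            WritableByteChannel sink = Channels.newChannel(p.getOutputStream());
            while (roBuf.hasRemaining()) {
                sink.write(roBuf);  // write() takes a ByteBuffer directly
            }
            sink.close();   // close stdin so the child sees EOF
            p.waitFor();    // e.g. time to swap CDs before the next chunk
        }
        roChannel.close();
    }
}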

It's kind of screwy, I know, but it'd be cool if it worked!

Any ideas?
iksrazal
 
Stefan Schulz

Hi all,

My problem: I want to burn a 1.2 gig file to CD, which obviously only
holds 700 megs. Normally I would just use *nix split, but I have at
most 700 megs of free hard disk space, and split doubles the disk
space required. However, between swap and RAM I have about 1 gig, and
I'd probably have 700 megs of swap/memory free in runlevel 3.

[... Idea ...]

Java is probably not the language you want to use for this kind of task.
However, look into tmpfs. This will allow you to abuse your gig of memory
as a temporary file system.
 
iksrazal

[... Idea ...]

Java is probably not the language you want to use for this kind of task.
However, look into tmpfs. This will allow you to abuse your gig of memory
as a temporary file system.

Indeed, I have:

/root> df -m
Filesystem     1M-blocks  Used  Available  Use%  Mounted on
/dev/hda3           8764  8116        203   98%  /
tmpfs                252     1        252    1%  /dev/shm

How do I use it non-programmatically? I believe it will expand into
swap after the limit - in this case 252 megs - is reached.
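
If the answer is just growing the existing mount (I believe tmpfs can be
resized in place with something like: mount -o remount,size=700m /dev/shm),
then the split itself could even stay in Java. A sketch, with the tmpfs
path and chunk size as assumptions:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

public class SplitToTmpfs {
    public static void main(String[] args) throws Exception {
        int chunk = Integer.parseInt(args[0]);  // 0 = first half, 1 = second
        long chunkSize = 650L * 1024 * 1024;    // split size - an assumption
        FileChannel in = new FileInputStream("bigfile").getChannel();
        FileChannel out = new FileOutputStream(
                "/dev/shm/bigfile.part" + chunk).getChannel();  // tmpfs path assumed
        long pos = chunk * chunkSize;
        long len = Math.min(chunkSize, in.size() - pos);
        long done = 0;
        while (done < len) {
            // transferTo copies in-kernel; no 650 MB heap buffer needed
            done += in.transferTo(pos + done, len - done, out);
        }
        out.close();
        in.close();
    }
}

The idea being: run with 0, burn /dev/shm/bigfile.part0, delete it, then
run again with 1 - so only one chunk ever sits in tmpfs at a time.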

The other idea I have is using dd with count= and mkfifo, like:

mkfifo tmp
dd if=bigfile bs=1024K count=650 > tmp &
cdrecord -data tsize=650m tmp
# second disc: dd if=bigfile bs=1024K skip=650 > tmp &

I'd be curious to know if Java supports FIFO files - I'd like to try
it.
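
Something like this is what I'd try first - assuming the fifo tmp from
above exists, and that a FIFO really is just a path that java.io can open
(the open should block until cdrecord starts reading):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

public class FeedFifo {
    public static void main(String[] args) throws Exception {
        FileChannel in = new FileInputStream("bigfile").getChannel();
        // Opening the FIFO blocks until a reader attaches to "tmp"
        FileChannel out = new FileOutputStream("tmp").getChannel();
        long len = Math.min(650L * 1024 * 1024, in.size());  // first 650 MB
        long done = 0;
        while (done < len) {
            done += in.transferTo(done, len - done, out);
        }
        out.close();
        in.close();
    }
}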

iksrazal
 
