Roedy Green said:
Do the I/O in one fell swoop unbuffered. See
http://mindprod.com/fileio.html
I do this all the time and it is very fast.
I hope you are not reading it C-style char by char with an unbuffered
reader. THAT would be slow.
Use indexOf to find the \n and substring to extract the line.
Finally, compile with Jet. See
http://mindprod.com/jgloss/jet.html
The alternative for very large files is readline using a
BufferedReader with a whacking huge buffer. For details see
http://mindprod.com/fileio.html
See
http://mindprod.com/jgloss/buffer.html
for hints on selecting optimum buffer sizes.
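Not anyone's actual code from this thread, just a sketch of what I read the "one fell swoop" advice to mean: read the whole file with one big read, then slice lines out with indexOf/substring instead of a char-by-char loop. The class and method names are mine, and the EOF handling is an assumption.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Vector;

public class OneSwoop {
    // Read the whole file in one unbuffered swoop into a byte array
    // sized from the file length, looping only if read() comes up short.
    static String slurp(String fileName) throws IOException {
        FileInputStream in = new FileInputStream(fileName);
        try {
            byte[] buf = new byte[(int) new java.io.File(fileName).length()];
            int off = 0;
            while (off < buf.length) {
                int n = in.read(buf, off, buf.length - off);
                if (n == -1) break; // unexpected EOF (file shrank?)
                off += n;
            }
            return new String(buf, 0, off);
        } finally {
            in.close();
        }
    }

    // Extract lines with indexOf/substring rather than scanning char by char.
    static Vector splitLines(String text) {
        Vector lines = new Vector();
        int start = 0;
        int nl;
        while ((nl = text.indexOf('\n', start)) != -1) {
            lines.addElement(text.substring(start, nl));
            start = nl + 1;
        }
        if (start < text.length()) {
            lines.addElement(text.substring(start)); // last line, no trailing \n
        }
        return lines;
    }
}
```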
Thanks to you & Darryl for all your help.
In Java I try to avoid doing any loops of single byte/char operations.
Below is some code (not mine) that's similar to my current way of loading
data from my JAR.
InputStream.available() happens to return a number equal to the size of
the entire resource on the phones I'm currently working with, but I'm
worried that may not always be the case.
Is there a way of getting the overall resource size from an
InputStream or a JAR resource?
On the J2ME PC emulator I get back 0 from the InputStream.available()
method, so I also have to include code (not included below) to do
looped single-byte/char reads.
There seems to be a lot of J2ME sample code with single byte/char
operation loops when there should be faster ways.
I also wonder if it's faster to read in one big chunk and then loop to
do the parsing, than to loop with small reads and parse at the same time.
- Sam.
// Relies on available() returning the full resource size -- exactly the
// assumption I'm worried about; read() may also fill fewer bytes than asked.
private byte[] getPic() throws IOException
{
    InputStream iStream = getClass().getResourceAsStream("mypic.jpg");
    byte[] imageBytes = new byte[iStream.available()];
    iStream.read(imageBytes);
    iStream.close();
    return imageBytes;
}
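One way around the available() problem, sketched under my own assumptions (the 4 KB chunk size is just a guess; tune it per device): don't ask for the size at all, and instead loop chunked reads into a growable buffer until EOF. ByteArrayOutputStream is available in CLDC as well as J2SE, so this should work on the phones too.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Read an entire InputStream without trusting available():
    // keep reading fixed-size chunks until read() reports EOF (-1).
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096]; // chunk size is a tuning guess
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        in.close();
        return out.toByteArray();
    }
}
```

This still avoids single-byte loops (each read() call moves up to a whole chunk), and it never needs to know the resource size up front.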