Loading a jpeg really fast


E. Naubauer

Hello folks, I have a huge problem and I'm pretty desperate now

For my diploma, I have to write an app that basically reads JPEG images
from an Axis 2420 camera's MJPEG stream, decodes them, manipulates the
resulting RGB images and draws them back to the screen. I cannot use the
standard JAI classes since every image is wrapped in an HTTP message
which I have to remove.

I can read the JFIF data into a byte array. I also have access to an input
stream. I tried several options, but none of them gave me the results
I needed:

- Getting an Image object with

Toolkit.getDefaultToolkit().createImage(myjfifbytedata, 0, j);

and drawing it with g.drawImage in the paint routine is VERY FAST.
However, since Image is an abstract class, I have no access to the pixel
data, which I need to manipulate. So I tried option number 2:

- Getting a BufferedImage Object with

decoder = JPEGCodec.createJPEGDecoder(datainstream);
myBufferedImage = decoder.decodeAsBufferedImage();

This gives me direct access to the data, but I had to use a MediaTracker
so that the images are drawn correctly. However, this took very long
(about 200 ms for one image!), which is unacceptable since I still need
to manipulate the data.

I also tried

- Getting a RenderedImage with

ImageDecoder dec = ImageCodec.createImageDecoder(names[0], myinstream, null);
image = dec.decodeAsRenderedImage();

and drawing it with


Graphics2D g2 = (Graphics2D)g;
g2.drawRenderedImage(image,AffineTransform.getTranslateInstance(0, 0));

It was even worse.

The fourth option, ImageIO, didn't work at all. I fed it my
InputStream object via the read method; however, it didn't seem to read
the JFIF data from the stream correctly (it appeared to read more bytes
than it was supposed to).


I don't have any more ideas about how to speed this whole thing up. I have a
JNI DLL which does the really hard image processing work; it takes a
byte array representing the image, which is what I need the pixel data for.
I tried a couple of JIT compilers, also JET etc., and it didn't help.

My machine is a 1.25 GHz aluminium PowerBook with Mac OS X Tiger 10.4.4 and
Java Runtime 1.5.

I would be really grateful for any ideas. Thank you in advance.
 

Andrey Kuznetsov

- Getting a BufferedImage object with
decoder = JPEGCodec.createJPEGDecoder(datainstream);
myBufferedImage = decoder.decodeAsBufferedImage();

This gives me direct access to the data, but I had to use a MediaTracker
so that the images are drawn correctly. However, this took very long (about 200
ms for one image!), which is unacceptable since I still need to manipulate
the data.

Forget MediaTracker.
It is only needed with the old Java imaging model (ImageProducer/ImageConsumer).
 

Roedy Green

Toolkit.getDefaultToolkit().createImage(myjfifbytedata, 0, j);

and drawing it with g.drawImage in the paint routine is VERY FAST.
However, since Image is an abstract class, I have no access to the pixel
data, which I need to manipulate. So I tried option number 2:

Try this little experiment.

Object o = Toolkit.getDefaultToolkit().createImage(myjfifbytedata, 0, j);

System.out.println( o.getClass() );

Now you will know what sort of beast createImage ACTUALLY gives you.
You can then cast to that and get access to more methods.
 

Andrey Kuznetsov

I can read the JFIF data into a byte array. I also have access to an input
stream. I tried several options, but none of them gave me the results I
needed:

- Getting an Image object with

Toolkit.getDefaultToolkit().createImage(myjfifbytedata, 0, j);

and drawing it with g.drawImage in the paint routine is VERY FAST.
However, since Image is an abstract class, I have no access to the pixel
data, which I need to manipulate.

you may try the following (a rough sketch follows the steps):

step one: only once - create a BufferedImage of the desired size and get a Graphics
from it.
and then for each frame you need to
step two: create an Image with
Toolkit.getDefaultToolkit().createImage(myjfifbytedata, 0, j);
step three: draw the loaded image to the BufferedImage's Graphics.
step four: manipulate the pixels and so on...
step five: go to step two.
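
Roughly like this (an untested sketch; the frame byte array, its length and the
fixed WIDTH/HEIGHT values are just assumptions for the example):

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class FrameBuffer
{
    static final int WIDTH = 640, HEIGHT = 480;   // assumed camera resolution

    // step one: created only once and reused for every frame
    private final BufferedImage buffer =
            new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
    private final Graphics2D bufferGraphics = buffer.createGraphics();

    // steps two to four for a single frame
    public BufferedImage decodeFrame(byte[] frameBytes, int frameLength)
    {
        // step two: let the toolkit decode the JFIF bytes
        Image img = Toolkit.getDefaultToolkit().createImage(frameBytes, 0, frameLength);

        // step three: drawing into the BufferedImage gives you ordinary pixels;
        // since the toolkit decodes asynchronously you may still need to wait
        // (MediaTracker or an ImageObserver) before the pixels are really there
        bufferGraphics.drawImage(img, 0, 0, null);

        // step four: the caller can now manipulate buffer.getRaster() directly
        return buffer;
    }
}

The important part is that the BufferedImage and its Graphics are created once,
so each frame only pays for the decode and the blit.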

--
Andrey Kuznetsov
http://uio.imagero.com Unified I/O for Java
http://reader.imagero.com Java image reader
http://jgui.imagero.com Java GUI components and utilities
 

Chris Uppal

E. Naubauer said:
I don't have any more ideas about how to speed this whole thing up. I have a
JNI DLL which does the really hard image processing work; it takes a
byte array representing the image, which is what I need the pixel data for.

It might be worthwhile looking for a free, fast JPEG decoding library in C and (if
you find one) adding that to your JNI code, so that you bypass using the Java APIs
to decode the data in the first place.

-- chris
 

Andrey Kuznetsov

It might be worthwhile looking for a free, fast JPEG decoding library in C
and (if you find one) adding that to your JNI code, so that you bypass using
the Java APIs to decode the data in the first place.

Java already uses a free JPEG decoder written in C: libjpeg.
 

Chris Uppal

Andrey Kuznetsov wrote:

[me:]
It might be worthwhile looking for a free, fast JPEG decoding library in C
[...]
Java already uses a free JPEG decoder written in C: libjpeg.

That sounds very plausible to me.

The point is that if the OP can't find an /API/ which lets him use the built-in
stuff with adequate speed, then -- since he's already using JNI -- he may find
it worthwhile to use JNI for the decoding too.

In case it's not clear, the idea isn't "use JNI because it's quicker than Java"
(not true in this case if Java's using JNI anyway ;-), but that /IF/ the
existing implementation is hidden too deeply for him to be able to get at the
features he needs, he might consider a different way of satisfying the
requirement even if that does mean duplicating an existing feature.

-- chris
 

Andrey Kuznetsov

In case it's not clear, the idea isn't "use JNI because it's quicker than Java"
(not true in this case if Java's using JNI anyway ;-), but that /IF/ the
existing implementation is hidden too deeply for him to be able to get at the
features he needs, he might consider a different way of satisfying the
requirement even if that does mean duplicating an existing feature.

The ImageIO decoder is also extremely quick.
He must be doing something wrong.
I tested it with a pretty big image (4000x8000) and it was ready in 1-2
seconds.
 

bugbear

E. Naubauer said:
Hello folks, I have a huge problem and I'm pretty desperate now

For my diploma, I have to write an app that basically reads JPEG images
from an Axis 2420 camera's MJPEG stream, decodes them, manipulates the
resulting RGB images and draws them back to the screen. I cannot use the
standard JAI classes since every image is wrapped in an HTTP message
which I have to remove.

Hmm. Any way to create a simple, buffered, stream decorator
(AKA filter)? Something along the lines of the sketch below, perhaps.
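
Untested sketch of what I mean; the FrameLimitInputStream name and the idea that
you parse the Content-Length header yourself first are just assumptions:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FrameLimitInputStream extends FilterInputStream
{
    private int remaining;   // bytes of the current JPEG frame still unread

    public FrameLimitInputStream(InputStream in, int contentLength)
    {
        super(in);
        this.remaining = contentLength;
    }

    public int read() throws IOException
    {
        if (remaining <= 0)
            return -1;                    // pretend the frame is end-of-stream
        int b = super.read();
        if (b >= 0)
            remaining--;
        return b;
    }

    public int read(byte[] buf, int off, int len) throws IOException
    {
        if (remaining <= 0)
            return -1;
        int n = super.read(buf, off, Math.min(len, remaining));
        if (n > 0)
            remaining -= n;
        return n;
    }

    public void close()
    {
        // swallow close() so a decoder cannot close the underlying camera stream
    }
}

A decoder handed this wrapper can buffer and read ahead all it wants but can
never run past the frame, and after decoding you just skip whatever of the
frame is left unread on the real stream.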

BugBear
 

Oliver Wong

Andrey Kuznetsov said:
The ImageIO decoder is also extremely quick.
He must be doing something wrong.
I tested it with a pretty big image (4000x8000) and it was ready in 1-2
seconds.

According to the OP, 200ms is already "unacceptable" for his purposes,
so 1-2 seconds is 5 to 10 times worse than "unacceptable".

- Oliver
 

E. Naubauer

The reason why I didn't use ImageIO was that it doesn't seem to read
from my stream correctly. As I mentioned earlier, the JPEG frames arrive
as a stream of HTTP parts, each of which wraps one JPEG
image. Usually I remove the HTTP header from the stream, decode the
image from the stream, remove the trailer, etc. All the other decoders
took exactly the JFIF data from the stream and decoded it afterwards.
For some reason, however, ImageIO doesn't seem to stop exactly at the
end of the JFIF data, so I couldn't determine the stream's position and
what to do next. It also seems that ImageIO closes the stream after
reading from it. I tried to work around that issue by creating a
JPEGImageDecoder object manually, but that didn't work either. Does anyone
know how much data ImageIO pulls from the stream and whether it blocks, etc.?
 

Andrey Kuznetsov

ImageIO decoder is also extremely quick.
According to the OP, 200ms is already "unacceptable" for his purposes,
so 1-2 seconds is 5 to 10 times worse than "unacceptable".

His camera surely produces much smaller images. Decode time scales roughly
with pixel count, so 4000x4000 (half the pixels) should be about 2 times faster,
and 2000x2000 (an eighth of the pixels) about 8 times faster.
 

E. Naubauer

Roedy said:
Try this little experiment.

Object o = Toolkit.getDefaultToolkit().createImage(myjfifbytedata, 0, j);

System.out.println( o.getClass() );

Now you will know what sort of beast createImage ACTUALLY gives you.
You can then cast to that and get access to more methods.



Actually it seems to be an object of type
apple.awt.OSXImage
which shouldn't be surprising since I'm using Mac OS X 10.4 :)
It offers a method named
getBufferedImage
which indeed delivers what it promises. However, it still doesn't run
that fast, and I wonder whether the problem is something other than JPEG decoding.

I might post the critical source code parts, but they are pretty big.
All in all, my program has 3 threads:

- 2 threads for decoding the camera images
- 1 thread for drawing to the screen

This is the run() method of the camera threads:



public void run()
{
    // test if still connected
    if (!connected)
        return;

    try
    {
        // decode stream
        if (connected)
        {
            while (!Thread.currentThread().isInterrupted())
            {
                // BufferedImage. Can be accessed from
                // outside via the getLatestImage() method
                image = null;

                String header;
                int k = 0;

                do
                    header = datainstream.readLine();
                while (++k < 3);

                datainstream.readByte();
                int j = (new Integer(header.substring("Content-Length: ".length()))).intValue();
                datainstream.readByte();
                byte abyte1[] = new byte[j + 2];
                datainstream.readFully(abyte1, 0, j + 2);
                datainstream.readLine();
                // header decoding ends here

                // abyte1 contains the raw JFIF data now, which can be
                // decoded by the toolkit
                Image i = Toolkit.getDefaultToolkit().createImage(abyte1, 0, j + 2);
                //Thread.currentThread().sleep(1000);
                //System.out.println(o.toString());

                // I have to use MediaTracker, or the image drawn later is all white
                MediaTracker tracker = new MediaTracker(new Frame());
                tracker.addImage(i, 0);

                try {
                    tracker.waitForAll();
                } catch (InterruptedException iex) {
                    iex.printStackTrace();
                }

                tracker.removeImage(i);

                // we know that the image is of type apple.awt.OSXImage
                image = ((apple.awt.OSXImage) i).getBufferedImage();

                // invokes the stateChanged() methods on all objects that
                // are interested in the images
                fireChange();

                //decodeNext = false;
                //((apple.awt.OSXImage)o).preload(this);
            }
        }
    }
    catch (Exception ex)
    {
        ex.printStackTrace();
    }
    finally
    {
        try {
            datainstream.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}



So the thread calls the stateChanged() method, via fireChange(), on another
object that is interested in the images. It looks like this:



class MainProgram
    extends Frame
    implements Runnable, ChangeListener
{
    <.....>

    BufferedImage currentLeftImage, currentRightImage;
    Runnable drawThread;

    CameraCanvas leftCameraCanvas = null;
    CameraCanvas rightCameraCanvas = null;

    <.....>

    public void stateChanged(ChangeEvent e)
    {
        if (e.getSource().equals(leftCamera))
        {
            currentLeftImage = leftCamera.getLatestImage();
        }
        if (e.getSource().equals(rightCamera))
        {
            currentRightImage = rightCamera.getLatestImage();
        }

        // invoke the drawing thread
        EventQueue.invokeLater(drawThread);
    }
}

The drawing thread resides in the same class and looks like this:

drawThread = new Runnable()
{
    public void run()
    {
        if (!readyToDraw)
            return;

        repaint();

        try {
            Thread.sleep(1);
        } catch (InterruptedException iex) {
            iex.printStackTrace();
        }
    }
};

This class also represents the main window. It has two canvases:

CameraCanvas leftCameraCanvas = null;
CameraCanvas rightCameraCanvas = null;

This is the paint() method of MainProgram that gets invoked when the
drawThread calls repaint():


public void paint(Graphics g)
{
    leftCameraCanvas.SetImage(currentLeftImage);
    rightCameraCanvas.SetImage(currentRightImage);
    leftCameraCanvas.repaint();
    rightCameraCanvas.repaint();
}


And finally, the source of the CameraCanvas class:


public class CameraCanvas extends Canvas
{
    private BufferedImage image = null;

    public void paint(Graphics g)
    {
        if (image != null)
            g.drawImage(image, 0, 0, getWidth(), getHeight(), null);
    }

    public void SetImage(BufferedImage i)
    {
        image = i;
    }
}



Maybe you have an idea. I can send you the source code files directly if
you want them.
 

Richard Wheeldon

E. Naubauer said:
Hello folks, I have a huge problem and I'm pretty desperate now

For my diploma, I have to write an app that basically reads JPEG images
from an Axis 2420 camera's MJPEG stream, decodes them, manipulates the
resulting RGB images and draws them back to the screen.

Go to the web server on the Axis camera. Download the jar
file for the display applet. Decompile it. If you want a good
decompiler, try here: http://www.kpdus.com/jad.html

There's also a Java program on SourceForge that displays images from
Axis cameras.

Richard
 

E. Naubauer

I know about that implementation. The interesting point here is that
they directly take the abstract Image object produced by
Toolkit.getDefaultToolkit().createImage
and pass it to drawImage in the paint method of the displaying frame.
For some reason, this works perfectly even without a MediaTracker to
ensure that the images are fully loaded before drawing.
 

Roedy Green

It also seems that ImageIO closes the stream after
reading from it
You could suck the bytes out yourself and feed ImageIO the byte[] or
a ByteArrayInputStream wrapper around it.

I solved this particular problem like this in my JPG streaming
protocol:
-----------------------------------
byte[] rawImage = conx.receiveBytes( imageLength );

if ( ! isJpgValid( rawImage ) )
    {
    throw new IOException( "Invalid JPG image signature: "
            + Integer.toHexString( rawImage[0] ) + " "
            + Integer.toHexString( rawImage[1] ) );
    }
// start process of converting image from jpg to internal format
// old AWT style:
// Image image = toolkit.createImage( rawImage );

// new ImageIO style, convert raw bytes to BufferedImage
BufferedImage image = ImageIO.read( new ByteArrayInputStream( rawImage ) );
------------------------------
/**
 * Does this image represent a valid JPG file?
 * Does a rough check for the signature.
 *
 * @param image array of bytes representing the image
 * @return true if this image has a valid JPG signature.
 */
public static boolean isJpgValid ( byte image[] )
    {
    return ( ( image[0] & 0xff ) == 0xff ) && ( ( image[1] & 0xff ) == 0xd8 );
    }
 

E. Naubauer

Indeed, it is significantly faster than the Toolkit method.
I tried it with 2 cameras on, drawing 2 images to a canvas each at the
same time. With 1 camera there was a delay of 1-2 seconds between a
movement and it getting drawn to the screen; with both cameras the delay
was about 5 seconds.

Well, thanks for all your advice so far.

Do you have benchmarks for how your protocol implementation does with,
say, 2 streams at once?
 

Andrey Kuznetsov

I know about that implementation. The interesting point here is that they
directly take the abstract Image object produced by
Toolkit.getDefaultToolkit().createImage
and pass it to drawImage in the paint method of the displaying frame. For
some reason, this works perfectly even without a MediaTracker to ensure
that the images are fully loaded before drawing.

Probably because they also pass some Component as the ImageObserver to the
drawImage method, roughly as in the sketch below.
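
Untested sketch; the cameraImage field and setCameraImage() are made-up names:

import java.awt.Canvas;
import java.awt.Graphics;
import java.awt.Image;

public class ObserverCanvas extends Canvas
{
    private volatile Image cameraImage;   // set by the stream-reading thread

    public void setCameraImage(Image img)
    {
        cameraImage = img;
        repaint();
    }

    public void paint(Graphics g)
    {
        if (cameraImage != null)
        {
            // passing `this` as the ImageObserver makes AWT call imageUpdate()
            // on this canvas and repaint it as more of the image is decoded,
            // so no MediaTracker is needed
            g.drawImage(cameraImage, 0, 0, this);
        }
    }
}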
 

bugbear

E. Naubauer said:
The reason why I didn't use ImageIO was that it doesn't seem to read
from my stream correctly. As I mentioned earlier, the JPEG frames arrive
as a stream of HTTP parts, each of which wraps one JPEG
image. Usually I remove the HTTP header from the stream, decode the
image from the stream, remove the trailer, etc. All the other decoders
took exactly the JFIF data from the stream and decoded it afterwards.
For some reason, however, ImageIO doesn't seem to stop exactly at the
end of the JFIF data, so I couldn't determine the stream's position and
what to do next. It also seems that ImageIO closes the stream after
reading from it.

Sounds like simple buffering to me; if it
decorates its InputStream with a BufferedInputStream
(for performance), the BufferedInputStream knows
nothing about the boundaries in the InputStream.

Buffering systems read ahead - that's the whole point!

BugBear
 

Roedy Green

Do you have benchmarks for how your protocol implementation does with,
say, 2 streams at once?

We ran maybe 10 streams at once. These were security cameras, pretty
crappy image quality, some with motion detection.

We had a server running in C that talked to the camera hardware. I
wrote the Java client viewers. Fun stuff. I had "magnaviewers" as
well, where you could zoom the image size up and down smoothly.

We had two modes, live and historic. With historic you could look
back in time like a VCR, fast-forwarding or reversing with a speed
slider.
 
