For an assignment I'm required to find out how to remove camera shake from
JPEG images using Perl.
I'm stumped! I'm not after a solution, but would really appreciate direction
to sources of information that may help me to figure out how to accomplish
this task.
Regards,
Murray R. Van Luyn.
Just a followup: an MPEG is composed of hundreds of thousands of frames,
depending on its length, as I'm sure you know. You wouldn't find those frames
just lying around on a hard disk in an uncompressed state.
You can't just "edit" bitmap images and remake the MPEG video. It's a process that has
to be done on the fly: each frame is decompressed, then altered, then recompressed
into a new video. In that process it loses bit information and degrades in quality.
As each frame is decompressed in the stream, it can be altered, then sent to a
compressor like CCE (CinemaCraft Encoder) or a couple of others.
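That decode -> filter -> re-encode loop can be sketched in miniature. This is a toy, not a real codec: the "compression" below is just coarse quantization standing in for lossy MPEG encoding, and `brighten` stands in for any per-frame filter, purely to show how bit information gets lost each generation.

```python
# Toy sketch of the decompress -> alter -> recompress loop. The "codec" here
# is coarse quantization, a stand-in for real lossy MPEG compression; the
# function names are illustrative, not any real API.

def encode(frame, step=16):
    # Lossy "compression": quantize each pixel to a multiple of `step`.
    return [round(p / step) for p in frame]

def decode(code, step=16):
    return [q * step for q in code]

def brighten(frame, amount=10):
    # The "alter" stage: any per-frame filter, e.g. a brightness bump.
    return [min(255, p + amount) for p in frame]

original = [7, 120, 121, 200, 255]
# One generation: decompress, alter, recompress, decompress again.
frame = decode(encode(original))
frame = brighten(frame)
regenerated = decode(encode(frame))
print(original)
print(regenerated)  # quantization has discarded bit information
```

Each pass through the quantizer throws detail away, which is why repeated edit/re-encode cycles degrade quality.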
CCE costs around $2,000 for a license. Or you could purchase a copy of the
MPEG-2 specification for around $3,000 and write your own encoder in Perl.
There aren't really 30 full frames per second stored in the MPEG. An algorithm
stores only the binary difference between frames (relative back to the I-frame). When the
accumulated difference approaches the size of a full frame, a new reference frame, called
an I-frame, is inserted, and the process repeats. This is the general idea; sub-frame
types (P- and B-frames) and other optimizations are factored in as well.
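The difference-storage idea can be shown with a toy sketch: keep one full frame, then store only per-pixel deltas for the frames that follow. Real MPEG is far more involved (motion compensation, lossy quantization of the deltas, P/B-frames), so this round trip is exact where the real thing is not.

```python
# Toy difference-based storage: one full "I-frame", then per-pixel deltas.

def encode_stream(frames):
    i_frame = frames[0]                       # full reference frame
    deltas = [
        [b - a for a, b in zip(prev, cur)]    # difference to previous frame
        for prev, cur in zip(frames, frames[1:])
    ]
    return i_frame, deltas

def decode_stream(i_frame, deltas):
    frames = [list(i_frame)]
    for d in deltas:
        # Rebuild each frame by applying its delta to the previous one.
        frames.append([p + dp for p, dp in zip(frames[-1], d)])
    return frames

frames = [[10, 10, 10], [10, 12, 10], [11, 12, 10]]
i_frame, deltas = encode_stream(frames)
assert decode_stream(i_frame, deltas) == frames
```

When most pixels don't change between frames, the deltas are mostly zeros and compress far better than full frames, which is the whole point.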
When a frame (image) is to be reconstituted (decompressed), it's assembled from several
binary difference regions within its I-frame group. The difference data itself goes through
a lossy encoding, so decoding produces an image suitable for viewing, but encoding then
decoding will not reproduce the exact image that was originally encoded.
There are many utilities on Doom9.org that can help you filter streaming frames (images).
The folks who write them are real genius types. All of these are proofs of concept: by that
I mean each is a pure software solution, but one that could easily be ported to real-time
video-specific hardware/firmware, and I'm sure they are. Doom9 will give you the poor man's
route into big-time video production. Even though it's shareware, it's fairly complex and
arcane for beginners.
Fully uncompressed frames from a 2-hour, medium-to-high-bitrate MPEG-2 movie will
generate about 300-500 gigabytes of data on a hard drive. For that reason, image processing
is usually done at the frame level, with each frame fed to the encoder as it's processed.
I'm not saying those 500 GB of images can't be dumped to disk and then fed to the encoder;
it's just not usually done that way.
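A quick back-of-envelope check of that figure, assuming DVD-class frames (720x480 pixels, 3 bytes per pixel, ~30 fps, 2 hours); higher resolutions or bit depths push the total toward the upper end of the range:

```python
# Rough storage estimate for fully uncompressed frames of a 2-hour movie,
# assuming 720x480 pixels, 24-bit color, ~30 fps.
bytes_per_frame = 720 * 480 * 3          # ~1 MB per uncompressed frame
frames = 2 * 60 * 60 * 30                # 216,000 frames in 2 hours
total_gb = bytes_per_frame * frames / 1e9
print(round(total_gb))                   # roughly 224 GB at DVD resolution
```

That lands near the low end of the quoted range; either way, it's far too much to want sitting on a hard drive, which is why processing is done frame by frame.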
The method usually starts with a decompressor program (with an available codec) as a first
step. Dvd2Svcd is a good choice for this, but not necessary.
Next is the heart of video processing: a program like AviSynth, which is a frameserver but,
more importantly, a program that knows how to use plug-in filters that modify the frames
(individually or in groups). This is where a jitter-removal algorithm would be applied.
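To give a feel for one jitter-removal idea: estimate the global shift of a frame against a reference by testing small offsets, then shift the frame back. Real stabilizers work in 2-D with sub-pixel motion estimation; this 1-D toy on a single row of pixels only illustrates the principle.

```python
# Toy shift estimation: find the offset s that minimizes the squared error
# between frame[i] and reference[i - s], i.e. how far the frame has shifted.

def estimate_shift(reference, frame, max_shift=3):
    best, best_err = 0, float("inf")
    n = len(frame)
    for s in range(-max_shift, max_shift + 1):
        # Compare only the overlapping region for this candidate offset.
        err = sum(
            (frame[i] - reference[i - s]) ** 2
            for i in range(n)
            if 0 <= i - s < n
        )
        if err < best_err:
            best, best_err = s, err
    return best

reference = [0, 0, 5, 9, 5, 0, 0, 0]
shaken    = [0, 0, 0, 5, 9, 5, 0, 0]      # same content shifted right by 1
print(estimate_shift(reference, shaken))  # 1
```

Once the shift is known, the stabilizer translates the frame back by that amount before it goes on to the encoder.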
Filtering here is very complicated. Anything that can be done with a paint program can be
done here: stretching, gamma, sharpening, contrast, brightness, dubbing, overlays, to name
a few. There are hundreds of filters available. These are incredibly complex algorithms,
written entirely in assembly, and this is where the heart of video editing/processing occurs.
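Conceptually, though, a per-frame filter is just arithmetic on pixel values. A minimal brightness/contrast sketch on a grayscale row (the function and parameters are illustrative, not any real plug-in's API):

```python
# Minimal brightness/contrast filter on a row of grayscale pixels.

def brightness_contrast(frame, brightness=0, contrast=1.0):
    # contrast scales around mid-gray (128), brightness adds a flat offset;
    # results are clamped to the 0-255 range.
    return [
        max(0, min(255, round((p - 128) * contrast + 128 + brightness)))
        for p in frame
    ]

row = [0, 64, 128, 192, 255]
print(brightness_contrast(row, brightness=10))   # flat +10, clamped at 255
print(brightness_contrast(row, contrast=1.5))    # stretched around mid-gray
```

The production filters do this kind of math across millions of pixels per second, which is why they end up hand-tuned in assembly.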
After each frame (or series of frames) has run through AviSynth, it is sent on to an
encoder such as CCE or TmpgEnc, where it is packed into MPEG-2 format (I-frames and all).
AviSynth is a "master" program that coordinates this process. It is script driven, as are
most of these programs.
Don't worry about jitter removal. The assembly to do the simplest of filters on a single frame
is something you could not understand.
Doom9.org is your best bet, unless you think anybody here, or Perl, could help you.
Hey, anything's possible, I guess!
good luck