--reduce-hf doesn't actually throw away the higher frequency DCT bins.
        What it does is simply increase their quantisation (which of course means
        more low-amplitude ones get 0-ed).

Is it easy to make the amount of the increase and the transition
adjustable?  To give yet another knob to tweak?

        The problem is you can't be that selective - the decoder assumes a fixed
        quantisation matrix for the DCT, which you can only specify once per
        picture coding sequence.   However, a well-known trick is to selectively
        zero isolated bins (they carry a high coding overhead) that you think
        won't reduce SNR too much.
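That trick might be sketched as follows (a pure-Python illustration, not mpeg2enc code; the zigzag start index and amplitude threshold are hypothetical knobs I made up):

```python
def zero_isolated_bins(block, start=32, max_amp=1):
    """Drop small, isolated coefficients late in the zigzag scan.

    block: 64 quantised DCT coefficients in zigzag order.
    A bin counts as "isolated" here if both its zigzag neighbours
    are already zero; only bins past `start` with magnitude at most
    `max_amp` are candidates.  (Both thresholds are illustrative,
    not mpeg2enc's actual values.)
    """
    out = list(block)
    for i in range(max(start, 1), 63):
        if (out[i] != 0 and abs(out[i]) <= max_amp
                and out[i - 1] == 0 and out[i + 1] == 0):
            out[i] = 0
    return out
```

The point is that a lone nonzero bin forces the run-length coder to spend bits on a long run/level pair, so dropping it buys more than its amplitude suggests.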

I need to learn more about the mpeg2 quantization process to follow
this, I'm afraid.

        You hit the nail on the head on this one.   I build scalers (and hence
        pre- and reconstruction filters) for my job.  Quite apart from my normal
        struggle just to find halfway enough time to properly develop mpeg2enc,
        this means that I can't easily do this for the project without risking
        nasty IP issues.

As it happens, I optimize FIR filters for a (small part of my) living.
No IP issues either.  1-D, 2-D, you name it.  Now mostly I deal with
fixed filters, rather than parametrized ones, but then it's easy to
compute a big table of filter coefficients and store them in the
program.
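As an example of such a precomputed table, here is a Hamming-windowed sinc lowpass generator (tap count and cutoff below are arbitrary illustration values, not tuned for video):

```python
import math

def sinc_lowpass(n_taps, cutoff):
    """Hamming-windowed sinc lowpass FIR.

    n_taps: odd number of taps.
    cutoff: passband edge as a fraction of the Nyquist frequency.
    Returns taps normalised so the DC gain is exactly 1.
    """
    assert n_taps % 2 == 1
    m = n_taps // 2
    taps = []
    for k in range(-m, m + 1):
        # Ideal lowpass impulse response (sinc), with the k == 0
        # singularity handled explicitly.
        s = cutoff if k == 0 else math.sin(math.pi * k * cutoff) / (math.pi * k)
        # Hamming window to tame the truncation ripple.
        w = 0.54 + 0.46 * math.cos(math.pi * k / m)
        taps.append(s * w)
    g = sum(taps)
    return [t / g for t in taps]
```

You would run this once per parameter setting and dump the resulting tables into the program as constants.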

I did some testing on a 720x480 screen grab, and it seemed like I
could lowpass off quite a bit of Y spectrum before it got too blurry.
But how to deal with chroma in interlaced frames - field-by-field?
And just where exactly in space and time are the subsampled chroma
pixels located?  I'm never sure if 120 chroma rows go with each field,
or if 240 go with one field and none with the other.

Probably if you reduce the horizontal bandwidth by 2 or more you should
just use 352x480 as your frame size.
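For what it's worth, halving the horizontal size is just lowpass-then-drop-alternate-samples; a scalar sketch (the [1 2 1]/4 kernel is only an illustrative choice, and edges are handled by clamping):

```python
def decimate2(row, taps=(0.25, 0.5, 0.25)):
    """Filter a scanline with a small symmetric lowpass, then keep
    every second sample.  Samples past the ends are clamped to the
    edge value."""
    half = len(taps) // 2
    out = []
    for i in range(0, len(row), 2):
        acc = 0.0
        for k, t in enumerate(taps):
            j = min(max(i + k - half, 0), len(row) - 1)
            acc += t * row[j]
        out.append(acc)
    return out
```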

        It should be possible to write a nice generic pair of forward and transposed 
        FIR (ideally: pixel difference based to eliminate DC ripple) routines for MMX 
        without too much hassle.
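In scalar form (no MMX), such a forward/transposed pair looks like this - same arithmetic, just gather versus scatter, which is what makes the transposed variant attractive for SIMD:

```python
def fir_direct(x, h):
    """Direct form: each output sample gathers from the inputs."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):
                y[n] += hk * x[n - k]
    return y

def fir_transposed(x, h):
    """Transposed form: each input sample scatters to the outputs."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += hk * xn
    return y
```

Both compute the same convolution; the transposed form reads each input once and updates a small sliding window of accumulators, which maps nicely onto MMX registers.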

Well, perhaps we can team up, as I know nothing about fast MMX
routines.  My filtering is generally done either in matlab
(interpreted) or in FPGAs (hardware).  

For efficiency, though, you might want to use FFT-based convolution
instead of the direct/transposed form.  Asymptotically the former is
O(n log n) rather than O(n*m) for an n-sample signal and m-tap filter
(or something like that; I'm being sloppy).  There is a mjpeg-based
scaler out there that does this in 1-D (I forget the name).
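For the record, the FFT route might look like this in pure Python - a textbook radix-2 Cooley-Tukey sketch, nowhere near MMX speed, just to show the shape of it:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; length of a must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_convolve(x, h):
    """Linear convolution via pointwise multiplication in the
    frequency domain, zero-padded to the next power of two."""
    n = 1
    while n < len(x) + len(h) - 1:
        n *= 2
    X = fft([complex(v) for v in x] + [0j] * (n - len(x)))
    H = fft([complex(v) for v in h] + [0j] * (n - len(h)))
    y = fft([a * b for a, b in zip(X, H)], invert=True)
    # Inverse transform needs the 1/n scale; imaginary parts are
    # rounding noise for real inputs.
    return [v.real / n for v in y[: len(x) + len(h) - 1]]
```

In practice you'd use overlap-add/overlap-save blocks rather than transforming whole scanlines, but the asymptotic win is the same.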

Dan



_______________________________________________
Mjpeg-users mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/mjpeg-users
