> You hit the nail on the head on this one. I build scalers
> (and hence pre and reconstruction filters) for my job. Quite
> apart from my normal struggle just to halfways find time to
> properly develop mpeg2enc this means that I can't easily do
> this for the project without risking nasty IP issues.

>As it happens, I optimize FIR filters for a (small part of my) living.
>No IP issues either. 1-D, 2-D, you name it. Now mostly I deal with
>fixed filters, rather than parametrized ones, but then it's easy to
>compute a big table of filter coefficients and store them in the
>program.
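The "big table of filter coefficients" idea above can be sketched roughly as follows. This is a hypothetical illustration, not code from y4mscaler or mpeg2enc; the kernel shown is the classic cubic B-spline (what y4mscaler calls "cubicB"), sampled once at a handful of subpixel phase offsets so that the scaling loop itself only does table lookups:

```c
#include <math.h>

#define PHASES 4   /* subpixel positions per output sample */
#define TAPS   4   /* a cubic kernel spans 4 input samples  */

/* Classic cubic B-spline, support [-2, 2], normalized so that
 * integer-spaced samples always sum to 1. */
static double bspline3(double x)
{
    x = fabs(x);
    if (x < 1.0) return (4.0 - 6.0*x*x + 3.0*x*x*x) / 6.0;
    if (x < 2.0) { double t = 2.0 - x; return t*t*t / 6.0; }
    return 0.0;
}

static double table[PHASES][TAPS];

/* Evaluate the kernel once per (phase, tap) pair and store it;
 * the scaler then never calls bspline3() in its inner loop. */
static void build_table(void)
{
    for (int p = 0; p < PHASES; p++) {
        double frac = (double)p / PHASES;   /* phase offset in [0,1) */
        for (int t = 0; t < TAPS; t++)
            table[p][t] = bspline3((double)(t - 1) - frac);
    }
}
```

A parametrized filter (e.g. a 'blur factor') just means regenerating this table when the parameter changes, which is cheap since it happens once per run, not once per pixel.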
For me, scalers are only an occasional diversion from my daily grind....

For what it's worth, a 'blur factor' option is on my list of things to
add to y4mscaler; I imagine it would simply scale the size of the
kernel to lower the spatial cut-off frequency.  Also for what it's
worth, the "cubicB" kernel in y4mscaler (corresponding to a 'classic
B-spline') has a more gradual roll-off than the default "cubic" kernel
--- I imagine it will give a bit more smoothing on sharp transitions.
(I should go make a test page of impulse/edge responses.)

...

>But how to deal with chroma in interlaced frames - field-by-field?
>And just where exactly in space and time are the subsampled chroma
>pixels located?  I'm never sure if 120 rows go with each frame, or if
>240 go with one frame and none with the other.

Concise answers to those questions:

        http://www.mir.com/DMG/chroma.html

If anyone feels like writing an 'optimized' scaling engine/backend
which follows the API set in y4mscaler's "scaler.H", it will drop
easily into y4mscaler, and y4mscaler's frontend will take care of such
annoying details.  (This may or may not be useful, depending on how
the engine is optimized.)

> It should be possible to write a nice generic pair of forward
> and transposed FIR (ideally: pixel difference based to
> eliminate DC ripple) routines for MMX without too much hassle.

(What does "pixel difference based to eliminate DC ripple" mean?)

>For efficiency, though, you might want to use fft-based convolution
>instead of direct/transposed form.  Asymptotically the former is
>O(n log n) rather than O(n^2) for the latter (or something like that;
>I'm being sloppy).  There is a mjpeg-based scaler out there that does
>this in 1-D (I forget the name).

What is "n" up there?  In 1-D, isn't it: O(n log n) to do an
FFT/inverse-FFT, where n is the number of samples, but O(nk) to do a
direct convolution, where k is the length of the kernel?  If that's
the case, FIR wins whenever k is small compared to log n, and for a
short scaling kernel it usually is.
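To make the O(nk) count concrete, here is a minimal direct-form 1-D FIR convolution --- a hypothetical sketch, not the proposed MMX routines: each of the n output samples costs k multiply-accumulates, and edge handling here simply clamps to the borders:

```c
#include <stddef.h>

/* Direct-form 1-D FIR: n outputs times k taps = O(n*k) MACs total.
 * 'kern' is centered on each output sample; samples outside the
 * input are clamped to the nearest edge sample. */
static void fir_1d(const double *in, size_t n,
                   const double *kern, size_t k,
                   double *out)
{
    for (size_t i = 0; i < n; i++) {
        double acc = 0.0;
        for (size_t j = 0; j < k; j++) {
            long s = (long)i + (long)j - (long)(k / 2);
            if (s < 0)         s = 0;
            if (s >= (long)n)  s = (long)n - 1;
            acc += in[s] * kern[j];
        }
        out[i] = acc;
    }
}
```

For n = 720 and, say, a 7-tap kernel, that is about 5000 MACs per line; n log2 n alone is already around 6800 butterfly stages for a single forward FFT, before the pointwise multiply and the inverse transform. That back-of-the-envelope count is why direct FIR tends to win for short kernels.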
In 2-D, direct convolution of an n x n image with a k x k kernel
becomes O((nk)^2), but the FFT doesn't square the same way: a 2-D FFT
of n^2 pixels is O(n^2 log n).  So for small k, direct convolution
still wins.  Anyhow, asymptotic performance isn't so important here,
since a current reasonable bound on n is 720.  Would the constant
factors further kill the FFT's performance, or help?

-matt m.

_______________________________________________
Mjpeg-users mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/mjpeg-users