On 26.06.2013 10:56, Ilia Mirkin wrote:
On Wed, Jun 26, 2013 at 4:33 AM, Christian König
<deathsim...@vodafone.de> wrote:
On 26.06.2013 05:29, Ilia Mirkin wrote:

On Mon, Jun 24, 2013 at 2:13 PM, Christian König
<deathsim...@vodafone.de> wrote:
On 24.06.2013 18:39, Ilia Mirkin wrote:

On Mon, Jun 24, 2013 at 4:48 AM, Christian König
<deathsim...@vodafone.de> wrote:
On 23.06.2013 18:59, Ilia Mirkin wrote:

Signed-off-by: Ilia Mirkin <imir...@alum.mit.edu>
---

These changes make MPEG2 I-frames generate the correct macroblock data
(as compared to mplayer via xvmc). Other MPEG2 frames are still
misparsed, and MPEG1 I-frames have some errors (but largely match up).

NAK, zscan and mismatch correction are already handled in vl/vl_zscan.c.

Please use/fix that one instead of adding another implementation.
Yes, I noticed these after Andy pointed out that my patch broke things
for him. Here's my situation, perhaps you can advise on how to
proceed:

NVIDIA VP2 hardware (NV84-NV96, NVA0) doesn't do bitstream parsing,
but it can take the macroblocks and render them. When I use my
implementation with xvmc, everything works fine. If I try to use vdpau,
with vl_mpeg12_bitstream parsing the bitstream, the data comes out all
wrong. It appears that decode_macroblock is called with data
before inverse z-scan and quantization, while mplayer pushes data to
xvmc after those steps. So should I basically have a bit of logic in
my decode_macroblock impl that says "if using mpeg12_bitstream then do
some more work on this data"? Or what data should decode_macroblock
expect to receive?

Yes, exactly: for the bitstream case decode_macroblock gets the blocks in
the original zscan order, without mismatch correction or inverse
quantization applied.

You can either do the missing steps on the GPU with shaders or on the CPU
while uploading the data, and use the entrypoint member on the decoder to
distinguish between the different use cases.
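
Untested sketch of how that check could look in your decode_macroblock
(PIPE_VIDEO_ENTRYPOINT_BITSTREAM, PIPE_VIDEO_ENTRYPOINT_MC and the
entrypoint member on pipe_video_decoder are the real gallium interface;
the function and helper names below are just placeholders):

static void
nv84_decoder_decode_macroblock(struct pipe_video_decoder *decoder,
                               struct pipe_video_buffer *target,
                               struct pipe_mpeg12_macroblock *mb,
                               unsigned num_macroblocks)
{
   if (decoder->entrypoint == PIPE_VIDEO_ENTRYPOINT_BITSTREAM) {
      /* Blocks come straight from vl_mpeg12_bitstream: still in
       * zig-zag scan order, not dequantized, no mismatch control.
       * Do those steps here (or in a shader) before the upload. */
      inverse_scan_and_quant(mb, num_macroblocks);
   }
   /* On the PIPE_VIDEO_ENTRYPOINT_MC (XvMC) path the application
    * already did zscan/quant, so the data can go out as-is. */
   upload_macroblocks(decoder, target, mb, num_macroblocks);
}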
That sounds reasonable. But I'm looking at the MPEG-2 spec, and it
goes something like (7.4.5):

if ( macroblock_intra ) {
    F''[v][u] = ( QF[v][u] * W[w][v][u] * quantiser_scale * 2 ) / 32;
} else {
    F''[v][u] = ( ( ( QF[v][u] * 2 ) + Sign(QF[v][u]) ) * W[w][v][u] * quantiser_scale ) / 32;
}

Only the multiplication by W[w][v][u] (and, IIRC, the division by 32)
isn't done in decode_dct, since everything else depends on the bitstream
context.
I must be totally blind. Can you point out exactly where

( QF[v][u] * 2 ) + Sign(QF[v][u])

is being done for the non-intra case in decode_dct? I'm only seeing
QF[v][u] * quantiser_scale, never the + Sign() part.

Not sure I've ever implemented that :) It's indeed possible that's not handled anywhere.
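
If it needs adding, the formula from 7.4.5 transcribes to C roughly like
this (untested, names are mine; saturation from 7.4.3 and mismatch
control from 7.4.4 still have to come afterwards):

/* Inverse quantisation per MPEG-2 7.4.5.  C's integer division
 * truncates toward zero, matching the spec's "/" operator, and
 * (qf > 0) - (qf < 0) is Sign(qf) with Sign(0) == 0. */
static int
inverse_quantise(int qf, int w, int quantiser_scale, int intra)
{
   if (intra)
      return (qf * w * quantiser_scale * 2) / 32;
   return ((2 * qf + (qf > 0) - (qf < 0)) * w * quantiser_scale) / 32;
}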

Christian.


_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
