Hi,

I tried to give some thought to how this interface would work for decoders that can't, or would prefer not to, support multiple decode buffers, and most of the scenarios I came up with seem to work out, so overall I'm not opposed to it. Even though I still think this can be done by each driver without it being in the interface, I agree it simplifies things and keeps most of the problems in XvMC rather than in every state tracker.
Anyway, here are some specific comments:

+ for (; num_macroblocks > 0; --num_macroblocks) {
+ mb->base.codec = PIPE_VIDEO_CODEC_MPEG12;
+ mb->macroblock_address = xvmc_mb->x + context->width_in_macroblocks * xvmc_mb->y;
+ mb->macroblock_type = xvmc_mb->macroblock_type;

Why macroblock_address? A decoder that prefers x/y will have to do a div/mod to get it back (see the first sketch at the end of this mail), so it's better to just pass x/y along and let each decoder do what it wants with it.

+ void (*begin_frame)(struct pipe_video_decoder *decoder);
+ void (*end_frame)(struct pipe_video_decoder *decoder);

The Nvidia HW decoder doesn't need these, but I think begin_frame/end_frame are not as useful as they could be, because you don't always know which video buffers they apply to. When you call this in RecursiveEndFrame() it can result in 0, 1, 2, or 3 end_frame() calls, but which video buffers do they apply to? The 0 and 3 cases are obvious, but 1 or 2 is ambiguous if the current video buffer has 1 or 2 reference frames. The second sketch at the end of this mail shows one way the association could be made explicit.

Also, based on the above, I think there is a bug in vl_mpeg12_end_frame(struct pipe_video_decoder *decoder). It will unmap and destroy the tex_transfers attached to the current buf, but what about the reference frames? It has the same problem: it doesn't know about the reference frames when end_frame is called, so it operates on the same buf multiple times in RecursiveEndFrame().

+ vldecoder->cur_buffer %= vldecoder->num_buffers;
+
+ vldecoder->decoder->set_decode_buffer(vldecoder->decoder, vldecoder->buffers[vldecoder->cur_buffer]);
+ vldecoder->decoder->set_decode_target(vldecoder->decoder, vlsurf->video_buffer);
+
+ return vlVdpDecoderRenderMpeg12(vldecoder->decoder, (VdpPictureInfoMPEG1Or2 *)picture_info,
+                                 bitstream_buffer_count, bitstream_buffers);

Not sure I understand why this is necessary. Why do you need to use a different decode buffer for each frame? In the XvMC case you want to keep the data for each target video buffer separate, because the XvMC API allows you to submit macroblocks to any surface at any time. Why is this necessary for VDPAU, which requires the user to submit the entire frame at once?

Those are the issues I noticed at first glance. Martin may have some comments specific to his work with VPE.

What do you think?
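
First sketch, for the macroblock_address point: this is roughly the recovery every decoder that thinks in x/y terms would have to do. The helper and its parameter names are hypothetical, not code from the patch:

/* Hypothetical helper, not from the patch: recover x/y from a linear
 * macroblock address. width_in_macroblocks is assumed to be the frame
 * width in macroblock units. */
static void
mb_address_to_xy(unsigned macroblock_address, unsigned width_in_macroblocks,
                 unsigned *x, unsigned *y)
{
   *x = macroblock_address % width_in_macroblocks;  /* column */
   *y = macroblock_address / width_in_macroblocks;  /* row */
}

Passing x/y straight through avoids that div/mod, and a decoder that does want a linear address can compute it with a single multiply-add, exactly like the XvMC code quoted above does.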
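
Second sketch, for the begin_frame/end_frame ambiguity: one way the association could be made explicit. This is only an illustration of what information is missing, not a concrete proposal; the extra target parameter is my invention:

struct pipe_video_decoder;
struct pipe_video_buffer;

/* Sketch only: if the frame-boundary hooks named the video buffer they
 * apply to, RecursiveEndFrame() could say exactly which of the current
 * buffer and its reference frames is being finished. */
void (*begin_frame)(struct pipe_video_decoder *decoder,
                    struct pipe_video_buffer *target);
void (*end_frame)(struct pipe_video_decoder *decoder,
                  struct pipe_video_buffer *target);

With something along those lines, vl_mpeg12_end_frame() would also know which buffer's tex_transfers to unmap instead of operating on the same buf more than once.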