Hi all,

Here is another iteration of the VAAPI encode patch series.

Changes:

* Support code moved to libavcodec.  Since vaapi.[ch] already exists there 
(the decoding support), this has become vaapi_support.[ch].

* Added -hwaccel_output_format option, allowing explicit selection of the 
format used on output from the hwaccel decoder.  For VAAPI the default is nv12 
(the normal format of Intel codec surfaces), but it needs to be set to yuv420p 
to run FATE.  With that, decoder reinit now does the right thing around format 
selection, so h264-reinit-large_420_8-to-small_420_8 passes.
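
For example, something like this should decode through the hwaccel and output 
yuv420p frames suitable for FATE-style comparison (in.mp4 is just a 
placeholder):

./ffmpeg -vaapi :0 -hwaccel vaapi -hwaccel_output_format yuv420p -i in.mp4 -f null -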

* Added -hwaccel_lax_profile_check option, which allows the decoder to be used 
even when its profile does not exactly match the stream.  Some FATE streams 
require this option to pass on Intel (decoding H.264 extended streams with an 
H.264 high decoder).
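
For example (the stream name here is hypothetical):

./ffmpeg -vaapi :0 -hwaccel vaapi -hwaccel_lax_profile_check -i h264-extended.mp4 -f null -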

* The filter now tries to support all input and output formats that the 
hardware can do.  It has also been renamed to vf_vaapi_scale, reflecting the 
fact that it is really implementing vf_scale using VAAPI.  It can also now be 
used independently of any other VAAPI components (though this is of no value 
except for testing).
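
For example, a software-in, software-out invocation of just the filter might 
look like this (the size option syntax follows the transcode command below):

./ffmpeg -vaapi :0 -i in.mp4 -vf vaapi_scale=size=640x360 out.mp4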

* General VAAPI image format handling code added (map between AVPixelFormat and 
libva fourccs and format descriptors).
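
As a rough sketch of the shape of that mapping (the type and entry names here 
are illustrative, not the actual ones in the patch):

#include <va/va.h>

#include "libavutil/macros.h"
#include "libavutil/pixfmt.h"

typedef struct VAAPIFormatDescriptor {
    unsigned int       fourcc;    /* libva image fourcc, e.g. VA_FOURCC_NV12. */
    unsigned int       rt_format; /* libva render target format. */
    enum AVPixelFormat pix_fmt;   /* Corresponding AVPixelFormat. */
} VAAPIFormatDescriptor;

static const VAAPIFormatDescriptor vaapi_format_map[] = {
    { VA_FOURCC_NV12, VA_RT_FORMAT_YUV420, AV_PIX_FMT_NV12    },
    { VA_FOURCC_I420, VA_RT_FORMAT_YUV420, AV_PIX_FMT_YUV420P },
    { VA_FOURCC_UYVY, VA_RT_FORMAT_YUV422, AV_PIX_FMT_UYVY422 },
};

/* Look up the descriptor matching a given lavu pixel format, or NULL. */
static const VAAPIFormatDescriptor *vaapi_format_from_pix_fmt(enum AVPixelFormat pix_fmt)
{
    int i;
    for (i = 0; i < FF_ARRAY_ELEMS(vaapi_format_map); i++)
        if (vaapi_format_map[i].pix_fmt == pix_fmt)
            return &vaapi_format_map[i];
    return NULL;
}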

* Some memory leaks fixed, found by running under valgrind.  There is still a 
lot more valgrind output to go through here because the Intel libva driver is 
not at all clean, but the memory footprint no longer visibly tends to infinity.

* All error checks are now for < 0 rather than != 0.
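
That is, the pattern is now uniformly (do_something() being a stand-in):

ret = do_something();
if (ret < 0)
    goto fail;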

* References to libva VP9 and H.265 constants are now guarded, so it should 
build against a recent but not current libva.  If you do build that way, note 
that FATE fails more tests with the older libva because the Intel driver 
prints spurious output to stdout.
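
The guards look something like this (the version number here is illustrative, 
not necessarily the exact cutoff used):

#if VA_CHECK_VERSION(0, 38, 0)
    case AV_CODEC_ID_VP9:
        profile = VAProfileVP9Profile0;
        break;
#endif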


Thanks,

- Mark



An aside on hardware-only transcode:

It works given something like:

./ffmpeg -threads 1 -vaapi :0 -hwaccel vaapi -hwaccel_output_format vaapi_vld \
    -i in.mp4 -an -vf vaapi_scale=size=1280x720:force_vaapi_out=1 \
    -c:v vaapi_hevc -qp 26 -idr_interval 120 out.mp4

(the scale filter is required to make the formats do the right thing) and the 
following patch:

diff --git a/ffmpeg_filter.c b/ffmpeg_filter.c
index bf484bb..9acf851 100644
--- a/ffmpeg_filter.c
+++ b/ffmpeg_filter.c
@@ -420,7 +420,7 @@ static int configure_output_video_filter(FilterGraph *fg, OutputFilter *ofilter,
     if (ret < 0)
         return ret;

-    if (codec->width || codec->height) {
+    if (0) {
         char args[255];
         AVFilterContext *filter;
         AVDictionaryEntry *e = NULL;

Can anyone explain what all of the code around here is actually trying to 
achieve?  In practice, the question it's asking is really "is this not the 
first time the filter graph has been initialised" (the codec sizes are not 
initialised the first time, and are thereafter).  This is fatal to the hwaccel 
transcode setup, because we get one initialisation with the software decoder 
format (YUV420P) and then a reinitialisation once the hardware format is known 
(VAAPI).  The code after this point then inserts a scale filter, which barfs 
when given opaque VAAPI frames.


PS:  Can we make the string form of AV_PIX_FMT_VAAPI "vaapi" rather than 
"vaapi_vld"?  I don't know what the compatibility implications of this would be 
(if any).
