Thanks, patch looks fine to me now.
While testing, I came across some very weird behavior:
The following command line:
./ffmpeg -hwaccel cuvid -c:v h264_cuvid -i test.mkv -an -sn -c:v h264_nvenc
-preset slow -qp 22 -bf 0 -f null -
Sometimes, it runs into the following error:
[h264_nvenc @ 0x3f7d8c0] Failed locking bitstream buffer: invalid param (8)
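For context, that message comes from the nvEncLockBitstream call in nvenc.c;
paraphrased from memory here, so take the exact fields and placement with a
grain of salt:

    NV_ENC_LOCK_BITSTREAM lock_params = { 0 };

    lock_params.version = NV_ENC_LOCK_BITSTREAM_VER;
    lock_params.doNotWait = 0;
    lock_params.outputBitstream = tmpoutsurf->output_surface;

    /* this is the call that comes back with NV_ENC_ERR_INVALID_PARAM (8) */
    nv_status = p_nvenc->nvEncLockBitstream(ctx->nvencoder, &lock_params);
    if (nv_status != NV_ENC_SUCCESS)
        return nvenc_print_error(avctx, nv_status, "Failed locking bitstream buffer");

So the driver is rejecting the lock request itself, not the encode call.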
Omitting "-hwaccel cuvid" prevents it from happening.
And, that's the weird part: omitting "-bf 0" also prevents it from happening.
The weird thing about that is that 0 is the default. And some quickly added
debug printfs show that the affected values indeed do not change:
avctx->max_b_frames is 0 in both cases.
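For reference, the printf was nothing fancier than something along these
lines, dropped into nvenc_setup_encoder (the exact spot is approximated):

    /* quick-and-dirty check of the b-frame setup nvenc actually sees */
    printf("nvenc debug: max_b_frames=%d frameIntervalP=%d\n",
           avctx->max_b_frames, ctx->encode_config.frameIntervalP);

It prints the same values whether -bf 0 is given explicitly or not.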
Reverting this patch also prevents it from happening, but I somehow doubt it's
the fault of this patch, especially as just running that command over and over
again makes it work in ~50% of the cases.
Also, the same command line, but with -bf 1 or any higher number causes:
[h264_nvenc @ 0x2c788c0] EncodePicture failed!: no encode device (1)
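That return code should map onto the NVENCSTATUS enum in nvEncodeAPI.h, if
I'm reading the header right (values paraphrased):

    NV_ENC_SUCCESS              = 0,
    NV_ENC_ERR_NO_ENCODE_DEVICE = 1,  /* the "no encode device (1)" here */
    ...
    NV_ENC_ERR_INVALID_PARAM    = 8,  /* the "invalid param (8)" above */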
This is not a new issue; it has been happening all along. It only happens when
-hwaccel cuvid is specified.
It's not an issue with the CUDA Frame Input code in nvenc either, as passing
cuda frames via -vf hwupload_cuda works flawlessly.
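That is, something along these lines (typed from memory, not the verbatim
command I ran):

./ffmpeg -i test.mkv -an -sn -vf hwupload_cuda -c:v h264_nvenc
-preset slow -qp 22 -bf 1 -f null -

goes through without errors.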
It only happens in the combination of direct cuda frame input from cuvid to
nvenc.
Like I said, this is not a regression from this patch; I just wanted to bring
it to attention, as it somehow feels like a driver issue to me.
With this new -bf-related issue, I'm not so sure about that anymore though, and
I'm wondering if something in ffmpeg corrupts memory somewhere, somehow, when
-bf is set.
Updated my nvidia driver (and gcc), and the super weird behavior went
away, only leaving the one that was there before (-hwaccel cuvid + -bf 1).
Applied the patch now; I don't think any of this was caused by it in the
first place.
./ffmpeg -hwaccel cuvid -c:v h264_cuvid -i test.mkv -an -sn -c:v
h264_nvenc -preset slow -qp 22 -bf 1 -f null -
This still throws the "no encode device (1)" error though, which doesn't
make any sense to me.
Would very much appreciate it if someone from nvidia could have a look at
that, as it feels like a driver issue to me.
It only happens when doing -hwaccel cuvid decoding straight to nvenc,
and only when b-frames are enabled.