On 2016/8/17 2:27, Mark Thompson wrote:
> On 16/08/16 03:44, Jun Zhao wrote:
>>
>>
>> On 2016/8/16 10:14, Chao Liu wrote:
>>> On Mon, Aug 15, 2016 at 6:00 PM, Jun Zhao <mypopy...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On 2016/8/16 1:48, Jean-Baptiste Kempf wrote:
>>>>> On 15 Aug, Hendrik Leppkes wrote :
>>>>>>> On Mon, Aug 15, 2016 at 10:22 AM, Jun Zhao <mypopy...@gmail.com>
>>>> wrote:
>>>>>>>>> add libyami decoder/encoder/vpp in ffmpeg; for the build steps,
>>>>>>>>> please refer to: https://github.com/01org/ffmpeg_libyami/wiki/Build
>>>>>>>>>
>>>>>>>
>>>>>>> We've had patches for yami before, and they were not applied because
>>>>>>> many developers did not agree with adding more wrappers for the same
>>>>>>> hardware decoders which we already support.
>>>>>>> Please refer to the discussion in this thread:
>>>>>>> https://ffmpeg.org/pipermail/ffmpeg-devel/2015-January/167388.html
>>>>>>>
>>>>>>> The concerns and reasons brought up there should not really have
>>>> changed.
>>>>> I still object very strongly against yami.
>>>>>
>>>>> It is a library that does not bring much that we could not do ourselves,
>>>>> it duplicates a lot of our code, it is the wrong level of abstraction
>>>>> for libavcodec, it is using a bad license and there is no guarantee of
>>>>> maintainership in the future.
>>>>
>>>> I understand the concerns after reading the above thread. For Intel GPU
>>>> hardware-accelerated decode/encode, there are now 3 options in ffmpeg:
>>>>
>>>> 1. ffmpeg and QSV (Media SDK)
>>>> 2. ffmpeg vaapi hw accelerate decoder/native vaapi encoder
>>>> 3. ffmpeg and libyami
>>>>
>>> Sorry for this little diversion: what are the differences between QSV and
>>> vaapi?
>>> My understanding is that QSV has better performance, while vaapi supports
>>> more decoders / encoders. Is that correct?
>>> It would be nice if there were some data showing the speed of these HW
>>> accelerated decoders / encoders.
>>
>> It's right that QSV has better performance, but libyami has more
>> decoders/encoders than the VAAPI hw-accelerated decoder/encoder. :)
>>
>> According to our profiling, the relative speed of the three options is:
>> QSV > ffmpeg with libyami > VAAPI hw-accelerated decoder with native
>> VAAPI encoder
> 
> In a single ffmpeg process I believe that result, but I'm not sure that it's 
> the question you really want to ask.
> 
> The lavc VAAPI hwaccel/encoder are both single-threaded, and while they 
> overlap operations internally where possible the single-threadedness of 
> ffmpeg (the program) itself means that they will not achieve the maximum 
> performance.  If you really want to compare the single-transcode performance 
> like this then you will want to make a test program which does the threading 
> outside lavc.

I agree with you :). Currently I use threads in the ffmpeg/yami encoder/decoder,
while QSV (Media SDK) uses threads inside the library; in this respect, comparing
one-way (1 input/1 output) transcode speed is unfair to ffmpeg/vaapi.
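
To make the comparison above concrete, the two native ffmpeg paths can be
invoked roughly like this (a sketch only; the device path, input file, and
codec choice are assumptions that depend on the system):

```shell
# VAAPI path: decode and encode both stay on the GPU
# (/dev/dri/renderD128 is an assumed render node)
ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi \
       -hwaccel_output_format vaapi \
       -i input.mp4 -c:v h264_vaapi output_vaapi.mp4

# QSV path: Media SDK manages its own threading inside the library
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 -c:v h264_qsv output_qsv.mp4
```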

> 
> In any case, I don't believe that the single generic transcode setup is a use 
> that many people are interested in (beyond testing to observe that hardware 
> encoders kindof suck relative to libx264, then using that instead).
> 
> To my mind, the cases where it is interesting to use VAAPI (or really any 
> hardware encoder on a normal PC-like system) are:
> 
> * You want to do /lots/ of simultaneous transcodes in some sort of server 
> setup (often with some simple transformation, like a scale or codec change), 
> and want to maximise the number you can do while maintaining some minimum 
> level of throughput on each one.  You can benchmark this case for VAAPI by 
> running lots of instances of ffmpeg, and I expect that the libyami numbers 
> will be precisely equivalent because libyami is using VAAPI anyway and the 
> hardware is identical.
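
For reference, that many-instances benchmark can be approximated with a small
shell loop (a sketch; the instance count, input file, and device path are
placeholders, and real measurements should also track per-stream throughput):

```shell
# Launch N independent VAAPI transcodes and measure total wall-clock time
N=8
for i in $(seq 1 "$N"); do
    ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi \
           -hwaccel_output_format vaapi \
           -i input.mp4 -c:v h264_vaapi -f null - &
done
time wait   # all instances share the same hardware, so scaling flattens out
```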
> 
> * You want to do other things with the surfaces on your GPU.  Here, using 
> VAAPI directly is good because the DRM objects are easily exposed so you can 
> move surfaces to and from whatever other stuff you want to use (OpenCL, DRI2 
> in X11, etc.).
> 
> * You want to minimise CPU/power use when doing one or a small number of live 
> encodes/decodes (for example, video calling or screen recording).  Here 
> performance is not really the issue - any of these solutions suffices but we 
> should try to avoid it being too hard to use.
> 
> So, what do you think libyami brings to any of these cases?  I don't really 
> see anything beyond the additional codec support* - have I missed something?

vpp is missing some features, e.g. de-noise/de-interlace/..., but I think
filling the gap is not difficult; I hope I can submit some patches for this. :)

> 
> libyami also (I believe, correct me if I'm wrong) has Intel-specificity - 
> this is significant given that mesa/gallium has very recently gained VAAPI 
> encode support on AMD VCE (though I think it doesn't currently work well with 
> lavc, I'm going to look into that soon).
> 
> I haven't done any detailed review of the patches; I'm happy to do so if 
> people are generally in favour of having the library.
> 
> Thanks,
> 
> - Mark
> 
> 
> * Which is fixable.  Wrt VP8, I wrote a bit of code but abandoned it because 
> I don't know of anyone who actually cares about it.  Do you have some useful 
> case for it?  If so, I'd be happy to implement it.  I am already intending to 
> do VP9 encode when I have hardware available; VP9 decode apparently already 
> works though I don't have hardware myself.

Glad to hear you will implement a VP9 encoder. As for the VP8 decoder/encoder,
I think a lot of WebM files will benefit from it.
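
As a quick check, VP9 decode through the existing VAAPI hwaccel can be
exercised like this (a sketch; it assumes hardware with VP9 decode support
and discards the output to time decode alone):

```shell
# Decode-only run: -f null - drops the frames after decoding on the GPU
ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi \
       -i input.webm -f null -
```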

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
