Hi,

Comments below:

> On Dec 14, 2016, at 6:48 AM, Daniel Vetter <daniel at ffwll.ch> wrote:
> 
> On Wed, Dec 14, 2016 at 3:11 PM, Daniel Vetter <daniel at ffwll.ch> wrote:
>> On Wed, Dec 14, 2016 at 03:32:16PM +0200, Mikko Perttunen wrote:
>>> On 14.12.2016 15:05, Daniel Vetter wrote:
>>>> On Wed, Dec 14, 2016 at 02:41:28PM +0200, Mikko Perttunen wrote:
>>>>> On 14.12.2016 14:30, Daniel Vetter wrote:
>>>>>> On Wed, Dec 14, 2016 at 01:16:10PM +0200, Mikko Perttunen wrote:
>>>>>>> This series adds IOMMU support to Host1x and TegraDRM
>>>>>>> and adds support for the VIC (Video Image Compositor)
>>>>>>> host1x client. The series is available as a git repository at
>>>>>>> git://github.com/cyndis/linux.git; branch vic-2.
>>>>>>> 
>>>>>>> A userspace test case for VIC can be found at
>>>>>>> https://github.com/cyndis/drm/tree/work/tegra.
>>>>>>> The test case is in tests/tegra and is called submit_vic.
>>>>>>> The test case and the TRM include full headers and documentation
>>>>>>> for programming the unit. The unit by itself, however, does not
>>>>>>> readily map to existing userspace library interfaces, so
>>>>>>> implementations for those are not provided.
>>>>>> 
>>>>>> Afaik libva has an entire pile of post-processing support. Pretty sure
>>>>>> other video transcode libraries have similar interfaces, so it should be
>>>>>> possible to implement this.
>>>>> 
>>>>> We don't have any actual video transcoding support though, so unless it's
>>>>> possible to just implement a part of libva and defer the rest to some CPU
>>>>> implementation, I don't see how this is useful. I suppose I could
>>>>> implement a GStreamer plugin for colorspace conversion or resizing,
>>>>> since those are very modular.
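
A plugin like that would also be tiny on the pipeline side. Just to
illustrate what I mean (untested sketch; "vicconvert" is a made-up element
name for the hypothetical VIC plugin, not something that exists):

#include <gst/gst.h>

int main(int argc, char **argv)
{
    GError *error = NULL;
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;

    gst_init(&argc, &argv);

    /* videotestsrc produces RGBA; the hypothetical "vicconvert" element
     * would do the RGBA -> NV12 conversion on the VIC instead of
     * videoconvert doing it on the CPU. */
    pipeline = gst_parse_launch(
        "videotestsrc num-buffers=100 ! video/x-raw,format=RGBA "
        "! vicconvert ! video/x-raw,format=NV12 ! fakesink",
        &error);
    if (!pipeline) {
        g_printerr("pipeline error: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until the stream finishes or fails. */
    bus = gst_element_get_bus(pipeline);
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_EOS | GST_MESSAGE_ERROR);

    gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}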
>>>> 
>>>> Hm, I guess the question then is, how did that get enabled?
>>> 
>>> What is "that"? I'm not exactly sure.
>>> 
>>> Our architecture is such that there's the VIC, which handles colorspace
>>> conversion, rescaling and blitting, and can do some 2D post-processing
>>> effects as well.
>>> 
>>> Then there's the separate NVDEC that is a video bitstream decoder. There's
>>> no support for that at the moment. I am working on the IP side of that.
>>> 
>>> The video processing pipeline is then such that NVDEC is fed the bitstream;
>>> NVDEC outputs a YUV picture in a specific format; VIC takes that YUV picture
>>> and converts/rescales it into the desired format. Or if we are encoding
>>> video, VIC takes your RGB image, converts it into a format that NVENC
>>> understands, and so on.
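
As an aside, that split maps pretty much one-to-one onto libva entrypoints,
which matters for the libva discussion further down. The mapping is my
assumption, not something this series implements: NVDEC ~ VAEntrypointVLD,
VIC ~ VAEntrypointVideoProc, NVENC ~ VAEntrypointEncSlice. A driver can
expose any subset, and clients can probe for what's there:

/* Sketch using standard libva calls; only the Tegra engine mapping above
 * is hypothetical. Error handling mostly omitted. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR); /* node may differ */
    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor, num, i;
    VAEntrypoint *ep;

    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS)
        return 1;

    ep = malloc(vaMaxNumEntrypoints(dpy) * sizeof(*ep));
    /* Pure post-processing is queried under VAProfileNone. */
    vaQueryConfigEntrypoints(dpy, VAProfileNone, ep, &num);
    for (i = 0; i < num; i++)
        if (ep[i] == VAEntrypointVideoProc)
            printf("post-processing entrypoint available\n");

    free(ep);
    vaTerminate(dpy);
    return 0;
}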
>>> 
>>> So with just VIC support, I could implement some simple 2D things. I don't
>>> know if anyone would want to specifically use the VIC for those since
>>> applications already have fast CPU algorithms. For the video pipeline,
>>> using VIC is nice since these units can synchronize work without CPU
>>> involvement, and when you're already using NVDEC or NVENC it's barely any
>>> extra effort to involve VIC as well. It can also be useful in
>>> power-sensitive situations, but we aren't really fit for those with the
>>> upstream kernel anyway :)
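
The "without CPU involvement" part deserves spelling out: a job's command
stream can wait on another engine's syncpoint in hardware, so e.g. VIC only
starts once NVDEC's increment has landed, and userspace needs a CPU-side
wait only at the very end of the pipeline. Roughly, against the tegra_drm
UAPI as I read it (untested):

#include <xf86drm.h>
#include <tegra_drm.h> /* libdrm's copy of the Tegra DRM UAPI */

/* Block until syncpoint `id` reaches `thresh`; both would come back from
 * the submit ioctl for the last job in the pipeline. */
static int wait_pipeline_done(int fd, __u32 id, __u32 thresh)
{
    struct drm_tegra_syncpt_wait args = {
        .id = id,
        .thresh = thresh,
        .timeout = 1000, /* ms */
    };

    return drmIoctl(fd, DRM_IOCTL_TEGRA_SYNCPT_WAIT, &args);
}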
>> 
>> Ah, I thought NVDEC was already enabled, since for i915 that's how we
>> went about things (we have a pretty much exactly matching split in the
>> various video-related engines). But if that's not there yet then no
>> worries, all fine.
>> 
>> Since you do seem to plan to enable everything anyway, it might be worth
>> going directly with something like libva or libvdpau or whatever the cool
>> thing is. libva is my recommendation since it works on non-X11 too afaik,
>> but I have 0 clue. And it might be worth checking out whether you can't do
>> a super-basic libva driver that only does the post-processing stuff. With
>> libva you can import/export images, so it might be possible even ... And
>> directly doing the full video engine support instead of a one-off in
>> GStreamer sounds more sensible to me.
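
Agreed on trying libva first. The client side of a post-processing-only
pipeline is small, so a VPP-only driver doesn't sound crazy. Roughly
(untested sketch, assuming the driver advertises VAEntrypointVideoProc;
the caller already owns the display and the two surfaces):

#include <string.h>
#include <va/va.h>
#include <va/va_vpp.h>

/* Convert/scale `src` into `dst` with a single VPP job. */
static void blit(VADisplay dpy, VASurfaceID src, VASurfaceID dst,
                 unsigned int dst_w, unsigned int dst_h)
{
    VAConfigID config;
    VAContextID ctx;
    VABufferID buf;
    VAProcPipelineParameterBuffer pipeline;

    vaCreateConfig(dpy, VAProfileNone, VAEntrypointVideoProc,
                   NULL, 0, &config);
    vaCreateContext(dpy, config, dst_w, dst_h, VA_PROGRESSIVE,
                    &dst, 1, &ctx);

    memset(&pipeline, 0, sizeof(pipeline));
    pipeline.surface = src; /* NULL regions mean a full-surface blit */

    vaCreateBuffer(dpy, ctx, VAProcPipelineParameterBufferType,
                   sizeof(pipeline), 1, &pipeline, &buf);

    vaBeginPicture(dpy, ctx, dst); /* dst selects the output surface */
    vaRenderPicture(dpy, ctx, &buf, 1);
    vaEndPicture(dpy, ctx);
    vaSyncSurface(dpy, dst);

    vaDestroyBuffer(dpy, buf);
    vaDestroyContext(dpy, ctx);
    vaDestroyConfig(dpy, config);
}

The import/export Daniel mentions would go through surface attributes at
vaCreateSurfaces() time (VASurfaceAttribMemoryType and friends), so wiring
this up to dma-bufs from the rest of the stack looks feasible too.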
> 
> Silly me forgot to add the experts, i.e. Sean (current libva
> maintainer) and libva mailing lists.
> -Daniel
