On 09/01/2022 21:15, Soft Works wrote:
-----Original Message-----
From: ffmpeg-devel <ffmpeg-devel-boun...@ffmpeg.org> On Behalf Of Mark
Thompson
Sent: Sunday, January 9, 2022 7:39 PM
To: ffmpeg-devel@ffmpeg.org
Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When deriving a
hwdevice, search for existing device in both directions
On 05/01/2022 03:38, Xiang, Haihao wrote:
... this patch really fixed some issues for me and others.
Can you explain this in more detail?
I'd like to understand whether the issues you refer to are something which
would be fixed by the ffmpeg utility allowing selection of devices for
libavfilter, or whether they are something unrelated.
(For library users the currently-supported way of getting the same device
again is to keep a reference to the device and reuse it. If there is some
case where you can't do that then it would be useful to hear about it.)
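As an illustration of that pattern, a library user keeps the AVBufferRef from
device creation and hands out new references wherever the same device is needed
again (a sketch using the libavutil API, error handling omitted; the device
type is just an example):

```c
AVBufferRef *device = NULL;

/* Create the device once and keep the reference. */
av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_QSV, NULL, NULL, 0);

/* Hand a new reference to each consumer (decoder, filter graph, ...)
 * that needs the *same* device, instead of deriving a fresh one. */
AVBufferRef *for_decoder = av_buffer_ref(device);
AVBufferRef *for_filter  = av_buffer_ref(device);
```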
Hi Mark,
they have 3 workaround patches on their staging repo, but I'll let Haihao
answer in detail.
I have another question. I've been searching high and low, yet I can't
find the message. Do you remember that patch discussion from (quite a
few) months ago about another QSV change (something about device
creation from the command line, IIRC)? There was a command-line
example with QSV, and you correctly remarked something like:
"Do you even know that just for this command line, there are 5 device
creations happening in the background, implicit and explicit, and in
one case (or 2), it's not even creating the specified device but
a session for the default device instead"
(just roughly from memory)
Do you remember - or was it Philip?
<https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2021-March/277731.html>
Anyway, this is something that the patch improves. There has been one
other commit since that time regarding explicit device creation, from
Haihao (IIRC), which already reduced the number of device creations and
fixed the incorrect default-session creation.
Yes, the special ffmpeg utility code to work around the lack of
AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX in the libmfx decoders caused confusion
by working differently to everything else - implementing that and getting rid
of the workarounds was definitely a good thing.
My patch tackles this from another side: at that time, you (or Philip)
explained that the secondary context that QSV requires (VAAPI, D3Dx),
which is initially created when setting up the QSV device, does not
even get used when subsequently deriving to a context of that kind;
a new device is created instead.
That's another scenario which is fixed by this patch.
It does sound like you just always want a libmfx device to be derived from the
thing which is really there sitting underneath it.
Then it would be clear that derivation in the other direction would always have
the retrieve-original-device meaning, rather than a fiction that you are
deriving a new D3D device from a libmfx one which doesn't match what is
actually happening.
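For reference, that direction of derivation does exist in the API today: a QSV
device can be derived from the platform device sitting underneath it. A sketch
with the libavutil API (error handling omitted, and the Linux device path is
just an example):

```c
AVBufferRef *vaapi = NULL, *qsv = NULL;

av_hwdevice_ctx_create(&vaapi, AV_HWDEVICE_TYPE_VAAPI,
                       "/dev/dri/renderD128", NULL, 0);

/* The QSV device wraps the VAAPI device underneath it, so "deriving
 * back" to VAAPI naturally means retrieving the original device. */
av_hwdevice_ctx_create_derived(&qsv, AV_HWDEVICE_TYPE_QSV, vaapi, 0);
```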
It's a hybrid device context; that's the reason why QSV is affected
more than all other hwaccels: it consists of a QSV session that is
itself already DERIVED from a VAAPI or D3Dx device.
Example (let's assume Windows with D3D9): you go into decoding with a
QSV decoder in a QSV context, and then you want to use an OpenCL filter.
This requires an OpenCL context, and of course you want to share the
frame memory. For memory sharing, OpenCL requires the underlying context
of the QSV session - in this example, D3D9.
Before this patch - like you said - deriving devices was usually (except
for reverse hwmap) forward-only. That means you are stuck in this
situation: you could (forward-)derive to a D3D9 context, but that doesn't
help. For sharing the memory, you need to provide the original hw device
to OpenCL; you can't supply just another, newly derived device of the
same type. And there is (was) no way to get the original hw context.
If you are a library user then you get the original hw context by reusing the
reference to it that you made when you created it. This includes libavfilter
users, who can provide a hw device to each hwmap separately.
If you are an ffmpeg utility user then I agree there isn't currently a way to
do this for filter graphs, hence the solution of providing a way in the
ffmpeg utility to set hw devices per-filter.
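For context, the ffmpeg utility's existing named-device options already cover
the single-graph case; an illustrative command line (the exact filter chain
depends on the use case):

```
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 \
       -init_hw_device opencl=ocl@va -filter_hw_device ocl \
       -hwaccel vaapi -hwaccel_device va -hwaccel_output_format vaapi \
       -i input.mp4 \
       -vf hwmap,unsharp_opencl,hwmap=derive_device=vaapi:reverse=1 \
       -c:v h264_vaapi output.mp4
```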
Anyway I'm wondering whether it can even be logically valid to derive
from one device to another and then to another instance of the previous
device type.
From my understanding, "deriving" or "hw mapping" from one device to
another means establishing a different accessor to a common
resource, so that the same data can be reached either way.
Now let's assume a forward device-derivation chain like this:
D3D_1 >> OpenCL_1 >> D3D_2
You can't do this because device derivation is unidirectional (and acyclic) -
you can only derive an OpenCL device from D3D (9 or 11), not the other way
around.
Similarly, you can only map frames from D3D to OpenCL. That's why the hwmap
reverse option exists, because of cases where you actually want the other
direction which doesn't really exist.
(There are some cases like VAAPI <-> DRM which do offer frame mapping in both
directions, but that's really because DRM does not itself have any frame management.
Device derivation still only makes sense in one direction, as DRM sits beneath VAAPI.)
We have D3D surfaces, then we share them with OpenCL. Both *_1
contexts provide access to the same data.
Then we derive again "forward only" and we get a new D3D_2
context. It is derived from OpenCL_1, so it must provide
access to the same data as OpenCL_1 AND D3D_1.
Now we have two different D3D contexts which are supposed to
provide access to the same data!
1. This doesn't even work technically
- neither from D3D (iirc)
- nor from ffmpeg (not cleanly)
2. This doesn't make sense at all. There should always be
only a single device context of a device type for the same
resource
3. Why would somebody even want this - what kind of use case?
The multiple derivation case is for when a single device doesn't work.
Typically that involves multiple separate components which don't want to
interact with the others, for example:
* When something thread-unsafe might happen, so different threads need separate
instances to work with.
* When global options have to be set on a device, so a component which does
that needs its own instance to avoid interfering with anyone else.
* When some code (an external library, say) requires ownership of a device
instance, so you need a new one to give to it.
* When components are independent and your code is just simpler that way.
Thanks,
- Mark
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".