On 14-08-14 at 13:53, Christian König wrote:
>> But because of driver differences I can't implement it as a straight wait 
>> queue. Some drivers may not have a reliable interrupt, so they need a custom 
>> wait function (qxl). Some may need to do extra flushing to get fences 
>> signaled (vmwgfx), others need some locking to protect against gpu lockup 
>> races (radeon, i915??). And nouveau doesn't use wait queues at all, but 
>> rolls its own.
> But when all those drivers need a special wait function, how can you still 
> justify the common callback when a fence is signaled?
>
> If I understood it right, the use case for this was waiting for any fence of 
> a list of fences from multiple drivers, but if each driver needs special 
> handling for its wait, how can that work reliably?
TTM doesn't rely on the callbacks. It will call .enable_signaling when 
.is_signaled is NULL, to make sure that fence_is_signaled returns true sooner.
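As a sketch of that fallback, here is the logic in userspace C. All of the names (struct fence, fence_ops, the signaled/signaling_enabled fields) are invented for illustration; this is not the real kernel fence API, just the shape of "poll .is_signaled if it exists, otherwise call .enable_signaling once and rely on the signaled bit being set later":

```c
/* Userspace sketch with invented names, not the real struct fence API. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct fence;

struct fence_ops {
    /* Optional cheap poll of hardware state; may be NULL. */
    bool (*is_signaled)(struct fence *f);
    /* Arrange (irq, flush, ...) for f->signaled to become true.
     * Returns false if the fence turned out to be signaled already. */
    bool (*enable_signaling)(struct fence *f);
};

struct fence {
    const struct fence_ops *ops;
    bool signaled;
    bool signaling_enabled;
};

static bool fence_is_signaled(struct fence *f)
{
    if (f->signaled)
        return true;

    if (f->ops->is_signaled) {
        if (f->ops->is_signaled(f))
            f->signaled = true;
        return f->signaled;
    }

    /* No poll hook: enable signaling once, then rely on f->signaled
     * being set asynchronously. */
    if (!f->signaling_enabled) {
        f->signaling_enabled = true;
        if (!f->ops->enable_signaling(f))
            f->signaled = true;     /* already done when we enabled */
    }
    return f->signaled;
}

/* Example ops for a fence that turns out to be complete by the time
 * signaling is enabled. */
static bool done_enable_signaling(struct fence *f) { (void)f; return false; }
static const struct fence_ops done_ops = { NULL, done_enable_signaling };
```

The point is that a caller only ever sees `fence_is_signaled()`; whether the answer comes from polling or from enabled signaling is a per-driver detail.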

QXL is completely awful. I've seen some patches to add dma-buf support, but 
I'll make sure it never supports importing from/exporting to other devices. 
This should reduce the insanity factor there.
If I understand QXL correctly, sometimes fences may never signal at all due to 
virt-hw bugs.

nouveau (pre nv84) has no interrupt for completed work, but it has a reliable 
is_signaled. So .enable_signaling only checks whether the fence is signaled 
here. A custom wait function makes sure things work correctly, and also 
signals all unsignaled fences for that context. I preserved the original wait 
from before the fence conversion.
Nouveau keeps a global list of unsignaled fences, so they will all signal 
eventually.
I may have to limit importing/exporting dma-bufs to other devices, or add 
delayed work that periodically checks all contexts for completed fences for 
this to work cross-device.
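The pre-nv84 scheme above can be sketched in userspace C. Everything here is invented for illustration (nv_fence, nv_context, the pending array): a per-context completed seqno makes is_signaled a cheap compare, the wait polls because there is no interrupt, and each poll also signals every older pending fence. The same update function is what periodic delayed work would call:

```c
/* Userspace sketch of a seqno-based fence context; names are made up. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_PENDING 16

struct nv_fence {
    uint32_t seqno;
    bool signaled;
};

struct nv_context {
    uint32_t hw_seqno;                     /* last seqno the GPU finished */
    struct nv_fence *pending[MAX_PENDING]; /* unsignaled fences, in order */
    int npending;
};

/* Signal every pending fence the hardware has passed; keep the rest.
 * The signed-difference compare handles 32-bit seqno wraparound. */
static void nv_context_update(struct nv_context *ctx)
{
    int i, kept = 0;

    for (i = 0; i < ctx->npending; i++) {
        struct nv_fence *f = ctx->pending[i];

        if ((int32_t)(ctx->hw_seqno - f->seqno) >= 0)
            f->signaled = true;            /* completed: signal it */
        else
            ctx->pending[kept++] = f;      /* still outstanding */
    }
    ctx->npending = kept;
}

/* Custom wait: poll until the fence signals (no completion interrupt).
 * Bounded here so the sketch always terminates; real code would sleep
 * between polls and use a timeout. */
static bool nv_fence_wait(struct nv_context *ctx, struct nv_fence *f,
                          int max_polls)
{
    while (max_polls-- > 0) {
        nv_context_update(ctx);            /* also signals older fences */
        if (f->signaled)
            return true;
    }
    return false;
}
```

Waiting on one fence signals everything older in the same context as a side effect, which is why a global list of unsignaled fences eventually drains as long as something keeps polling.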

nv84+ has a sane interrupt, so I use it. :-)

Radeon with the fence conversion has the delayed work for handling lockups, 
which also checks for completed fences.
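A minimal sketch of that kind of periodic lockup check, assuming a per-ring sequence counter (the struct and field names are invented, and radeon's actual detection is more involved): each tick compares the hardware seqno against the value seen last tick, and no progress while work is still pending is treated as a lockup.

```c
/* Userspace sketch with invented names; in the kernel this would run
 * from periodically requeued delayed work. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct ring {
    uint32_t hw_seqno;        /* what the GPU reports as completed */
    uint32_t last_seen_seqno; /* snapshot from the previous tick */
    uint32_t emitted_seqno;   /* last fence emitted to this ring */
    bool     locked_up;
};

static void lockup_check_tick(struct ring *r)
{
    if (r->hw_seqno != r->last_seen_seqno) {
        r->last_seen_seqno = r->hw_seqno; /* progress: all good */
        return;
    }
    if (r->hw_seqno != r->emitted_seqno)
        r->locked_up = true;  /* pending work, no progress: lockup */
}
```

An idle ring (hw_seqno caught up with emitted_seqno) never trips the check, so only a ring with outstanding fences and a stalled counter is flagged.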

~Maarten
