On 19.01.2022 12:50, Hans Petter Selasky wrote:
> On 1/18/22 22:35, Vladimir Kondratyev wrote:

>> Any ideas how to avoid it in a generic way?

> Hi,
>
> On a non-SMP system this will lead to a deadlock.
>
> Is it possible to pre-lock this spin lock earlier, so that it is
> already locked? So instead of
>
> while(trylock());
>
> you have:
>
> assert (trylock() XXX)
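
That is, if the spinning thread cannot be preempted, the current lock owner never gets to run and release the lock, so on a single CPU the loop never terminates. Roughly (trylock()/unlock() stand in for the real spin-lock primitives, as above):

/* Sketch only: trylock()/unlock() stand in for the real spin-lock API. */
extern int trylock(void);       /* returns non-zero on success */
extern void unlock(void);

void
consumer(void)
{
        /*
         * With preemption off, the thread that owns the lock can never
         * run to release it, so on a single CPU this busy-wait spins
         * forever.
         */
        while (!trylock())
                ;
        /* ... consumer payload ... */
        unlock();
}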


Unfortunately, vulnerable functions are called in too many code paths to patch them all.

> Or else,
>
> convert this particular lock to a native FreeBSD spinlock mutex.

It can be done for wake_up() but not for dma_fence_signal(), which suffers from this problem too. Some of the code that uses that lock expects it to be a spinlock_t.
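
For example, fence code takes that lock directly through the Linux spin-lock API (dma_fence_init() even takes a spinlock_t pointer), so the lock's type is visible far outside the wake_up() wrappers. An illustrative sketch (the helper function here is made up, but the lock usage mirrors the in-tree fence code):

#include <linux/dma-fence.h>

static void
example_signal(struct dma_fence *fence)
{
        unsigned long flags;

        /* fence->lock is a driver-supplied spinlock_t. */
        spin_lock_irqsave(fence->lock, flags);
        dma_fence_signal_locked(fence);
        spin_unlock_irqrestore(fence->lock, flags);
}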

I think we can just drop the critical section in the seqlock-related part of the dma-buf code and replace it with an rwlock, as kib@ and mjg@ suggested: keep the seqlock for the actual locking, to preserve the semantics as much as possible, and add an rwlock to implement blocking of readers. The following snippets show the code conversion required for this change:
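
Roughly, the object just grows an rwlock next to the existing seqcount and initializes it wherever the seqcount is set up today. The struct and init function below are only illustrative (rw_init(9) is the stock FreeBSD API; field names match the snippets):

#ifdef __FreeBSD__
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/rwlock.h>
#endif
#include <linux/seqlock.h>

struct obj {
        seqcount_t seq;                 /* existing seqcount */
#ifdef __FreeBSD__
        struct rwlock rwlock;           /* new: lets readers block on a writer */
#endif
        /* ... data protected by the seqcount ... */
};

static void
obj_init(struct obj *obj)
{
        seqcount_init(&obj->seq);
#ifdef __FreeBSD__
        rw_init(&obj->rwlock, "obj seq rwlock");
#endif
}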


Lock the seqlock as a reader:

retry:
        seq = read_seqcount_begin(&obj->seq);

        ... reader payload ...

        if (read_seqcount_retry(&obj->seq, seq)) {
#ifdef __FreeBSD__
                /* Wait for rwlock to be released by writer */
                rw_rlock(&obj->rwlock);
                rw_runlock(&obj->rwlock);
#endif
                goto retry;
        }


Lock the seqlock as a writer:

#ifdef __linux__
        preempt_disable();
#elif defined (__FreeBSD__)
        rw_wlock(&obj->rwlock);
#endif
        write_seqcount_begin(&obj->seq);

        ... writer payload ...

        write_seqcount_end(&obj->seq);
#ifdef __linux__
        preempt_enable();
#elif defined (__FreeBSD__)
        rw_wunlock(&obj->rwlock);
#endif
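
Note that with this conversion the reader touches the rwlock only on the retry path, i.e. only when it has actually raced a writer, so uncontended readers stay lockless and the seqlock fast path is preserved; the writer simply takes the write lock where Linux disables preemption.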

--
WBR
Vladimir Kondratyev
