On Tue, Oct 2, 2018 at 4:04 PM, Eric Dumazet <eduma...@google.com> wrote:
> On Tue, Oct 2, 2018 at 6:46 AM Dmitry Vyukov <dvyu...@google.com> wrote:
>>
>> On Tue, Oct 2, 2018 at 3:16 PM, Eric Dumazet <eduma...@google.com> wrote:
>> > On Tue, Oct 2, 2018 at 1:19 AM Dmitry Vyukov <dvyu...@google.com> wrote:
>> >>
>> >> On Tue, Oct 2, 2018 at 7:49 AM, Eric Dumazet <eduma...@google.com> wrote:
>> >>
>> >>
>> >> Does inet_frag_kill() hold fq->lock? I am missing how inet_frag_kill()
>> >> and inet_frags_exit_net() are synchronized.
>> >> Since you use smp_store_release()/READ_ONCE(), they seem to run in
>> >> parallel. But then isn't it possible that inet_frag_kill() reads
>> >> nf->dead == 0, then inet_frags_exit_net() sets nf->dead, and then we
>> >> have the same race on concurrent removal? Or, isn't it possible that
>> >> inet_frag_kill() reads nf->dead == 1, but does not set
>> >> INET_FRAG_HASH_DEAD yet, and then inet_frags_free_cb() misses the
>> >> INET_FRAG_HASH_DEAD flag?
>> >>
>> >
>> > Yes this is kind of implied in my patch.
>> > I put the smp_store_release() and READ_ONCE() exactly to document the
>> > possible races.
>> > This was the reason for my attempt in V1, doing a walk, but Herbert
>> > said the walk was not designed for doing deletes.
>> >
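To make the window concrete, here is a small userspace model of the
interleaving described above (this is only a sketch: C11 atomics stand in
for READ_ONCE()/smp_store_release(), and the two thread bodies loosely
model inet_frag_kill() and inet_frags_exit_net(); nothing here is the
kernel code itself):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Stand-ins for nf->dead and the INET_FRAG_HASH_DEAD flag. */
static atomic_int dead;
static atomic_int hash_dead;

/* Loosely models inet_frag_kill(): check 'dead', then either remove the
 * entry from the table or mark it for the teardown path. */
static void *frag_kill(void *arg)
{
        (void)arg;
        if (atomic_load_explicit(&dead, memory_order_relaxed)) {
                /* Window 2: the teardown walk may scan this entry before
                 * the store below becomes visible and miss the flag. */
                atomic_store_explicit(&hash_dead, 1, memory_order_relaxed);
        } else {
                /* Window 1: exit_net may set 'dead' and destroy the table
                 * right here, racing with this removal. */
                printf("removing entry from the hashtable\n");
        }
        return NULL;
}

/* Loosely models inet_frags_exit_net(): publish 'dead', then tear down. */
static void *exit_net(void *arg)
{
        (void)arg;
        atomic_store_explicit(&dead, 1, memory_order_release);
        if (!atomic_load_explicit(&hash_dead, memory_order_relaxed))
                printf("teardown walk did not see the flag\n");
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, frag_kill, NULL);
        pthread_create(&b, NULL, exit_net, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Build with "cc -pthread model.c"; nothing forces a bad interleaving on any
given run, the point is only that the store/load pairing by itself rules
out neither window.
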
>> > Proper synchronization will need a synchronize_rcu(), and thus a future
>> > conversion in net-next, because we cannot really
>> > add new synchronize_rcu() calls in a (struct
>> > pernet_operations).exit() without a considerable performance hit on
>> > netns dismantles.
>> >
>> > So this will require a conversion of all inet_frags_exit_net() callers
>> > to .exit_batch() to mitigate the cost.
>> >
>> > I thought of synchronize_rcu_bh() but this beast is going away soon anyway.
>>
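For readers not familiar with the batching being referred to, here is a
rough kernel-style sketch of that net-next direction (it only builds in a
kernel tree, and the example_* names are made up for illustration; struct
pernet_operations, .exit_batch(), the net->exit_list iteration and
synchronize_rcu() are the real interfaces):

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <net/net_namespace.h>

/* Illustrative stand-in for the per-netns work that
 * inet_frags_exit_net() does today from a per-netns .exit(). */
static void example_frags_exit_one(struct net *net)
{
        /* mark the per-netns frag state dead, unhash entries, ... */
}

/* Batched exit: called once with the whole list of namespaces being
 * dismantled, so the grace period below is paid once per batch instead
 * of once per namespace. */
static void example_frags_exit_batch(struct list_head *net_exit_list)
{
        struct net *net;

        list_for_each_entry(net, net_exit_list, exit_list)
                example_frags_exit_one(net);

        /* A single grace period covers every namespace in the batch,
         * so no late reader (e.g. a concurrent inet_frag_kill()) can
         * still see the old state when the freeing pass below runs. */
        synchronize_rcu();

        list_for_each_entry(net, net_exit_list, exit_list) {
                /* free the per-netns frag resources here */
        }
}

/* Would be registered with register_pernet_subsys() as usual. */
static struct pernet_operations example_frags_ops = {
        .exit_batch = example_frags_exit_batch,
};

That is the cost argument above: a synchronize_rcu() in a plain .exit()
is paid once per namespace and serializes netns dismantles on grace
periods, while .exit_batch() pays it once per batch.
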
>> But if this patch allows all the same races and corruptions, then
>> what's the point?
>
> Not really. The current races can last dozens of seconds, if you have
> one million frags.
>
> With the fix, the race is in the order of one usec on typical hosts.

Ah, I see. So it is a known bug, probably worth a comment in the code.

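The comment being suggested might look roughly like this, next to the
nf->dead check in inet_frag_kill(); the surrounding code shape and the
placement are assumptions, only nf->dead, READ_ONCE(), INET_FRAG_HASH_DEAD
and the planned .exit_batch() conversion come from this thread:

        /*
         * Best effort only: inet_frags_exit_net() may set nf->dead and
         * start dismantling the rhashtable concurrently.  The remaining
         * window is on the order of a microsecond and will be closed
         * once callers are converted to .exit_batch() plus
         * synchronize_rcu() in net-next.
         */
        if (READ_ONCE(nf->dead)) {
                /* Leave removal to the teardown path. */
                fq->flags |= INET_FRAG_HASH_DEAD;
        } else {
                /* normal removal from the rhashtable (details omitted) */
        }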