On 02/03, Andy Lutomirski wrote:
>
> @@ -911,6 +918,47 @@ static inline struct audit_context *audit_alloc_context(enum audit_state state)
>       return context;
>  }
>  
> +void audit_inc_n_rules()
> +{
> +     struct task_struct *p, *g;
> +
> +     write_lock(&n_rules_lock);
> +
> +     if (audit_n_rules++ != 0)
> +             goto out;  /* The overall state isn't changing. */
> +
> +     read_lock(&tasklist_lock);
> +     do_each_thread(g, p) {
> +             if (p->audit_context)
> +                     set_tsk_thread_flag(p, TIF_SYSCALL_AUDIT);
> +     } while_each_thread(g, p);
> +     read_unlock(&tasklist_lock);

Cosmetic, but I'd suggest using for_each_process_thread() instead
of do_each_thread/while_each_thread.
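Roughly, that loop becomes (untested, same semantics, just the newer
iterator):

	read_lock(&tasklist_lock);
	for_each_process_thread(g, p) {
		if (p->audit_context)
			set_tsk_thread_flag(p, TIF_SYSCALL_AUDIT);
	}
	read_unlock(&tasklist_lock);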

And I am not sure why n_rules_lock is rwlock_t... OK, to make
audit_alloc() more scalable, I guess. Please see below.

> @@ -942,8 +995,14 @@ int audit_alloc(struct task_struct *tsk)
>       }
>       context->filterkey = key;
>
> +     read_lock(&n_rules_lock);
>       tsk->audit_context  = context;
> -     set_tsk_thread_flag(tsk, TIF_SYSCALL_AUDIT);
> +     if (audit_n_rules)
> +             set_tsk_thread_flag(tsk, TIF_SYSCALL_AUDIT);
> +     else
> +             clear_tsk_thread_flag(tsk, TIF_SYSCALL_AUDIT);
> +     read_unlock(&n_rules_lock);

Perhaps this is fine, but n_rules_lock can't prevent the race with
audit_inc/dec_n_rules(). The problem is that audit_alloc() runs before
the new task is visible to for_each_process_thread(), so the thread-flag
update done by audit_inc/dec_n_rules() can simply miss it.

If we want to fix this race, we need something like audit_sync_flags()
called after copy_process() drops tasklist_lock, or called from a
tasklist_lock protected section (in which case it doesn't need n_rules_lock).
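
IOW, something like this uncompiled sketch (note it assumes n_rules_lock
is already turned into a spinlock_t, see below):

	/*
	 * Must be called once the new task is visible on the tasklist
	 * (say, after copy_process() drops tasklist_lock), so that
	 * audit_inc/dec_n_rules() can no longer miss it.
	 */
	void audit_sync_flags(struct task_struct *tsk)
	{
		if (!tsk->audit_context)
			return;

		spin_lock(&n_rules_lock);
		if (audit_n_rules)
			set_tsk_thread_flag(tsk, TIF_SYSCALL_AUDIT);
		else
			clear_tsk_thread_flag(tsk, TIF_SYSCALL_AUDIT);
		spin_unlock(&n_rules_lock);
	}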

Or perhaps audit_alloc() should not try to clear TIF_SYSCALL_AUDIT at all.
In either case n_rules_lock can be spinlock_t.

Oleg.
