On Wed, 2019-09-25 at 20:52 +0800, Yunfeng Ye wrote:
> It's not necessary to call kfree() inside the critical section of the
> lock, so move it out.
> 
> Signed-off-by: Yunfeng Ye <yeyunf...@huawei.com>
> ---
>  kernel/async.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/async.c b/kernel/async.c
> index 4f9c1d6..1de270d 100644
> --- a/kernel/async.c
> +++ b/kernel/async.c
> @@ -135,12 +135,12 @@ static void async_run_entry_fn(struct work_struct *work)
>       list_del_init(&entry->domain_list);
>       list_del_init(&entry->global_list);
> 
> -     /* 3) free the entry */
> -     kfree(entry);
>       atomic_dec(&entry_count);
> -
>       spin_unlock_irqrestore(&async_lock, flags);
> 
> +     /* 3) free the entry */
> +     kfree(entry);
> +
>       /* 4) wake up any waiters */
>       wake_up(&async_done);
>  }

It probably wouldn't hurt to update the patch description to mention that
async_schedule_node_domain() does the allocation outside of the lock, then
takes the lock and does only the list addition and the entry_count increment
inside the critical section, so this change simply makes the free path match
that pattern.
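
For reference, the allocation side of async_schedule_node_domain() has roughly
this shape (a condensed paraphrase to illustrate the locking pattern, with the
out-of-memory fallback omitted; not a verbatim copy of kernel/async.c):

        /* 1) allocate and initialize the entry with no lock held */
        entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
        INIT_LIST_HEAD(&entry->domain_list);
        INIT_LIST_HEAD(&entry->global_list);
        INIT_WORK(&entry->work, async_run_entry_fn);
        entry->func = func;
        entry->data = data;
        entry->domain = domain;

        /* 2) only the cookie, the list insertion and entry_count are under the lock */
        spin_lock_irqsave(&async_lock, flags);
        newcookie = entry->cookie = next_cookie++;
        list_add_tail(&entry->domain_list, &domain->pending);
        if (domain->registered)
                list_add_tail(&entry->global_list, &async_global_pending);
        atomic_inc(&entry_count);
        spin_unlock_irqrestore(&async_lock, flags);

With the patch applied, the teardown in async_run_entry_fn() mirrors that:
only the list removal and the atomic_dec() stay under async_lock, and the
kfree() runs after the unlock.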

Otherwise the change itself looks safe to me, though I am not sure there is
any performance gain to be had, so this is mostly a cosmetic patch.

Reviewed-by: Alexander Duyck <alexander.h.du...@linux.intel.com>
