On Mon, Jul 29, 2013 at 10:29:45AM +0800, Xie XiuQi wrote:
> We used to need csd_flags because csd_data was allocated with
> kmalloc when "wait == 0". If that allocation failed, we fell back
> to an on-stack csd, so "csd_data" might be invalid after
> generic_exec_single() returned.
> 
> But now we use per-cpu data for single-CPU IPI calls, and csd_data
> no longer falls back to on-stack allocation when "wait == 0".
> 
> So csd_flags is unnecessary now. Remove it.

The much simpler argument is that both callsites of
generic_exec_single() do an unconditional csd_lock().
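
For reference, a rough sketch of what those locking helpers look like in
kernel/smp.c around this version (paraphrased, not a verbatim quote of the
file) shows why the handler can unconditionally csd_unlock() once every
caller has done csd_lock() first:

	/* Sketch only; the real kernel/smp.c may differ in detail. */
	static void csd_lock_wait(struct call_single_data *csd)
	{
		/* Spin until the previous user of this csd has released it. */
		while (csd->flags & CSD_FLAG_LOCK)
			cpu_relax();
	}

	static void csd_lock(struct call_single_data *csd)
	{
		csd_lock_wait(csd);
		csd->flags |= CSD_FLAG_LOCK;

		/*
		 * Make sure the LOCK bit is visible before the csd is handed
		 * to another CPU via generic_exec_single().
		 */
		smp_mb();
	}

	static void csd_unlock(struct call_single_data *csd)
	{
		WARN_ON(!(csd->flags & CSD_FLAG_LOCK));

		/* All func()/info accesses must be done before releasing it. */
		smp_mb();
		csd->flags &= ~CSD_FLAG_LOCK;
	}

Since both call sites take csd_lock() on the csd before handing it to
generic_exec_single(), every csd that reaches
generic_smp_call_function_single_interrupt() already has CSD_FLAG_LOCK set,
and the CSD_FLAG_LOCK test in the handler is dead weight.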

> Signed-off-by: Xie XiuQi <xiexi...@huawei.com>
> ---
>  kernel/smp.c |   11 +----------
>  1 files changed, 1 insertions(+), 10 deletions(-)
> 
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 4dba0f7..cac2b6e 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -186,25 +186,16 @@ void generic_smp_call_function_single_interrupt(void)
> 
>       while (!list_empty(&list)) {
>               struct call_single_data *csd;
> -             unsigned int csd_flags;
> 
>               csd = list_entry(list.next, struct call_single_data, list);
>               list_del(&csd->list);
> 
> -             /*
> -              * 'csd' can be invalid after this call if flags == 0
> -              * (when called through generic_exec_single()),
> -              * so save them away before making the call:
> -              */
> -             csd_flags = csd->flags;
> -
>               csd->func(csd->info);
> 
>               /*
>                * Unlocked CSDs are valid through generic_exec_single():
>                */
> -             if (csd_flags & CSD_FLAG_LOCK)
> -                     csd_unlock(csd);
> +             csd_unlock(csd);

The comment is completely useless and confusing after this; why do you
leave it in? 
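That is, the hunk probably wants to end up looking something like this
(sketch only, indentation approximate), with the stale comment dropped too:

	              csd->func(csd->info);
	
	-             /*
	-              * Unlocked CSDs are valid through generic_exec_single():
	-              */
	-             if (csd_flags & CSD_FLAG_LOCK)
	-                     csd_unlock(csd);
	+             csd_unlock(csd);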