On Wed, Dec 09, 2015 at 02:59:21PM +0000, Liang, Kan wrote:
> > diff --git a/kernel/events/core.c b/kernel/events/core.c index
> > 36babfd..97aa610 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -3508,11 +3515,6 @@ retry:
> >             if (!ctx)
> >                     goto errout;
> > 
> > -           if (task_ctx_data) {
> > -                   ctx->task_ctx_data = task_ctx_data;
> > -                   task_ctx_data = NULL;
> > -           }
> > -
> >             err = 0;
> >             mutex_lock(&task->perf_event_mutex);
> >             /*
> > @@ -3526,6 +3528,10 @@ retry:
> >             else {
> >                     get_ctx(ctx);
> >                     ++ctx->pin_count;
> > +                   if (task_ctx_data) {
> > +                           ctx->task_ctx_data = task_ctx_data;
> > +                           task_ctx_data = NULL;
> > +                   }
> >                     rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
> >             }
> >             mutex_unlock(&task->perf_event_mutex);
> > 
> > 
> > Does that make sense? No point in setting task_ctx_data if we're going to
> > free the ctx and try again.
> 
> The task_ctx_data will be checked before use, so it won't crash the
> system if it's NULL.

Yeah, I know, I checked :-)

> The problem is that the LBR stack info will no longer be saved/restored
> on context switch. The user will probably get wrong call stack information.

Yep

> May I know why you want to do that?

Because this seemed like a less fragile construct. When there are multiple
event creations racing, it seems possible (albeit entirely unlikely) to
assign the allocated task_ctx_data to a ctx that we'll delete, and on
the second go-around re-allocate a ctx but be left without any
task_ctx_data to assign to it.

So by only assigning the task_ctx_data when we _know_ we've succeeded,
we'll avoid this scenario.
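
To make the ordering concrete, here's a small stand-alone C sketch (not the
actual find_get_context() code; all names and helpers are illustrative) of
the pattern: allocate the auxiliary buffer once up front, but only attach it
to a ctx after that ctx has definitely won the install race, so a losing ctx
that gets freed never takes the buffer down with it.

/* Stand-alone sketch of "attach only after the install wins". */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct ctx {
	void *task_ctx_data;	/* auxiliary per-task data, e.g. LBR stack */
};

/* Pretend the first install attempt loses a race and must be retried. */
static bool install_ctx(struct ctx *c, int attempt)
{
	return attempt > 0;
}

int main(void)
{
	void *task_ctx_data = malloc(64);	/* allocated once, up front */
	struct ctx *c = NULL;
	int attempt = 0;

retry:
	c = calloc(1, sizeof(*c));
	if (!c || !task_ctx_data)
		goto out;

	if (!install_ctx(c, attempt)) {
		/*
		 * Lost the race: free this ctx and try again.  Because
		 * task_ctx_data was not attached yet, it survives the retry
		 * and is still available for the winning ctx.
		 */
		free(c);
		c = NULL;
		attempt++;
		goto retry;
	}

	/* Only now, after the install succeeded, hand over the buffer. */
	c->task_ctx_data = task_ctx_data;
	task_ctx_data = NULL;

	printf("ctx installed on attempt %d, task_ctx_data attached: %s\n",
	       attempt, c->task_ctx_data ? "yes" : "no");
out:
	free(task_ctx_data);	/* freed only if it was never attached */
	if (c)
		free(c->task_ctx_data);
	free(c);
	return 0;
}

If the buffer were attached before the race check (as in the original code),
the losing ctx would be freed together with it and the retry would install a
ctx with no task_ctx_data, which is exactly the scenario the patch avoids.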

