On Mon, Apr 08, 2019 at 09:12:28AM +0200, Thomas-Mich Richter wrote:
> > Does the below cure things? It's not exactly pretty, but it could just
> > do the trick.
> >
> > ---
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index dfc4bab0b02b..d496e6911442 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -2009,8 +2009,8 @@ event_sched_out(struct perf_event *event,
> >  	event->pmu->del(event, 0);
> >  	event->oncpu = -1;
> >
> > -	if (event->pending_disable) {
> > -		event->pending_disable = 0;
> > +	if (event->pending_disable == smp_processor_id()) {
> > +		event->pending_disable = -1;
> >  		state = PERF_EVENT_STATE_OFF;
> >  	}
> >  	perf_event_set_state(event, state);
> > @@ -2198,7 +2198,7 @@ EXPORT_SYMBOL_GPL(perf_event_disable);
> >
> >  void perf_event_disable_inatomic(struct perf_event *event)
> >  {
> > -	event->pending_disable = 1;
> > +	event->pending_disable = smp_processor_id();
> >  	irq_work_queue(&event->pending);
> >  }
> >
> > @@ -5822,8 +5822,8 @@ static void perf_pending_event(struct irq_work *entry)
> >  	 * and we won't recurse 'further'.
> >  	 */
> >
> > -	if (event->pending_disable) {
> > -		event->pending_disable = 0;
> > +	if (event->pending_disable == smp_processor_id()) {
> > +		event->pending_disable = -1;
> >  		perf_event_disable_local(event);
> >  	}
> >
> > @@ -10236,6 +10236,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
> >
> >  	init_waitqueue_head(&event->waitq);
> > +	event->pending_disable = -1;
> >  	init_irq_work(&event->pending, perf_pending_event);
> >
> >  	mutex_init(&event->mmap_mutex);

Peter,

very good news, your fix ran over the weekend without any hit!!!

Thanks very much for your help. Do you submit this patch to the kernel
mailing list?
Most excellent, let me go write a Changelog. Could I convince you to implement arch_irq_work_raise() for s390?