On Tue, Jul 25, 2017 at 10:50:13PM +0000, Mathieu Desnoyers wrote:
> ----- On Jul 25, 2017, at 5:55 PM, Peter Zijlstra pet...@infradead.org wrote:
> 
> > On Tue, Jul 25, 2017 at 02:19:26PM -0700, Paul E. McKenney wrote:
> >> On Tue, Jul 25, 2017 at 10:24:51PM +0200, Peter Zijlstra wrote:
> >> > On Tue, Jul 25, 2017 at 12:36:12PM -0700, Paul E. McKenney wrote:
> [...]
> > 
> >> But it would not be hard for userspace code to force IPIs by
> >> repeatedly awakening higher-priority threads that sleep immediately
> >> after being awakened, right?
> > 
> > RT tasks are not readily available to !root, and the user might have
> > been constrained to a subset of the available CPUs.
> > 
> >> > Well, I'm not sure there is an easy means of doing machine-wide
> >> > IPIs for !root out there.  This would be a first.
> >> > 
> >> > Something along the lines of:
> >> > 
> >> > void dummy(void *arg)
> >> > {
> >> > 	/* IPIs are assumed to be serializing */
> >> > }
> >> > 
> >> > void ipi_mm(struct mm_struct *mm)
> >> > {
> >> > 	cpumask_var_t cpus;
> >> > 	int cpu;
> >> > 
> >> > 	zalloc_cpumask_var(&cpus, GFP_KERNEL);
> >> > 
> >> > 	for_each_cpu(cpu, mm_cpumask(mm)) {
> >> > 		struct task_struct *p;
> >> > 
> >> > 		/*
> >> > 		 * If the current task of @cpu isn't of this @mm, then
> >> > 		 * it needs a context switch to become one, which will
> >> > 		 * provide the ordering we require.
> >> > 		 */
> >> > 		rcu_read_lock();
> >> > 		p = task_rcu_dereference(&cpu_curr(cpu));
> >> > 		if (p && p->mm == mm)
> >> > 			__cpumask_set_cpu(cpu, cpus);
> >> > 		rcu_read_unlock();
> >> > 	}
> >> > 
> >> > 	on_each_cpu_mask(cpus, dummy, NULL, 1);
> >> > 	free_cpumask_var(cpus);
> >> > }
> >> > 
> >> > Would appear to be minimally invasive and only shoot at CPUs we're
> >> > currently running our process on, which greatly reduces the impact.
> >> 
> >> I am good with this approach as well, and I do very much like that it
> >> avoids IPIing CPUs that aren't running our process (at least in the
> >> common case).  But don't we also need added memory ordering?  It is
> >> sort of OK to IPI a CPU that just now switched away from our process,
> >> but not so good to miss IPIing a CPU that switched to our process just
> >> a little before sys_membarrier().
> > 
> > My thinking was that if we observe '!= mm', that CPU will have to do a
> > context switch in order to make it true.  That context switch will
> > provide the ordering we're after, so all is well.
> > 
> > Quite possibly there's a hole in it, but since I'm running on fumes,
> > someone needs to spell it out for me :-)
> > 
> >> I was intending to base this on the last few versions of a 2010 patch,
> >> but maybe things have changed:
> >> 
> >> https://marc.info/?l=linux-kernel&m=126358017229620&w=2
> >> https://marc.info/?l=linux-kernel&m=126436996014016&w=2
> >> https://marc.info/?l=linux-kernel&m=126601479802978&w=2
> >> https://marc.info/?l=linux-kernel&m=126970692903302&w=2
> >> 
> >> Discussion here:
> >> 
> >> https://marc.info/?l=linux-kernel&m=126349766324224&w=2
> >> 
> >> The discussion led to acquiring the runqueue locks, as there was
> >> otherwise a need to add code to the scheduler fastpaths.
> > 
> > TL;DR.. that's far too much to trawl through.
> > 
> >> Some architectures are less precise than others in tracking which
> >> CPUs are running a given process, due to ASIDs, though this is
> >> thought to be a non-problem:
> >> 
> >> https://marc.info/?l=linux-arch&m=126716090413065&w=2
> >> https://marc.info/?l=linux-arch&m=126716262815202&w=2
> >> 
> >> Thoughts?
> > 
> > Yes, there are architectures that only accumulate bits in
> > mm_cpumask(); with the additional check to see whether the remote
> > task belongs to our MM, this should be a non-issue.
> 
> This would implement a MEMBARRIER_CMD_PRIVATE_EXPEDITED (or some such)
> flag for an expedited process-local effect.  This differs from the
> "SHARED" flag, since the SHARED flag also affects threads accessing
> memory mappings shared across processes.
> 
> I wonder if we could create a MEMBARRIER_CMD_SHARED_EXPEDITED behavior
> by iterating over all memory mappings mapped into the current process
> and building a cpumask based on the union of all the mm masks
> encountered?  Then we could send the IPI to all CPUs belonging to that
> cpumask.  Or am I missing something obvious?
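Just so I am sure I understand the proposal, a rough and completely
untested sketch of that iteration might look like the following, at
least for file-backed shared mappings.  (The helper name is invented,
and this ignores shared anonymous memory, hugetlbfs, and CPUs that
start running one of these mm's after the scan.)

	static void shared_expedited_cpumask(struct cpumask *cpus)
	{
		struct mm_struct *mm = current->mm;
		struct vm_area_struct *vma;

		/* Start with the CPUs running our own mm. */
		cpumask_copy(cpus, mm_cpumask(mm));

		down_read(&mm->mmap_sem);
		for (vma = mm->mmap; vma; vma = vma->vm_next) {
			struct address_space *mapping;
			struct vm_area_struct *other;

			if (!(vma->vm_flags & VM_SHARED) || !vma->vm_file)
				continue;

			/* Union in every mm mapping this same file range. */
			mapping = vma->vm_file->f_mapping;
			i_mmap_lock_read(mapping);
			vma_interval_tree_foreach(other, &mapping->i_mmap,
					vma->vm_pgoff,
					vma->vm_pgoff + vma_pages(vma) - 1)
				cpumask_or(cpus, cpus,
					   mm_cpumask(other->vm_mm));
			i_mmap_unlock_read(mapping);
		}
		up_read(&mm->mmap_sem);
	}

The resulting mask could then be handed to on_each_cpu_mask() as in
Peter's sketch above.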
I suspect that something like this would work, but I agree with your
2010 self, who argued that this should be follow-on functionality.
After all, the user probably needs to be aware of who is sharing for
other reasons, and can then make each process do sys_membarrier().

							Thanx, Paul
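P.S.  For anyone wanting to experiment once something along these
lines lands, the userspace side of the private expedited command might
look as follows.  The command name and value are hypothetical until an
actual UAPI patch is merged, and error handling is minimal:

	#include <stdlib.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Hypothetical value -- use whatever the eventual UAPI assigns. */
	#ifndef MEMBARRIER_CMD_PRIVATE_EXPEDITED
	#define MEMBARRIER_CMD_PRIVATE_EXPEDITED	(1 << 3)
	#endif

	static int membarrier(int cmd, int flags)
	{
		return syscall(__NR_membarrier, cmd, flags);
	}

	/*
	 * Rare slow path:  execute a full memory barrier on every CPU
	 * currently running a thread of this process.
	 */
	static void slowpath_sync(void)
	{
		if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0))
			abort();	/* Command not supported here. */
	}

The point of the exercise being that the other threads' common-case
fast paths then need only a compiler barrier, with sys_membarrier()
supplying the expensive ordering on this rare slow path.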