On Mon, Feb 18, 2013 at 06:06:42PM +0100, Svatopluk Kraus wrote:
> On Mon, Feb 18, 2013 at 4:08 PM, Konstantin Belousov
> <[email protected]> wrote:
> > On Mon, Feb 18, 2013 at 01:44:35PM +0100, Svatopluk Kraus wrote:
> >> Hi,
> >>
> >> the access to sysmaps_pcpu[] should be atomic with respect to
> >> thread migration. Otherwise, a sysmaps for one CPU can be stolen by
> >> another CPU and the purpose of per-CPU sysmaps is broken. A patch is
> >> enclosed.
> > And, what are the problems caused by the 'otherwise'?
> > I do not see any.
>
> The 'otherwise' issue is the following:
>
> 1. A thread is running on CPU0.
>
>        sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>
> 2. The sysmaps variable now contains a pointer to the 'CPU0' sysmaps.
> 3. The thread migrates to CPU1.
> 4. However, the sysmaps variable still contains a pointer to the
> 'CPU0' sysmaps.
>
>        mtx_lock(&sysmaps->lock);
>
> 5. The thread running on CPU1 has locked the 'CPU0' sysmaps mutex, so
> it can uselessly block another thread running on CPU0. Maybe that is
> not a problem. However, it definitely goes against the reason the
> sysmaps (one for each CPU) exist.
So what?
>
>
> > Really, taking the mutex while bound to a CPU could be deadlock-prone
> > under some situations.
> >
> > This was discussed at least one more time. Perhaps a comment saying
> > that there is no issue should be added.
>
> I missed the discussion. Can you point me to it, please? A deadlock is
> not a problem here; however, I could be wrong, as I can't see right now
> how simple pinning could lead to a deadlock at all.
Because some other load on the bound CPU might prevent the thread from
being scheduled.
>
> >>
> >> Svata
> >>
> >> Index: sys/i386/i386/pmap.c
> >> ===================================================================
> >> --- sys/i386/i386/pmap.c (revision 246831)
> >> +++ sys/i386/i386/pmap.c (working copy)
> >> @@ -4146,11 +4146,11 @@
> >> {
> >> struct sysmaps *sysmaps;
> >>
> >> + sched_pin();
> >> sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
> >> mtx_lock(&sysmaps->lock);
> >> if (*sysmaps->CMAP2)
> >> panic("pmap_zero_page: CMAP2 busy");
> >> - sched_pin();
> >> *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
> >> pmap_cache_bits(m->md.pat_mode, 0);
> >> invlcaddr(sysmaps->CADDR2);
> >> @@ -4171,11 +4171,11 @@
> >> {
> >> struct sysmaps *sysmaps;
> >>
> >> + sched_pin();
> >> sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
> >> mtx_lock(&sysmaps->lock);
> >> if (*sysmaps->CMAP2)
> >> panic("pmap_zero_page_area: CMAP2 busy");
> >> - sched_pin();
> >> *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
> >> pmap_cache_bits(m->md.pat_mode, 0);
> >> invlcaddr(sysmaps->CADDR2);
> >> @@ -4220,13 +4220,13 @@
> >> {
> >> struct sysmaps *sysmaps;
> >>
> >> + sched_pin();
> >> sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
> >> mtx_lock(&sysmaps->lock);
> >> if (*sysmaps->CMAP1)
> >> panic("pmap_copy_page: CMAP1 busy");
> >> if (*sysmaps->CMAP2)
> >> panic("pmap_copy_page: CMAP2 busy");
> >> - sched_pin();
> >> invlpg((u_int)sysmaps->CADDR1);
> >> invlpg((u_int)sysmaps->CADDR2);
> >> *sysmaps->CMAP1 = PG_V | VM_PAGE_TO_PHYS(src) | PG_A |
> >> @@ -5072,11 +5072,11 @@
> >> vm_offset_t sva, eva;
> >>
> >> if ((cpu_feature & CPUID_CLFSH) != 0) {
> >> + sched_pin();
> >> sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
> >> mtx_lock(&sysmaps->lock);
> >> if (*sysmaps->CMAP2)
> >> panic("pmap_flush_page: CMAP2 busy");
> >> - sched_pin();
> >> *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) |
> >> PG_A | PG_M | pmap_cache_bits(m->md.pat_mode, 0);
> >> invlcaddr(sysmaps->CADDR2);
> >> _______________________________________________
> >> [email protected] mailing list
> >> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> >> To unsubscribe, send any mail to "[email protected]"