Bruce Evans wrote:
The sched_locks are now of course thread_locks.  I put the vm unlock
before the thread lock, since the above order seems to risk a LOR.  That
may have been a mistake -- we would prefer not to be switched after
deciding to do it ourselves.

No, there can't be such a LOR: thread_lock is a spin mutex, while vm_page_queue_free_mtx is a (sleep) mutex, so by our well-known locking rules you cannot acquire them in the opposite order.  Also, if you don't follow John's pattern, I think you get a race as well, since there is a breaking point in the protected path.

Index: vm_zeroidle.c
===================================================================
RCS file: /usr/cvs/src/sys/vm/vm_zeroidle.c,v
retrieving revision 1.45
diff -u -r1.45 vm_zeroidle.c
--- vm_zeroidle.c       18 May 2007 07:10:50 -0000      1.45
+++ vm_zeroidle.c       7 Jun 2007 14:58:39 -0000
@@ -110,8 +110,10 @@
       if (m != NULL && (m->flags & PG_ZERO) == 0) {
               vm_pageq_remove_nowakeup(m);
               mtx_unlock(&vm_page_queue_free_mtx);
+               critical_exit();
               pmap_zero_page_idle(m);
               mtx_lock(&vm_page_queue_free_mtx);
+               critical_enter();
               m->flags |= PG_ZERO;
               vm_pageq_enqueue(PQ_FREE + m->pc, m);
               ++vm_page_zero_count;

Next I will try this.  I put the critical_exit() before the vm unlock;
mtx_unlock() should be allowed to switch if it wants.  However, we
would prefer to keep context switches disabled in the region above -- just
drop the lock so that other CPUs can proceed.

This is good, since mtx_unlock() will force a preemption point here.

Attilio
_______________________________________________
cvs-all@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/cvs-all