On Wed, 23 Mar 2005, Ingo Molnar wrote:

> 
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
> 
> > That callback will be queued on CPU#2 - while the task still keeps
> > current->rcu_data of CPU#1. It also means that CPU#2's read counter
> > did _not_ get increased - and a too short grace period may occur.
> > 
> > it seems to me that the only safe method is to pick an 'RCU CPU' when
> > first entering the read section, and then sticking to it, no matter
> > where the task gets migrated to. Or to 'migrate' the +1 read count
> > from one CPU to the other, within the scheduler.
> 
> i think the 'migrate read-count' method is not adequate either, because
> all callbacks queued within an RCU read section must be called after the
> lock has been dropped - while with the migration method CPU#1 would be
> free to process callbacks queued in the RCU read section still active on
> CPU#2.
> 

Hi Ingo,

Although you can't disable preemption for the duration of rcu_read_lock(),
what about pinning the process to a CPU while it holds the lock? Would
this help solve the migration issue?

-- Steve
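
For what it's worth, here's a rough sketch of what "pin the task to a CPU
while it holds the lock" could look like. It is only meant to make the
suggestion concrete, not to be a patch: the rcu_read_lock_pin() and
rcu_read_unlock_pin() names, the rcu_pin_cpu and rcu_pin_saved_mask fields
in task_struct and the rcu_active_readers per-CPU counter are all made up
here, nesting is not handled, and set_cpus_allowed() can sleep, so none of
this would work from atomic context.

#include <linux/sched.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/preempt.h>
#include <linux/smp.h>

/* made-up stand-in for the per-CPU read counter being discussed */
static DEFINE_PER_CPU(int, rcu_active_readers);

static void rcu_read_lock_pin(void)
{
	int cpu;

	/* remember the old affinity so it can be restored on unlock
	 * (rcu_pin_saved_mask is a made-up task_struct field) */
	current->rcu_pin_saved_mask = current->cpus_allowed;

	/* pin the task to whichever CPU it is running on; if it gets
	 * migrated between put_cpu() and set_cpus_allowed(), the new
	 * single-CPU affinity mask simply pulls it back to 'cpu' */
	cpu = get_cpu();
	put_cpu();
	set_cpus_allowed(current, cpumask_of_cpu(cpu));
	current->rcu_pin_cpu = cpu;	/* made-up task_struct field */

	/* only now bump the reader count, so the CPU whose counter is
	 * elevated is the CPU the task is stuck on; preemption is off
	 * so the non-atomic ++ can't race with other readers here */
	preempt_disable();
	per_cpu(rcu_active_readers, cpu)++;
	preempt_enable();
}

static void rcu_read_unlock_pin(void)
{
	int cpu = current->rcu_pin_cpu;

	preempt_disable();
	per_cpu(rcu_active_readers, cpu)--;
	preempt_enable();

	/* let the scheduler migrate the task freely again */
	set_cpus_allowed(current, current->rcu_pin_saved_mask);
}

The ordering is the point of the sketch: the task is pinned first and only
then bumps the reader count, so the counter that gets elevated belongs to
the CPU the task will stay on for the whole read section. It says nothing
about where callbacks queued with call_rcu() inside the section end up,
which is the part of the problem Ingo raised above.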