On Wed, Dec 12, 2007 at 10:54:29AM -0600, Scott Wood wrote:
> Anton Vorontsov wrote:
> >On Wed, Dec 12, 2007 at 10:40:35AM -0600, Scott Wood wrote:
> >>Not enough to be worth the complexity compared to the overhead of NAND
> >>access -- especially in the likely case of a non-SMP build.
> >
> >I'm allowing UPM access from the IRQ handlers (because nothing prevents
> >this, so why deny?). Thus locks are needed even on non-SMP build,
>
> No, it just needs to disable interrupts.
> Which is what locks do on non-SMP.
> The overhead of this is not worth 30 lines of code to avoid.
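Just so we're comparing the same thing: with the lock taken around the whole pattern sequence, the NAND path would look roughly like the sketch below. All names and prototypes here are invented for illustration; this is not code from the patch.

/*
 * Sketch only: the NAND cmdfunc path with a lock held (and IRQs off)
 * around the whole pattern sequence.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct fsl_upm;		/* opaque here; made-up name */

/* Stand-in for the real helper; each call ends with an out_be32(), i.e. a sync. */
void fsl_upm_run_pattern(struct fsl_upm *upm, void __iomem *io_base, u32 mar);

static DEFINE_SPINLOCK(fsl_upm_lock);

static void fun_cmdfunc(struct fsl_upm *upm, void __iomem *io_base,
			u32 cmd, u32 addr1, u32 addr2)
{
	unsigned long flags;

	/*
	 * On a non-SMP build the lock itself compiles away, so this pair
	 * is effectively just local_irq_save()/local_irq_restore().
	 */
	spin_lock_irqsave(&fsl_upm_lock, flags);

	/* Up to three indirect run_pattern calls, all with local IRQs off. */
	fsl_upm_run_pattern(upm, io_base, cmd);
	fsl_upm_run_pattern(upm, io_base, addr1);
	fsl_upm_run_pattern(upm, io_base, addr2);

	spin_unlock_irqrestore(&fsl_upm_lock, flags);
}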
Well, speaking of overhead: there could be a lot of fsl_upm_run_pattern calls between _start and _end. In the NAND case it's at most 3, and they're indirect (the NAND layer calls them via the cmdfunc pointer, and cmdfunc calls run_pattern). Each out_X is another sync, and all that time we're holding a lock with local IRQs disabled.

The FSL UPM infrastructure isn't only for NAND, though, so I can imagine use cases with more run_pattern calls between start and end.

As a compromise I'd suggest this: forbid pattern_start/pattern_end from ISRs (by marking them with might_sleep()), and replace the _irqsave spinlock with a plain spinlock -- see the sketch at the end of this mail. That way we lose nothing on UP, but on SMP we still pay the locking overhead even when only a single UPM is in use. :-/

Given that, personally I'd prefer the lockless variant to stay. So, do you still want to get rid of it?

Much thanks,

--
Anton Vorontsov
email: [EMAIL PROTECTED]
backup email: [EMAIL PROTECTED]
irc://irc.freenode.net/bd2
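Here's the sketch I mentioned above -- untested, with invented structure and register names, just to show where might_sleep() and the plain spinlock would go:

#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <asm/io.h>

/* Hypothetical UPM handle; the real structure will differ. */
struct fsl_upm {
	u32 __iomem *mar;	/* UPM address register */
};

/* One lock shared by all UPM users; protects the UPM registers across a pattern. */
static DEFINE_SPINLOCK(fsl_upm_lock);

void fsl_upm_start_pattern(struct fsl_upm *upm)
{
	/*
	 * Pattern runs are not allowed from ISRs/atomic context any more,
	 * so a plain spin_lock() is enough -- no need to disable IRQs.
	 */
	might_sleep();
	spin_lock(&fsl_upm_lock);
}

void fsl_upm_end_pattern(struct fsl_upm *upm)
{
	spin_unlock(&fsl_upm_lock);
}

void fsl_upm_run_pattern(struct fsl_upm *upm, void __iomem *io_base, u32 mar)
{
	/*
	 * Caller holds fsl_upm_lock via fsl_upm_start_pattern(), so two
	 * UPM users can't interleave their MAR writes and dummy accesses.
	 */
	out_be32(upm->mar, mar);	/* register layout is made up here */
	out_8(io_base, 0);		/* dummy access triggers the pattern */
}

Even with a single UPM in use, every start/run/end still goes through that one lock on SMP, which is the overhead I mean above.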