On 07/18/2013 10:54 PM, David Herrmann wrote:
> Hi
>
> On Thu, Jul 18, 2013 at 1:24 PM, Thomas Hellstrom <thellstrom at vmware.com> wrote:
...
>>
>> I think that if there are good reasons to keep locking internal, I'm fine
>> with that (and also, of course, with Daniel's proposal). Currently the
>> add / remove / lookup paths are mostly used by TTM during object creation
>> and destruction.
>>
>> However, if the lookup path is ever used by pread / pwrite, that situation
>> might change and we would then like to minimize the locking.
>
> I tried to keep the change as minimal as I could. Follow-up patches
> are welcome. I just thought pushing the lock into drm_vma_* would
> simplify things. If there are benchmarks that prove me wrong, I'll
> gladly spend some time optimizing that.

In the general case, one reason for designing the locking outside of
utilities like this is that different callers may have different
requirements. For example, a call path may be known not to be multithreaded
at all, or the caller may prefer a mutex over a spinlock for various
reasons. It might also be that some callers will want to use RCU locking in
the future if the lookup path becomes busy, and that would require *all*
users to adapt to RCU object destruction...

I haven't looked at the code closely enough to say whether any of this
applies in this particular case, though.

Thanks,
Thomas

> Thanks
> David