On Fri, Jul 31, 2015 at 03:08:09PM +0300, Vasiliy Tolstov wrote:
> 2015-07-31 14:55 GMT+03:00 Vasiliy Tolstov <v.tols...@selfip.ru>:
> > Liu's patch also works for me. But also like in Hitoshi patch breaks
> > when using discards in qemu =(.
> >
> Please wait to performance comparison. As i see Liu's patch may be
> more slow then Hitoshi.
>
Thanks for your time! As far as I can tell, my patch should actually be slightly better performance-wise, because it preserves the parallelism of requests.

Because of the scatter-gather nature of IO requests, take the following pattern as an illustration:

  req1 is split into 2 sheep requests: create(2), create(10)
  req2 is split into 2 sheep requests: create(5), create(100)

So there are 4 sheep requests in total. With my patch they will run in parallel on the sheep cluster and only 4 unrefs of objects will be executed internally:

  update_inode(2), update_inode(10), update_inode(5), update_inode(100)

With Hitoshi's patch, however, req1 and req2 will be serialized: only after one request finishes will the other be sent to sheep, and 9 + 96 = 105 unrefs of objects will be executed internally. There is also still a chance of data corruption, because update_inode(2,10) and update_inode(5,100) will both update the range [5,10]; that is a potential problem if the overlapped range holds different values when the requests are queued with stale data.

This is really a bug that has been around for several years: we should update the inode bits exactly for the objects we create, not touch the bits we don't create at all. It has gone unnoticed for so long because most of the time min == max in create_inode(min, max), and before we introduced generation reference counting for the snapshot reference mechanism, updating an inode bit with 0 would not cause a remove request in sheepdog.

I'm also concerned about a completely new mechanism, since the current request handling has proven solid over time; it has existed for years. A completely new implementation might need a long time to stabilize, and we would have to fix side effects we don't yet know about.

Thanks,
Yuan
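
For anyone who wants the arithmetic above spelled out, here is a small standalone sketch. The helper names below are hypothetical and are not the real QEMU or sheepdog code; the sketch only counts how many inode-bit updates each strategy would issue for the example requests.

/*
 * Illustration only: these helpers are hypothetical, not the actual
 * QEMU or sheepdog functions.  They count how many inode-bit updates
 * each strategy issues for the example above, where req1 creates
 * objects 2 and 10 and req2 creates objects 5 and 100.
 */
#include <stdio.h>

/* Per-object strategy: touch exactly the inode indices that were created. */
static int updates_per_object(int nr_created)
{
    return nr_created;          /* one update_inode(idx) per created object */
}

/* Range strategy: touch every inode index in [min, max] for a request,
 * whether the object was created by this request or not. */
static int updates_for_range(int min, int max)
{
    return max - min + 1;
}

int main(void)
{
    /* req1: create(2), create(10);  req2: create(5), create(100) */
    int per_object = updates_per_object(2) + updates_per_object(2);        /* 2 + 2  = 4   */
    int per_range  = updates_for_range(2, 10) + updates_for_range(5, 100); /* 9 + 96 = 105 */

    printf("per-object inode updates:  %d\n", per_object);
    printf("whole-range inode updates: %d\n", per_range);
    /* The ranges [2,10] and [5,100] overlap on [5,10], which is where
     * the stale-data / corruption concern comes from. */
    return 0;
}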