Hi:
        I want to ask a question about the CEPH_IOC_SYNCIO flag.
        I know that when the O_SYNC or O_DIRECT flag is used, the write call 
takes a different code path (one for each flag) than it does with the 
CEPH_IOC_SYNCIO flag.
        I found the following comment about CEPH_IOC_SYNCIO:

        /*
         * CEPH_IOC_SYNCIO - force synchronous IO
         *
         * This ioctl sets a file flag that forces the synchronous IO that
         * bypasses the page cache, even if it is not necessary.  This is
         * essentially the opposite behavior of IOC_LAZYIO.  This forces the
         * same read/write path as a file opened by multiple clients when one
         * or more of those clients is opened for write.
         *
         * Note that this type of sync IO takes a different path than a file
         * opened with O_SYNC/D_SYNC (writes hit the page cache and are
         * immediately flushed on page boundaries).  It is very similar to
         * O_DIRECT (writes bypass the page cache) except that O_DIRECT writes
         * are not copied (user page must remain stable) and O_DIRECT writes
         * have alignment restrictions (on the buffer and file offset).
         */
        #define CEPH_IOC_SYNCIO _IO(CEPH_IOCTL_MAGIC, 5)
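
        For reference, this is roughly how I set the flag in my test program. 
It is only a sketch: the CEPH_IOCTL_MAGIC and CEPH_IOC_SYNCIO defines are 
copied from fs/ceph/ioctl.h (I did not find an installed userspace header for 
them), and the mount path is just an example.

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>

        /* copied from fs/ceph/ioctl.h */
        #define CEPH_IOCTL_MAGIC 0x97
        #define CEPH_IOC_SYNCIO  _IO(CEPH_IOCTL_MAGIC, 5)

        int main(void)
        {
            /* example path; my real mount point differs */
            int fd = open("/mnt/cephfs/testfile", O_RDWR | O_CREAT, 0644);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            /* set the per-file flag; later reads/writes bypass the page cache */
            if (ioctl(fd, CEPH_IOC_SYNCIO) < 0) {
                perror("ioctl(CEPH_IOC_SYNCIO)");
                close(fd);
                return 1;
            }

            char buf[4096] = "hello";
            /* this write blocks until the OSD acknowledges it */
            if (write(fd, buf, sizeof(buf)) < 0)
                perror("write");

            close(fd);
            return 0;
        }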

        My questions are: 
        1. "This forces the same read/write path as a file opened by multiple 
clients when one or more of those clients is opened for write." -- Does this 
mean that multiple clients all take the same code path when every one of them 
uses the CEPH_IOC_SYNCIO flag? And does using CEPH_IOC_SYNCIO on every client 
have effects such as cache coherency and performance changes?
        2."...except that O_DIRECT writes are not copied (user page must remain 
stable)" -- As I know when threads write with CEPH_IOC_SYNCIO flag, the write 
call will block until ceph osd and mds send back responses. So even with 
CEPH_IOC_SYNCIO flag(the user pages are not locked here, I guess), but the user 
cannot use these pages. How can the use of CEPH_IOC_SYNCIO flag make better use 
of user space memory?
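
        To make question 2 concrete, this is how I picture the two paths 
(only a sketch; the 4096-byte alignment, the buffer sizes and the helper names 
are my own assumptions, not taken from the documentation):

        #define _GNU_SOURCE            /* for O_DIRECT */
        #include <stdlib.h>
        #include <string.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* O_DIRECT: buffer and offset must be aligned, and the user pages are
         * sent directly, so they must stay untouched until write() returns. */
        ssize_t write_o_direct(const char *path, const char *data, size_t len)
        {
            int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
            if (fd < 0)
                return -1;

            void *buf;
            if (posix_memalign(&buf, 4096, 4096) != 0) {
                close(fd);
                return -1;
            }
            memset(buf, 0, 4096);
            memcpy(buf, data, len < 4096 ? len : 4096);

            ssize_t n = write(fd, buf, 4096);   /* aligned length required */
            free(buf);
            close(fd);
            return n;
        }

        /* CEPH_IOC_SYNCIO: any buffer works because the data is copied by the
         * kernel, but the call still blocks until the OSD replies -- which is
         * why I do not see when the caller could reuse the buffer earlier. */
        ssize_t write_syncio(int fd_with_syncio_flag, const char *data, size_t len)
        {
            return write(fd_with_syncio_flag, data, len);
        }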

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
