I have a customer using ZFS in production, and he's opening some files with
the O_SYNC flag. This affects subsequent write(2) calls by providing
synchronized I/O file integrity completion: each write(2) waits for both
the file data and the file status to be physically updated. Because of
this, he's seeing delays on the file writes, which we verified with
DTrace. He's already got a storage array with a read/write cache.
What does ZFS add on top of this O_SYNC flag? Is ZFS doing
some caching of its own, too? Are there settings we got by default when we
created the ZFS pools that already give us the equivalent of O_SYNC? Is
there something we should consider turning on or off with regard to ZFS?
My feeling is that, in trying to ensure these writes go all the way to
the disk, we may have gone overboard with one or more of the following:
* setting O_SYNC on the file open(2) so that the writes are synchronous
* using ZFS
* using a storage array with a battery-backed read/write cache
Can we eliminate one or more of these and still get the file integrity
we want?
PRD;IANOTA
Regards,
Pat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss