Matty's email of 8/8/2005 3:52 PM said:
Howdy,

While reading through Solaris Internals this weekend, I came to the
section on UFS direct I/O. The book states that random and large
sequential workloads benefit from direct I/O. Does anyone happen
to know how big a "large sequential" I/O needs to be to benefit from
direct I/O? Are there any advantages to using direct I/O with volumes
devoted to Oracle redo/undo and archive logs? I have read that
it is best to avoid direct I/O with redo/undo, since the file system
will cluster small writes and boost total throughput (especially
during log switches). I have also read that, due to the transient
nature of redo/undo, the CPU and memory resources devoted to creating
the pages would be wasted, since those pages would not be re-used for
future reads/writes.

Has anyone sat down and looked at direct I/O in depth? Any idea which
workloads (if any) work best with redo/undo on UFS direct I/O file
systems? If there is a set of documentation that explains this, please
let me know.

Thanks,
- Ryan

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org



There is an easy way to think about this in the case of Oracle: use
direct I/O anywhere Oracle uses O_DSYNC. This rule of thumb holds
99% of the time. It means data, redo, and control files all get
direct I/O, and archive logs do not.

The presence of O_DSYNC causes UFS to "break" all of the rules you
are familiar with. For instance, there is no write clustering with
O_DSYNC and buffered I/O. This is the configuration I use on
everything from smallish systems up to fully loaded 25Ks.

Be on the lookout in the (hopefully) near future for a fix to direct I/O that will make it behave the way it really should ;) I'll give details later.

Thanks,

Jarod
