Mike DeMarco wrote:
> IO bottlenecks are usually caused by a slow disk or one that has heavy
> workloads reading many small files. Two factors that need to be considered
> are head seek latency and spin latency. Head seek latency is the amount
> of time it takes for the head to move to the track that is to be written;
> this is an eternity for the system (usually around 4 or 5 milliseconds).
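To put numbers on that, here is a back-of-the-envelope sketch of per-I/O service time for a rotating disk. The 7200 RPM and ~4.5 ms average seek figures are illustrative assumptions, not from any particular datasheet:

```python
# Rough disk service-time model: average seek latency plus average
# rotational (spin) latency. Transfer time is ignored, which is a fair
# approximation for small random I/Os.

def avg_rotational_latency_ms(rpm: float) -> float:
    """Average spin latency: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000.0 / rpm
    return ms_per_revolution / 2

def avg_service_time_ms(rpm: float, avg_seek_ms: float) -> float:
    """Expected time to service one random I/O."""
    return avg_seek_ms + avg_rotational_latency_ms(rpm)

print(avg_rotational_latency_ms(7200))   # ~4.17 ms of spin latency
print(avg_service_time_ms(7200, 4.5))    # ~8.67 ms -> roughly 115 random IOPS
```

At ~8.7 ms per random I/O, a single spindle tops out at barely more than a hundred small random operations per second, which is why many-small-file workloads hurt so much.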
For most modern disks, writes are cached in a buffer, but reads still take
the latency hit. The trick is to ensure that writes are committed to media,
which ZFS will do for you.

> Spin latency is the amount of time it takes for the spindle to spin the
> track to be read or written over the head. Ideally you only want to pay
> the latency penalty once. If you have large reads and writes going to the
> disk then compression may help a little, but if you have many small reads
> or writes it will do nothing more than burden your CPU with a no-gain
> amount of work, since you are going to be paying Mr. Latency for each
> read or write.
>
> Striping several disks together with a stripe width that is tuned for
> your data model is how you could get your performance up. Striping has
> been left out of the ZFS model for some reason. While it is true that
> RAIDZ will stripe the data across a given drive set, it does not give
> you the option to tune the stripe width.

It is called "dynamic striping," and a write is not compelled to be spread
across all vdevs. This opens up an interesting rat-hole conversation about
whether stochastic spreading is always better than an efficient, larger
block write. Our grandchildren might still be arguing this when they enter
the retirement home. In general, for ZFS, the top-level dynamic stripe
interlace is 1 MByte, which seems to fit well with the 128 kByte block
size. YMMV.

> Due to the write performance problems of RAIDZ, you may not get a
> performance boost from its striping if your write-to-read ratio is too
> high, since the driver has to calculate parity for each write.

Write performance for raidz is generally quite good, better than most
other RAID-5 implementations, which are bitten by the read-modify-write
cycle (added latency). raidz can pay for this optimization when doing
small, random reads. TANSTAAFL.
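To make the read-modify-write point concrete, here is a toy disk-operation count comparing a classic RAID-5 small write against a full-stripe write (which is what raidz does, with variable stripe width). This is an illustrative model of operation counts only, not a simulation of any real implementation:

```python
# Why small writes hurt RAID-5: updating one block requires reading the
# old data and old parity before writing the new data and new parity
# (the read-modify-write cycle). A full-stripe write needs no reads at
# all: every data block plus the parity block is simply written out.

def raid5_small_write_ops() -> int:
    """Disk ops for a single-block RAID-5 write:
    read old data + read old parity + write new data + write new parity."""
    return 4

def full_stripe_write_ops(data_disks: int) -> int:
    """Disk ops for a full-stripe write across `data_disks` data disks:
    one write per data disk plus one parity write, zero reads."""
    return data_disks + 1

print(raid5_small_write_ops())    # 4 ops to update a single block
print(full_stripe_write_ops(4))   # 5 ops covering 4 blocks of new data
```

The trade-off the post alludes to shows up on the read side: because raidz stripes each block across the vdev, a small random read touches several disks, which is where it "pays" for its cheap writes.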
--
richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss