Hi all!

First off, if this has been discussed, please point me in that direction. I have searched high and low and really can't find much info on the subject.

We have a large-ish (200 GB) UFS file system on a Sun Enterprise 250 that is being shared with Samba (lots of files, mostly random IO). The OS is Solaris 10u3. The disk set is 7x 36 GB 10k RPM SCSI drives, 4 internal and 3 external.

For several reasons we currently need to stay on UFS and can't switch to ZFS proper. So instead we have opted for UFS on a zvol backed by a raidz pool, in lieu of UFS on an SVM RAID-5 volume (we want/need RAID protection). This decision was made because zpools make the disk set easy to move between machines, and also because of the [assumed] performance benefit vs. SVM.
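
For reference, the setup looks roughly like this (just a sketch; the pool name, zvol name, disk names and sizes here are made up, not our actual ones):

    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c1t0d0 c1t1d0 c1t2d0
    zfs create -V 190g tank/ufsvol            # zvol carved out of the raidz pool
    newfs /dev/zvol/rdsk/tank/ufsvol          # UFS on top of the zvol
    mount /dev/zvol/dsk/tank/ufsvol /export/share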

Anyways, I've been pondering the volblocksize parameter and trying to figure out how it interacts with UFS. When the zvol was set up, I took the default 8k size. Since UFS uses an 8k block size, this seemed a reasonable choice. I've been thinking more about it lately, and have also read that UFS will do reads and writes in larger-than-8k chunks when it can, up to maxcontig blocks (default of 16, i.e. 128k).
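
In case it helps, this is how I've been checking the values in question (zvol and device names are again made up):

    zfs get volblocksize tank/ufsvol             # 8K, set when the zvol was created
    fstyp -v /dev/zvol/rdsk/tank/ufsvol | head   # UFS superblock dump shows bsize 8192, maxcontig 16
    mkfs -F ufs -m /dev/zvol/rdsk/tank/ufsvol    # echoes the parameters the file system was built with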

This presented me with several questions: Would a volblocksize of 128k with maxcontig at 16 provide better UFS performance? Overall, or only in certain situations (i.e. only for sequential IO)? Would increasing maxcontig beyond 16 make any difference (good, bad or indifferent) if the underlying device is limited to 128k blocks?
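
If it matters, the experiment I have in mind is roughly the following (a sketch only, names made up; as I understand it volblocksize can only be set at zvol creation time, while maxcontig can be set at newfs time or changed later with tunefs):

    zfs create -V 190g -o volblocksize=128k tank/ufsvol128   # new zvol with 128k blocks
    newfs -C 16 /dev/zvol/rdsk/tank/ufsvol128                # maxcontig 16 (16 x 8k = 128k)
    tunefs -a 32 /dev/zvol/rdsk/tank/ufsvol128               # ...or raise maxcontig later, e.g. to 32 (256k)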

What exactly does volblocksize control? My observations thus far indicate that it simply sets a maximum block size for the [virtual] zvol device. Changing volblocksize does NOT seem to have an impact on IOs to the underlying physical disks, which always seem to float in the 50-110k range. How does volblocksize affect IO that is not of a set block size?
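
For what it's worth, those observations come from watching the pool and disks like this (average IO size is just the KB/s divided by the ops/s that iostat reports):

    zpool iostat -v tank 5   # per-vdev bandwidth and ops
    iostat -xn 5             # per-disk r/s, w/s, kr/s, kw/s; e.g. avg write size = kw/s divided by w/s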

Finally, why does volblocksize only affect raidz and mirror devices? It seems to have no effect on 'simple' devices, even though I presume striping is still used there. That is also assuming that volblocksize interacts with striping at all.

Any answers or input would be greatly appreciated.

Thanks much!
-Brian

--
---------------------------------------------------
Brian H. Nelson         Youngstown State University
System Administrator   Media and Academic Computing
             bnelson[at]cis.ysu.edu
---------------------------------------------------
