On Tue, 11 Jan 2011, Andy wrote:

> I am monitoring the percent busy and the data in/out on the physical disks, and I am seeing close to 100% busy often. That's OK, these things can happen! ;). The thing I don't understand is why, during these busy times, zpool iostat shows very low IO. During a 10-second period the physical disks are close to 100% busy and show about 5MB/sec of IO, yet during the same period zpool shows something like this:

What function is the system performing when it is so busy?

> So this is showing mostly reads, but nothing very taxing. Additionally, my ARC cache is not being fully utilised:

> ARC Size:
>        Current Size:                   28.75%  603.23M (arcsize)
>        Target Size: (Adaptive)         30.35%  636.68M (c)
>        Min Size (Hard Limit):          12.50%  262.26M (c_min)
>        Max Size (High Water):          ~8:1    2098.08M (c_max)
>
> So more ARC isn't going to help me.

Wrong conclusion. I am not sure what those percentages are relative to (total RAM?), but 603MB is a very small ARC. FreeBSD pre-assigns kernel memory for ZFS, so it is not dynamically shared with the kernel as it is on Solaris. ZFS tuning on FreeBSD is quite platform-specific: even if you have a huge amount of physical RAM, you may need to tune FreeBSD to actually use it. Post a query to freebsd...@freebsd.org if the FreeBSD ZFS tuning pages don't reveal what you need.
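For example, on FreeBSD the ARC limits are normally set at boot time in /boot/loader.conf. The values below are only a sketch for a machine with around 8GB of RAM; pick numbers to suit your own hardware, and check the FreeBSD ZFS tuning pages for what your release actually needs:

   # /boot/loader.conf -- illustrative values, not a recommendation
   vm.kmem_size="6G"        # kernel memory available to ZFS (needed on older releases)
   vfs.zfs.arc_max="4G"     # upper bound on the ARC
   vfs.zfs.arc_min="1G"     # lower bound on the ARC

A reboot is needed for loader.conf changes to take effect.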

> My goal is to work out the best way to improve my system performance: more RAM, more disks, dedicated log or L2ARC devices, etc. The load in zpool iostat appears to be mostly reads, but my ARC isn't even fully utilised, so that seems an odd one to me.

The ARC is "adaptive", so you should not assume that its objective is to absorb your entire hard drive. It should not want to cache data which is rarely accessed. Regardless, your ARC size may actually be constrained by the default FreeBSD kernel tunings.
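To check whether the kernel limits, rather than the workload, are what is capping the ARC, the current values can be inspected at run time (a sketch; these sysctl names come from the FreeBSD ZFS port and may differ slightly between releases):

   # the bounds the kernel will allow
   sysctl vfs.zfs.arc_max vfs.zfs.arc_min
   # what the ARC is actually using and targeting right now
   sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c

If arcstats.size sits well below vfs.zfs.arc_max, the ARC is small because ZFS has not found more data worth caching; if it is pinned at the limit, raise the limit.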

> I know I don't have high-end hardware, and only 2 disks, but I'd like to understand the apparent disparity between the physical disk IO and the zpool iostat output. That way I can better decide whether I should, for example, add 2x SSDs for the ZIL (and L2ARC if it were needed, but it seems not to be), or replace the existing disks with 2x 2-way mirrors of slightly better 7,200 RPM SATA drives.

The type of drives you are using has very poor seek performance, so higher-RPM drives would surely help. Stuffing a lot more memory into your system, and adjusting the kernel so that ZFS can use a lot more of it, is likely to help dramatically. ZFS loves memory.
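One way to see the seek-bound behaviour directly (a sketch using stock FreeBSD tools; the pool name "tank" is a placeholder for your own) is to watch the physical disks and the pool over the same window:

   # per-disk view, 10-second samples: %b near 100 with only a few MB/s
   # of transfers means the heads are seeking, not streaming
   iostat -x 10
   # pool-level view over the same 10-second interval
   zpool iostat -v tank 10

Near-100% busy with low throughput means the time is going into head movement for small random reads, which is exactly the pattern that a larger ARC or faster-seeking drives would relieve.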

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
