Hi,

  I've been looking at a performance issue on my system: it's FreeBSD 8.1, zpool 
version 14 (I hope I'm allowed to ask about FreeBSD here?!). Anyway, I think it's 
quite a general question. The pool contains a single 2-disk mirror, using 5400 
RPM SATA drives.

I am monitoring the percent busy, and the data in/out, on the physical disks, 
and I often see close to 100% busy. That's OK, these things can happen! ;). 
What I don't understand is why zpool iostat shows very low IO during these busy 
times. Over a 10-second period the physical disks are close to 100% busy and 
show about 5MB/sec of IO, yet over the same period zpool iostat shows something 
like this:

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mx0         52.9G   691G     75      0   195K      0
mx0         52.9G   691G    178      0   430K      0
mx0         52.9G   691G    200      1   513K  35.9K
mx0         52.9G   691G    121      6   301K   688K
mx0         52.9G   691G    207      0   526K      0
mx0         52.9G   691G    281      0   708K      0
mx0         52.9G   691G    249      0   676K      0
mx0         52.9G   691G    191      0   518K      0
mx0         52.9G   691G    169      0   452K   128K
mx0         52.9G   691G    144      2   363K   383K
mx0         52.9G   691G    140      0   365K      0
mx0         52.9G   691G    151      0   395K   128K

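(For reference, I'm collecting these numbers with something like the following 
— gstat for the physical-disk busy figures, zpool iostat for the pool; the 
device names are just what my disks happen to be called:)

```shell
# Per-disk %busy and throughput, refreshed every second (FreeBSD):
gstat -f 'ad[46]'

# Pool-level IO for the same window, one sample per second:
zpool iostat mx0 1
```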

So this is showing mostly reads, but nothing very taxing. Additionally, my ARC 
is not being fully utilised:

ARC Size:
        Current Size:                   28.75%  603.23M (arcsize)
        Target Size: (Adaptive)         30.35%  636.68M (c)
        Min Size (Hard Limit):          12.50%  262.26M (c_min)
        Max Size (High Water):          ~8:1    2098.08M (c_max)

So more ARC isn't going to help me.
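(Those figures are from arc_summary; as I understand it, the same raw values 
can be read straight from the kstat sysctls on FreeBSD, e.g.:)

```shell
# Current ARC size, adaptive target, and hard limits, in bytes:
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.c \
       kstat.zfs.misc.arcstats.c_min \
       kstat.zfs.misc.arcstats.c_max
```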

My goal is to work out the best way to improve my system's performance: more 
RAM, more disks, dedicated log (ZIL) or L2ARC devices, etc. The load appears in 
zpool iostat to be mostly reads, but my ARC isn't even fully utilised, so that 
seems odd to me.

I know I don't have high-end hardware, and only 2 disks. But I'd like to 
understand the apparent disparity between the physical disk IO and the zpool 
iostat output. That way I can better decide whether I should, for example, add 
2x SSDs for the ZIL (and L2ARC if it were needed, but it seems not to be), or 
replace the existing disks with 2x 2-way mirrors on slightly better 7200 RPM 
SATA drives.
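(If the SSDs did turn out to be the right move, I gather the commands would 
look something like this — the ada2/ada3/ada4 device names are just 
placeholders for whatever the SSDs appear as:)

```shell
# Mirrored separate log (SLOG) devices for the ZIL:
zpool add mx0 log mirror ada2 ada3

# Optional L2ARC cache device (no redundancy needed; contents are disposable):
zpool add mx0 cache ada4
```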

Anyone able to throw some light on what might be going on under the hood?

thanks! Andy.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss