This may not be a ZFS issue, so please bear with me!

I have 4 internal drives that I have striped/mirrored with ZFS, and an 
application server that is reading/writing hundreds of thousands of files 
on the pool, thousands of files at a time.
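
For reference, the pool is two mirrored pairs striped together; from memory, 
it was created with something like this (device names match the iostat 
output further down):

# zpool create pool1 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0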

If 1 client uses the app server, the transaction (reading/writing ~80 files) 
takes about 200 ms.  If I have about 80 clients attempting it at once, it can 
sometimes take a minute or more.  I'm pretty sure it's a file I/O bottleneck, 
so I want to make sure ZFS is tuned properly for this kind of usage.

The only thing I could think of, so far, is to turn off ZFS compression.  
If I have the syntax right, that would be something like this (assuming 
pool1 is the dataset the app server writes to):
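
# zfs get compression pool1      # check the current setting first
# zfs set compression=off pool1   # only affects newly written blocks

Is there anything else I can do?  Here is my "zpool iostat" output: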

# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool1       5.69G   266G     23     76  1.44M  2.24M
pool1       5.69G   266G     96    259  5.70M  7.25M
pool1       5.69G   266G     98    267  5.73M  7.32M
pool1       5.69G   266G     92    253  5.76M  7.31M
pool1       5.69G   266G     90    254  5.67M  7.43M

and here is regular iostat:

# iostat -xnz 5
                 extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.2    0.0    0.1  0.0  0.0    0.0    0.3   0   0 c0t0d0
    0.0    0.2    0.0    0.1  0.0  0.0    0.0    0.3   0   0 c0t1d0
   20.4  145.0 1315.8 3714.5  0.0  2.8    0.0   16.8   0  21 c0t2d0
   21.4  143.2 1380.2 3711.3  0.0  4.1    0.0   25.1   0  27 c0t3d0
   23.4  138.4 1509.3 3693.0  0.0  1.6    0.0    9.8   0  17 c0t4d0
   20.8  137.8 1341.6 3693.0  0.0  2.3    0.0   14.7   0  21 c0t5d0
 
 