On Jan 3, 2013, at 12:33 PM, Eugen Leitl <eu...@leitl.org> wrote:

> On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
>> 
>> Happy $holidays,
>> 
>> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> 
> Just a little update on the home NAS project.
> 
> I've set the pool sync to disabled, and added a couple of
> Intel SSDs:
> 
>       8. c4t1d0 <ATA-INTELSSDSA2M080-02G9 cyl 11710 alt 2 hd 224 sec 56>
>          /pci@0,0/pci1462,7720@11/disk@1,0
>       9. c4t2d0 <ATA-INTELSSDSA2M080-02G9 cyl 11710 alt 2 hd 224 sec 56>
>          /pci@0,0/pci1462,7720@11/disk@2,0

Setting sync=disabled means your log SSDs (slogs) will not be used.
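
To actually exercise the slog, sync writes need to be enabled again;
a minimal sketch, assuming the tank0 pool named in the status output
below (sync=standard is the default; sync=always forces every write
through the ZIL):

# zfs set sync=standard tank0
# zfs get sync tank0
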
 -- richard

> 
> I had no clue what the partition names were (created with the
> napp-it web interface, roughly 5% log and 95% cache of the 80 GByte),
> so I ran iostat -xnp:
> 
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>    1.4    0.3    5.5    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0
>    0.1    0.0    3.7    0.0  0.0  0.0    0.0    0.5   0   0 c4t1d0s2
>    0.1    0.0    2.6    0.0  0.0  0.0    0.0    0.5   0   0 c4t1d0s8
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.2   0   0 c4t1d0p0
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p1
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p2
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p3
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p4
>    1.2    0.3    1.4    0.0  0.0  0.0    0.0    0.0   0   0 c4t2d0
>    0.0    0.0    0.6    0.0  0.0  0.0    0.0    0.4   0   0 c4t2d0s2
>    0.0    0.0    0.7    0.0  0.0  0.0    0.0    0.4   0   0 c4t2d0s8
>    0.1    0.0    0.0    0.0  0.0  0.0    0.0    0.2   0   0 c4t2d0p0
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t2d0p1
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t2d0p2
> 
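
Naming note: the sN device nodes are Solaris VTOC slices, the pN
nodes are fdisk partitions, and on x86 p0 addresses the entire disk
rather than the first fdisk partition. To see how a disk is carved
up, one option (assuming the same c4t1d0 device as above) is:

# prtvtoc /dev/rdsk/c4t1d0s2

which prints the VTOC slice table via the conventional whole-disk
slice s2.
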
> then issued
> 
> # zpool add tank0 cache /dev/dsk/c4t1d0p1 /dev/dsk/c4t2d0p1
> # zpool add tank0 log mirror /dev/dsk/c4t1d0p0 /dev/dsk/c4t2d0p0
> 
> which resulted in 
> 
> root@oizfs:~# zpool status
>  pool: rpool
> state: ONLINE
>  scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jan  2 21:09:23 2013
> config:
> 
>        NAME        STATE     READ WRITE CKSUM
>        rpool       ONLINE       0     0     0
>          c4t3d0s0  ONLINE       0     0     0
> 
> errors: No known data errors
> 
>  pool: tank0
> state: ONLINE
>  scan: scrub repaired 0 in 5h17m with 0 errors on Wed Jan  2 17:53:20 2013
> config:
> 
>        NAME                       STATE     READ WRITE CKSUM
>        tank0                      ONLINE       0     0     0
>          raidz3-0                 ONLINE       0     0     0
>            c3t5000C500098BE9DDd0  ONLINE       0     0     0
>            c3t5000C50009C72C48d0  ONLINE       0     0     0
>            c3t5000C50009C73968d0  ONLINE       0     0     0
>            c3t5000C5000FD2E794d0  ONLINE       0     0     0
>            c3t5000C5000FD37075d0  ONLINE       0     0     0
>            c3t5000C5000FD39D53d0  ONLINE       0     0     0
>            c3t5000C5000FD3BC10d0  ONLINE       0     0     0
>            c3t5000C5000FD3E8A7d0  ONLINE       0     0     0
>        logs
>          mirror-1                 ONLINE       0     0     0
>            c4t1d0p0               ONLINE       0     0     0
>            c4t2d0p0               ONLINE       0     0     0
>        cache
>          c4t1d0p1                 ONLINE       0     0     0
>          c4t2d0p1                 ONLINE       0     0     0
> 
> errors: No known data errors
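
One way to confirm that the log and cache devices are actually seeing
traffic is to watch the per-vdev statistics under a sync-heavy
workload, e.g. at a 5-second sampling interval:

# zpool iostat -v tank0 5

The logs and cache rows in that output break out operations and
bandwidth per device.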
> 
> The bonnie++ results, before adding the log and cache devices:
> 
> NAME   SIZE   Bonnie  Date(y.m.d)  File    Seq-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU
> rpool  59.5G  start   2012.12.28   15576M  24 MB/s     61    47 MB/s    18    40 MB/s   19
> tank0  7.25T  start   2012.12.29   15576M  35 MB/s     86    145 MB/s   48    109 MB/s  50
> 
> NAME   Seq-Rd-Chr  %CPU  Seq-Read  %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
> rpool  26 MB/s     98    273 MB/s  48    2657.2/s   25    16     12984/s     12058/s
> tank0  25 MB/s     97    291 MB/s  53    819.9/s    12    16     12634/s     9194/s
> 
> and after:
> 
> NAME   SIZE   Bonnie  Date(y.m.d)  File    Seq-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU
> rpool  59.5G  start   2012.12.28   15576M  24 MB/s     61    47 MB/s    18    40 MB/s   19
> tank0  7.25T  start   2013.01.03   15576M  35 MB/s     86    149 MB/s   48    111 MB/s  50
> 
> NAME   Seq-Rd-Chr  %CPU  Seq-Read  %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
> rpool  26 MB/s     98    273 MB/s  48    2657.2/s   25    16     12984/s     12058/s
> tank0  26 MB/s     98    404 MB/s  76    1094.3/s   12    16     12601/s     9937/s
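
The gains on tank0 in random seeks (819.9/s to 1094.3/s) and
sequential reads (291 to 404 MB/s) are consistent with the L2ARC
starting to take hits; note that the L2ARC warms slowly, so these
numbers may keep improving. One way to check hit rates, assuming the
stock illumos arcstats kstats:

# kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
# kstat -p zfs:0:arcstats:l2_size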
> 
> Does the layout make sense? Do the stats make sense, or is there
> still something very wrong with that pool?
> 
> Thanks. 

--

richard.ell...@richardelling.com
+1-760-896-4422

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
