> Is this a bug?
> 
> 
>                capacity     operations    bandwidth
> pool        used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> zfs         14.2G  1.35T      0     62      0  5.46M
>   raidz2    14.2G  1.35T      0     62      0  5.46M
>     c0d0      -      -      0     60      0  1.37M
>     c1d0      -      -      0     58      0  1.37M
>     c2d0      -      -      0     60      0  1.37M
>     c3d0      -      -      0     58      0  1.37M
>     c7d0      -      -      0     58      0  1.37M
>     c8d0      -      -      0     49      0  1.37M
> ----------  -----  -----  -----  -----  -----  -----
> 
> 
> This shows 1.35TB of space, but df reports much less:
> 
> df -h /export/home/amy/
> 
> Filesystem             size   used  avail capacity  Mounted on
> zfs/home/amy           915G   9.7G   905G     2%    /export/home/amy
> 
> 6x250GB should give about 1.3T raw, but since it's raidz2 only 4/6 of
> that is usable, right? So zpool iostat is reporting it wrong.
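
For what it's worth, your numbers are consistent with zpool counting raw
(parity-inclusive) space while df counts usable space. A rough sanity check,
assuming each 250GB drive shows up as roughly 232GiB:

    raw    = 6 * 232G          = ~1.36T  (zpool iostat: 14.2G used + 1.35T avail)
    usable = raw * (6 - 2) / 6 = ~0.9T   (df reports 915G; the gap is metadata
                                          and reservation overhead)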

I believe that is a known issue: "6308817 discrepancy between zfs list and zpool 
usage stats".  I saw the same behaviour with plain RAIDZ.  See this post: 
http://www.opensolaris.org/jive/thread.jspa?messageID=42716&#42716
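
If you want to compare the two accounting views side by side, the pool-level
commands report raw space and the dataset-level ones report usable space. A
quick sketch, using the pool name "zfs" from your output:

    # pool-level view: raw space, parity blocks included
    zpool list zfs
    zpool iostat -v zfs

    # dataset-level view: usable space, after parity
    zfs list -r zfs
    df -h /export/home/amy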
 
 