On 05/30/2010 02:51 PM, Brandon High wrote:
> On Sun, May 30, 2010 at 2:37 PM, Sandon Van Ness <san...@van-ness.com> wrote:
>   
>> ZFS:
>> r...@opensolaris: 11:22 AM :/data# df -k /data
>>     
> 'zfs list' is more accurate than df, since it also shows space
> used by snapshots, e.g.:
> bh...@basestar:~$ df -h /export/home/bhigh
> Filesystem             size   used  avail capacity  Mounted on
> tank/export/home/bhigh
>                        5.3T   8.2G   2.8T     1%    /export/home/bhigh
> bh...@basestar:~$ zfs list tank/export/home/bhigh
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> tank/export/home/bhigh  51.0G  2.85T  8.16G  /export/home/bhigh
>
>   
>> zpool list shows the raw capacity right?
>>     
> Yes. It shows the raw capacity, including space that will be used for
> parity. Its USED column includes space used by all active datasets and
> snapshots.
>
>   
>> So basically with JFS I see no decrease in total volume size but a huge
>> difference on ZFS. Is this normal/expected? Can anything be disabled to
>> not lose 500-600 GB of space?
>>     
> Are you using any snapshots? They'll consume space.
>
> What is the recordsize, and what kind of data are you storing? Small
> blocks or lots of small files (< 128k) will have more overhead for
> metadata.
>
> -B
>
>   

Yeah, I know all about the issues with snapshots and the like, but this
is on a totally new/empty filesystem. It's basically over 500 gigabytes
smaller right from the get-go, before any data has ever been written to
it. I would expect some numbers to be off on a used filesystem, but not
by this much on a completely brand-new one.
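In case anyone else hits the same thing, this is roughly what I've been
running on the fresh pool to see where the space goes ('data' here is just
the name from my df example above -- substitute your own pool/dataset --
and 'zfs list -o space' needs a build recent enough to support that option):

# zpool list data                                <- raw capacity, parity included
# zpool status data                              <- vdev layout (raidz level, disk count)
# zfs list -o space data                         <- usable space broken down by snapshots, datasets, reservations
# zfs get recordsize,compression data
# zfs get reservation,refreservation,quota data

If the pool is raidz or raidz2, the usable space that 'zfs list'/df report
should come out to roughly (N-1)/N or (N-2)/N of the raw size 'zpool list'
shows, with a bit more gone to metadata, so comparing those two numbers at
least tells you whether the missing space is just parity accounting or
something being reserved.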