Re: [zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-21 Thread Orvar Korvar
I've heard it is hard to give a correct estimate of the used bytes in ZFS, 
because of this and that. It gives you only an approximate number. I think I've 
read that in the ZFS administration guide, somewhere under the "zpool status" or 
"zfs list" command?
 
 


Re: [zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-21 Thread Mikael Kjerrman
Hi,

thanks for the reply. But there must be a better explanation than that? 
Otherwise it seems kinda harsh to "lose" 20GB per 1TB, and I will most likely 
have to answer this question when we discuss whether to migrate to zfs 
from vxfs..
 
 


Re: [zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-21 Thread Matthew Ahrens
Orvar Korvar wrote:
> I've heard it is hard to give a correct estimate of the used bytes in ZFS,
> because of this and that. It gives you only an approximate number. I think
> I've read that in the ZFS administration guide, somewhere under the "zpool
> status" or "zfs list" command?

That is not correct; the amount of space used is accurate.  However, as 
documented in the zfs(1m) manpage, it reflects the space currently used on 
disk, not any pending changes.  So, for example, after writing or deleting a 
file, the space used may not reflect the change until a few seconds have passed.
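
A quick way to see this (the dataset name here is made up):

  # mkfile 100m /tank/fs/bigfile
  # zfs list tank/fs              # USED may not show the new file yet
  # sleep 10; zfs list tank/fs    # after the txg commits, it will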

--matt


Re: [zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-21 Thread Matthew Ahrens
Mikael Kjerrman wrote:
> Hi,
> 
> sorry if I am bringing up old news, but I couldn't find a good answer 
> searching the previous posts (my mom always says I am bad at finding things 
> :)
> 
> However, I noticed a difference in the available size when creating a zfs 
> filesystem compared with a vxfs filesystem, i.e.
> 
> ZFS
> zonedata/zfs    392G   120G   272G    31%    /zfs

ZFS sets aside some space for allocation efficiency (about 1.6% -- see 
dsl_pool_adjustedsize()).
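
On a 1TB pool, for example, that works out to roughly 1TB * 0.016 = ~16GB 
set aside, which is in the same ballpark as the ~20GB per 1TB you mentioned.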

Note that "zpool list" will show the amount of space actually used/free, 
without taking into account this set-aside space, quotas, or reservations.
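
To see both numbers side by side (using your pool name):

  # zpool list zonedata    # raw pool space, ignoring the set-aside
  # zfs list zonedata      # writable space, after the set-aside,
                           # quotas, and reservations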

--matt


Re: [zfs-discuss] General recommendations on raidz groups of different sizes

2007-07-21 Thread Haudy Kazemi
How would one calculate system reliability estimates here? One is a RAIDZ 
set of 6 disks, the other a set of 8. The reliability of each RAIDZ set by 
itself isn't too hard to calculate, but I don't know how to combine them, 
especially since they're different sizes.
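
My best first-order guess (treating the two sets as independent, and 
ignoring rebuild windows) would be that the pool fails if either top-level 
vdev fails:

  P(pool survives) = P(6-disk raidz1 survives) * P(8-disk raidz1 survives)

where each raidz1 survives as long as it loses at most one disk, i.e. with 
a per-disk failure probability p over some interval,

  P(n-disk raidz1 survives) = (1-p)^n + n*p*(1-p)^(n-1)

Is that roughly the right way to think about it?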

On Jul 19 2007, Richard Elling wrote:

>After a cup of French coffee, I feel strong enough to recommend :-)
>
>David Smith wrote:
>> What are your thoughts or recommendations on having a zpool made up of 
>> raidz groups of different sizes? Are there going to be performance 
>> issues?
>
> It is more complicated and, in general, more complicated is a bad thing. 
> But in your example, with only 2 top-level vdevs, it isn't overly 
> complicated.
>
> Performance issues will be difficult to predict because this hasn't been 
> studied. With the gazillions of possible permutations, it is not likely 
> to be extensively characterized. But if it works for you, then be 
> happy :-)
>  -- richard
>
>> For example:
>> 
>>   pool: testpool1
>>  state: ONLINE
>>  scrub: none requested
>> config:
>> 
>> NAME                                    STATE  READ WRITE CKSUM
>> testpool1                               ONLINE    0     0     0
>>   raidz1                                ONLINE    0     0     0
>>     c12t600A0B800029E5EA07234685122Ad0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA07254685123Cd0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA072F46851256d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA073146851266d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA073746851278d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA074146851292d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA0747468512B6d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA0749468512C2d0  ONLINE    0     0     0
>>   raidz1                                ONLINE    0     0     0
>>     c12t600A0B800029E5EA074F468512E0d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA0751468512E8d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA07574685130Cd0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA075946851318d0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA075F4685132Ed0  ONLINE    0     0     0
>>     c12t600A0B800029E5EA076546851342d0  ONLINE    0     0     0
>> 
>> 
>> Thanks,
>> 
>> David
