Howdy,

Cross-posted to: zfs-discuss@opensolaris.org

I am playing around with the latest Read-Write ZFS on Leopard and am
confused about why the available size of my admittedly tiny test pool
(100 MB) is showing as only ~2/3 (63 MB) of the expected capacity.  I used
mkfile to create test "disks".  Is this due to normal ZFS overhead? If
so, how can I list / view / examine these properties?  I don't think
it's compression related (BTW, is compression ON or OFF by default in
OS X's current implementation of ZFS?).
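
For what it's worth, the only ways I know of to poke at these values
are zpool list, zfs list and zfs get, so (assuming the OS X port
follows the Solaris command syntax) something like:

sudo zfs get compression jpool
sudo zfs get used,available,referenced jpool

...but I still don't see where the "missing" space would show up in
that output.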

tcpb:jpool avatar$ uname -a
Darwin tcpb.local 9.1.0 Darwin Kernel Version 9.1.0: Wed Oct 31
17:48:21 PDT 2007; root:xnu-1228.0.2~1/RELEASE_PPC Power Macintosh

tcpb:jpool avatar$ sw_vers
ProductName:    Mac OS X
ProductVersion: 10.5.1
BuildVersion:   9B18

tcpb:aguas avatar$ kextstat | grep zfs
  125    0 0x3203a000 0xcf000    0xce000    com.apple.filesystems.zfs (6.0) <7 6 5 2>

I created a test pool in the "aguas" directory on an external firewire
HDD:

cd to my zfs test directory: "aguas" on an external HDD...
cd /Volumes/jDrive/aguas/

Create 5 100MB files to act as "Disks" in my Pool...
sudo mkfile 100M disk1
sudo mkfile 100M disk2
sudo mkfile 100M disk3
sudo mkfile 100M disk4
sudo mkfile 100M disk5

Create MIRROR'd Pool, "jpool" using 1st two Disks...
sudo zpool create jpool mirror /Volumes/jDrive/aguas/disk1 /Volumes/jDrive/aguas/disk2

zpool list =====>
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
jpool                  95.5M    151K   95.4M     0%  ONLINE     -

zpool status =====>
 pool: jpool
 state: ONLINE
 scrub: none requested
config:

        NAME                             STATE     READ WRITE CKSUM
        jpool                            ONLINE       0     0     0
          mirror                         ONLINE       0     0     0
            /Volumes/jDrive/aguas/disk1  ONLINE       0     0     0
            /Volumes/jDrive/aguas/disk2  ONLINE       0     0     0

errors: No known data errors
=====

Added a spare:
sudo zpool add jpool spare /Volumes/jDrive/aguas/disk5

zpool status =====>
  pool: jpool
 state: ONLINE
 scrub: none requested
config:

        NAME                             STATE     READ WRITE CKSUM
        jpool                            ONLINE       0     0     0
          mirror                         ONLINE       0     0     0
            /Volumes/jDrive/aguas/disk1  ONLINE       0     0     0
            /Volumes/jDrive/aguas/disk2  ONLINE       0     0     0
        spares
          /Volumes/jDrive/aguas/disk5    AVAIL

errors: No known data errors
=====

"jpool" NOW SHOWS UP ON THE FINDER...

tcpb:aguas avatar$ df -h
Filesystem      Size   Used  Avail Capacity  Mounted on
/dev/disk0s3   112Gi  103Gi  8.7Gi    93%    /
devfs          114Ki  114Ki    0Bi   100%    /dev
fdesc          1.0Ki  1.0Ki    0Bi   100%    /dev
map -hosts       0Bi    0Bi    0Bi   100%    /net
map auto_home    0Bi    0Bi    0Bi   100%    /home
/dev/disk1s14   56Gi   50Gi  5.4Gi    91%    /Volumes/jDrive ONE
/dev/disk1s10   75Gi   68Gi  7.3Gi    91%    /Volumes/jDrive
/dev/disk1s12   55Gi   52Gi  2.5Gi    96%    /Volumes/Free 55
jpool           63Mi   59Ki   63Mi     1%    /Volumes/jpool
=====

OK, GIVEN:
zpool list =====>
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
jpool                  95.5M    151K   95.4M     0%  ONLINE     -

*WHY* ONLY 63MB?!?:
jpool           63Mi   59Ki   63Mi     1%    /Volumes/jpool

More info (I turned COMPRESSION on after I noticed the
discrepancy.) ...
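
For reference (assuming I'm remembering the exact syntax), the command
I used to turn it on was:

sudo zfs set compression=on jpool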

tcpb:jpool avatar$ sudo zfs get all jpool =====>
NAME   PROPERTY       VALUE                  SOURCE
jpool  type           filesystem             -
jpool  creation       Tue Nov 20 14:48 2007  -
jpool  used           392K                   -
jpool  available      63.1M                  -
jpool  referenced     59K                    -
jpool  compressratio  1.00x                  -
jpool  mounted        yes                    -
jpool  quota          none                   default
jpool  reservation    none                   default
jpool  recordsize     128K                   default
jpool  mountpoint     /Volumes/jpool         default
jpool  sharenfs       off                    default
jpool  checksum       on                     default
jpool  compression    on                     local
jpool  atime          on                     default
jpool  devices        on                     default
jpool  exec           on                     default
jpool  setuid         on                     default
jpool  readonly       off                    default
jpool  zoned          off                    default
jpool  snapdir        hidden                 default
jpool  aclmode        groupmask              default
jpool  aclinherit     secure                 default
jpool  canmount       on                     default
jpool  shareiscsi     off                    default
jpool  xattr          on                     default
jpool  copies         1                      default
=====

So, zpool list =
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
jpool                  95.5M    479K   95.0M     0%  ONLINE     -

While zfs list =
NAME    USED  AVAIL  REFER  MOUNTPOINT
jpool   392K  63.1M    59K  /Volumes/jpool

-and-

tcpb:aguas avatar$ df -h
jpool           63Mi   59Ki   63Mi     1%    /Volumes/jpool


OK, so the pool (as referenced by zpool) shows the expected SIZE but
the filesystem part (as referenced by zfs and df) shows ~2/3 of the
expected SIZE.

Hmmm...this leads to something about ZFS that I was confused about: I
thought that I had to use the zfs command to create a filesystem on my
pool before it was usable.  I was surprised when "jpool" popped up on
my Desktop as a removable drive as soon as I created it with zpool.
Is there a default filesystem created automatically when the pool is
created? If so, does it cover the whole pool?
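
My working assumption (please correct me if I'm wrong) is that zpool
create also creates a top-level filesystem named after the pool, and
that any additional filesystems created underneath it just share the
pool's free space rather than getting fixed slices of it, e.g. (the
"docs" name below is purely an example):

sudo zfs create jpool/docs
zfs list

...but even if that's right, it doesn't explain where the other
~32 MB went.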

Again, the pool = ~100 MB but the fs = ~63 MB.

Maybe this is just fs overhead (if so, what/when/how to examine it?)
and maybe my pool is just too small.  If the test pool were a more
reasonable 10 GB and the ~32 MB gap is normal fs overhead, then we
are talking ~0.3% as opposed to ~33% (~32 MB) on a test pool of
100 MB.  If my test pool were 10 GB, then I probably wouldn't even
have noticed.
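
Quick sanity check on those numbers, assuming the gap is simply the
zpool SIZE minus the zfs AVAIL reported above:

  95.5 MB - 63.1 MB  =  ~32.4 MB unaccounted for
  32.4 MB / 95.5 MB  =  ~34%   of this ~100 MB pool
  32.4 MB / 10 GB    =  ~0.3%  of a hypothetical 10 GB pool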

OK! Thanks for reading this if you got all the way down here!

SUMMARY:
1) Why the difference between pool size and fs capacity?
2) If this is normal overhead, then how do you examine these aspects
of the fs (commands to use, background links to read, etc. (If you say
RTFM then please supply a page number for "817-2271.pdf"))?
3) What's the relationship between pools (zpool) and filesystems (zfs
command)?  / Is there a default fs created when the pool is created?
4) BONUS QUESTION: Is Sun currently using / promoting / shipping
hardware that *boots* ZFS? (e.g. last I checked even stuff like
"Thumper" did not use ZFS for the 2 mirror'd boot drives (UFS?) but
used ZFS for the 10,000 other drives (OK, maybe there aren't 10,000
drives but there sure are a lot)).
5) BONUS QUESTION #2: How does a frustrated yet extremely seasoned Mac/
OS X technician with a terrific Solaris background find happiness by
landing a job at his other favorite company, Sun? (My "friend" wants
to know.)
6) FINAL QUESTION (2 parts): (a) When will we see default booting to
ZFS? & (b) [When] will we see ZFS as the default fs on OS X?

Thanks!

-Anonymous Mac Tech
