On Oct 16, 2006, at 07:39, Darren J Moffat wrote:
Noel Dellofano wrote:
I don't understand why you can't use 'zpool status'? That will
show the pools and the physical devices in each and is also a
pretty basic command. Examples are given in the sysadmin docs and
manpages for ZFS on the opensolaris ZFS community page.
I realize it's not quite the same command as in UFS, and it's
easier when things remain the same, but it's a different
filesystem so you need some different commands that make more
sense for how it's structured. Hopefully the zpool and zfs commands
will soon become just as 'intuitive' for people :)
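(For a concrete illustration: on a hypothetical mirrored pool named
'tank', zpool status reports something like the following; the exact
layout varies by release, but the physical devices, c1t0d0 and c1t1d0
here, appear right under the pool.)

    # zpool status tank
      pool: tank
     state: ONLINE
    config:

            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              mirror    ONLINE       0     0     0
                c1t0d0  ONLINE       0     0     0
                c1t1d0  ONLINE       0     0     0

    errors: No known data errors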
I agree. What's more, with UFS it is very often UFS+SVM, so what you
see in df is SVM metadevices, not real disks, either. In the UFS+SVM
case you need to use metastat(1M) and work your way through its output
to find the actual physical disks.
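(To illustrate, with hypothetical metadevice names and output heavily
abridged: df shows only the metadevice, and you have to chase it
through metastat to reach the disks.)

    $ df -h /export/home
    Filesystem            size  used avail capacity  Mounted on
    /dev/md/dsk/d10        17G  9.2G  7.6G    55%    /export/home

    $ metastat d10
    d10: Mirror
        Submirror 0: d11
          State: Okay
        Submirror 1: d12
          State: Okay
    ...
    d11: Submirror of d10
        Stripe 0:
            Device     Start Block  Dbase  State  Hot Spare
            c1t0d0s7   0            No     Okay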
IMO the whole point of ZFS was to abstract the physical disks away
when you don't need to deal with them, and df output is one place they
really don't belong.
I agree, and on a deeper level it's simply a matter of which level of
virtualization you want to see and where you want to see it. One
could argue that the devfs tree shows more of the physical device than
the links in /dev, but the syntax of both is somewhat obscure and
we've just learned to live with the c#t#d# structure. It gets worse
in large-scale environments with unique target identifiers: MPxIO
(also known as STMS) introduced the idea of putting a GUID in the
target string, but this becomes unwieldy in many situations, since it
is often unclear what storage [device, port, vdisk, lun]
"60060E80047D410000007D4100000408" really maps to. For that you
typically have to resort to luxadm(1M) and cfgadm_fp(1M) to determine
which device went where.
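(For instance, with MPxIO enabled the GUID lands in the device name
itself; the controller number below is made up, and the luxadm output
is trimmed to a sketch.)

    $ ls /dev/rdsk | grep 60060E80047D410000007D4100000408
    c4t60060E80047D410000007D4100000408d0s2
    # luxadm display /dev/rdsk/c4t60060E80047D410000007D4100000408d0s2
      ... vendor, product ID, and port WWN details follow ...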
Then, if you're doing storage-based volume management, you need some
tool that talks to the array to figure out which virtual disk is
composed of which underlying physical devices. And if you use
host-based volume management you need those tools as well (metastat,
vxdisk, etc.) to see into the physical structure of the volumes you
created. Hopefully you've chosen intelligent names for things, and I
would argue that interacting with a structure of well-chosen names, or
with the abstraction itself, does more to reduce the underlying visual
complexity.
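(A sketch with hypothetical disk and group names; the exact columns
vary by VxVM release, but host-based tools report the mapping along
these lines.)

    $ vxdisk list
    DEVICE       TYPE      DISK       GROUP      STATUS
    c1t0d0s2     sliced    disk01     datadg     online
    c1t1d0s2     sliced    disk02     datadg     online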
One would hope that in a few years we will move toward an SNS- or
OSD-style approach, with devices accessible through a name lookup or a
logical OST name, and finally get beyond the SCSI numbering scheme
with everything displayed in the raw; but I believe we're still a long
way off.
Now, df simply maps from your mount table, so I don't see either the
df options being extended or the zfs/zpool interfaces changing.
Rather, I see this as an opportunity for us to use more logical and
diverse naming structures for groupings of underlying devices, with
new tools to manipulate both.
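(Which is what ZFS already hints at: for a hypothetical dataset
tank/home, df reports the dataset name straight from the mount table,
with no physical device in sight.)

    $ df -h /export/home
    Filesystem            size  used avail capacity  Mounted on
    tank/home             134G   21G  113G    16%    /export/home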
.je
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss