On Sun, Jan 18, 2009 at 5:39 PM, Brad <bst...@aspirinsoftware.com> wrote:
> Well, if I do fsstat mountpoint on all the filesystems in the ZFS pool, then I
> guess my aggregate numbers for read and write bandwidth should equal the
> numbers for the pool? Yes?
>
> The downside is that fsstat has the same granularity issue as zpool iostat.
> What I'd really like is nread and nwrite numbers instead of r/s and w/s. That
> way, if I miss some polls, I can smooth out the results.

Just yank the raw kstats. This is a little harder than it seems,
unless you only have one pool, in which case:

kstat unix:0:vopstats_zfs

will give you the aggregate of all zfs filesystems straight off.
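If you only want the traffic counters rather than the whole vopstats
block, you can ask kstat for individual statistics. A rough sketch (I
believe the counters are named nread, read_bytes, nwrite and
write_bytes, but check the kstat output on your box):

kstat -p unix:0:vopstats_zfs:nread \
         unix:0:vopstats_zfs:read_bytes \
         unix:0:vopstats_zfs:nwrite \
         unix:0:vopstats_zfs:write_bytes

Since these are cumulative counters rather than rates, missing a poll
doesn't matter - just diff two samples and divide by the interval
yourself.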

The individual filesystem numbers come from kstats named like so:

kstat unix:0:vopstats_4480002

and you have to match up the device id with the filesystem name from
/etc/mnttab. In the case above, you need to match 4480002, which on
my machine is the following line in /etc/mnttab:

swap    /tmp    tmpfs   xattr,dev=4480002       1232289278

so that's /tmp (not a zfs filesystem, but you should get the idea).
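Putting the two together, here's a rough shell sketch of the matching
step. It assumes the fstype is field 3 and the options (with the dev=
id) field 4 of /etc/mnttab, and that the per-filesystem kstats carry
the same counter names as above:

#!/bin/sh
# Walk the mounted zfs filesystems and pull the per-filesystem
# vopstats kstat for each, keyed by the dev= id from /etc/mnttab.
nawk '$3 == "zfs" { print $2, $4 }' /etc/mnttab |
while read mntpt opts; do
    devid=`echo $opts | sed -n 's/.*dev=\([0-9a-f]*\).*/\1/p'`
    echo "$mntpt (vopstats_$devid):"
    kstat -p unix:0:vopstats_$devid:nread \
             unix:0:vopstats_$devid:read_bytes \
             unix:0:vopstats_$devid:nwrite \
             unix:0:vopstats_$devid:write_bytes
done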

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
