Well, if I run fsstat on the mountpoints of all the filesystems in the ZFS 
pool, then I'd expect my aggregate read and write bandwidth numbers to match 
the aggregate numbers for the pool. Yes?

The downside is that fsstat has the same granularity issue as zpool iostat. 
What I'd really like is cumulative nread and nwrite counters instead of r/s 
and w/s rates; that way, if I miss some polls, I can still smooth out the 
results.
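If I understand right, fsstat is itself built on the vopstats kstats, which 
are cumulative counters, so the raw numbers I want may already be there 
underneath. Here's a minimal libkstat sketch of what I mean, assuming the 
aggregate unix::vopstats_zfs kstat and its read_bytes / write_bytes field 
names hold on my build (something I'd verify first with 
`kstat -p unix::vopstats_zfs`):

    /* Build with: cc -o vopstats vopstats.c -lkstat */
    #include <stdio.h>
    #include <kstat.h>

    int
    main(void)
    {
        kstat_ctl_t *kc;
        kstat_t *ksp;
        kstat_named_t *kn;

        if ((kc = kstat_open()) == NULL) {
            perror("kstat_open");
            return (1);
        }

        /* Aggregate cumulative counters for all ZFS filesystems. */
        if ((ksp = kstat_lookup(kc, "unix", -1, "vopstats_zfs")) == NULL ||
            kstat_read(kc, ksp, NULL) == -1) {
            fprintf(stderr, "vopstats_zfs kstat not found\n");
            return (1);
        }

        if ((kn = kstat_data_lookup(ksp, "read_bytes")) != NULL)
            printf("read_bytes  = %llu\n",
                (unsigned long long)kn->value.ui64);
        if ((kn = kstat_data_lookup(ksp, "write_bytes")) != NULL)
            printf("write_bytes = %llu\n",
                (unsigned long long)kn->value.ui64);

        (void) kstat_close(kc);
        return (0);
    }

Since those counters never reset between polls, a missed sample just means a 
longer interval in the delta.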

kstat -c disk sd::: is interesting, but it seems to cover only locally 
attached disks, right? I'm using iSCSI, though soon I'll also have pools with 
local disks.
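Along the same lines, rather than hardcoding sd, I could walk the whole kstat 
chain and take every I/O kstat whose class is "disk", whatever module it 
lives in. My understanding is that LUNs from the Solaris iSCSI initiator 
surface as sd instances anyway, but I haven't verified that; `kstat -c disk` 
would show what's actually there. A sketch:

    /* Build with: cc -o diskio diskio.c -lkstat */
    #include <stdio.h>
    #include <string.h>
    #include <kstat.h>

    int
    main(void)
    {
        kstat_ctl_t *kc;
        kstat_t *ksp;
        kstat_io_t kio;

        if ((kc = kstat_open()) == NULL) {
            perror("kstat_open");
            return (1);
        }

        /* Every I/O kstat of class "disk", regardless of module. */
        for (ksp = kc->kc_chain; ksp != NULL; ksp = ksp->ks_next) {
            if (ksp->ks_type != KSTAT_TYPE_IO ||
                strcmp(ksp->ks_class, "disk") != 0)
                continue;
            if (kstat_read(kc, ksp, &kio) == -1)
                continue;
            /* Cumulative byte and op counts since boot. */
            printf("%s:%d:%s nread=%llu nwritten=%llu reads=%u writes=%u\n",
                ksp->ks_module, ksp->ks_instance, ksp->ks_name,
                (unsigned long long)kio.nread,
                (unsigned long long)kio.nwritten,
                kio.reads, kio.writes);
        }

        (void) kstat_close(kc);
        return (0);
    }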

For device data, I'd really like the per-pool and per-device breakdowns that 
zpool iostat provides, if only they weren't summarized into a 5-character 
field. Perhaps I should simply be asking for sample code that accesses 
libzfs....
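To make that request concrete, here is roughly what I imagine, cribbed from 
my reading of how zpool iostat pulls vdev_stat_t out of the pool config 
nvlist. I realize libzfs is a private, unstable interface, so the 
ZPOOL_CONFIG_VDEV_STATS / vdev_stat_t names here are assumptions that may not 
match every build (older bits apparently called the nvpair 
ZPOOL_CONFIG_STATS):

    /* Build with: cc -o poolstats poolstats.c -lzfs -lnvpair */
    #include <stdio.h>
    #include <libzfs.h>
    #include <libnvpair.h>
    #include <sys/fs/zfs.h>

    static int
    print_pool_stats(zpool_handle_t *zhp, void *data)
    {
        nvlist_t *config, *nvroot;
        vdev_stat_t *vs;
        uint_t c;

        config = zpool_get_config(zhp, NULL);
        if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_VDEV_TREE,
            &nvroot) == 0 &&
            nvlist_lookup_uint64_array(nvroot, ZPOOL_CONFIG_VDEV_STATS,
            (uint64_t **)&vs, &c) == 0) {
            /* Raw cumulative counters, not 5-character summaries. */
            printf("%s: rbytes=%llu wbytes=%llu rops=%llu wops=%llu\n",
                zpool_get_name(zhp),
                (unsigned long long)vs->vs_bytes[ZIO_TYPE_READ],
                (unsigned long long)vs->vs_bytes[ZIO_TYPE_WRITE],
                (unsigned long long)vs->vs_ops[ZIO_TYPE_READ],
                (unsigned long long)vs->vs_ops[ZIO_TYPE_WRITE]);
        }
        zpool_close(zhp);
        return (0);
    }

    int
    main(void)
    {
        libzfs_handle_t *hdl;

        if ((hdl = libzfs_init()) == NULL)
            return (1);
        /*
         * Per-device breakdowns would come from recursing into the
         * ZPOOL_CONFIG_CHILDREN nvlists, each of which carries its
         * own ZPOOL_CONFIG_VDEV_STATS array.
         */
        (void) zpool_iter(hdl, print_pool_stats, NULL);
        libzfs_fini(hdl);
        return (0);
    }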

I've rolled my own cron-style scheduler so that I can run sub-second queries.

Thanks for the info!