Carson Gaspar wrote:
> Richard Elling wrote:
> ...
>> Carson Gaspar wrote:
>> > Except sar sucks. It's scheduled via cron, and is too coarse-grained for
>> > many purposes (10-minute samples average out almost everything
>> > interesting).
>>
>> There is a world of difference between the tools needed to perform
>> debugging...
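To be fair to sar, the 10-minute resolution comes from the sa1 cron schedule rather than from the tool itself; run interactively, sar takes an arbitrary interval. Assuming the usual syntax, ten one-second disk samples would be:

sar -d 1 10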
On Sun, Jan 18, 2009 at 8:25 PM, Richard Elling wrote:
> Peter Tribble wrote:
>> See fsstat, which is based upon kstats. One of the things I want to do with
>> JKstat is correlate filesystem operations with underlying disk operations.
>> The hard part is actually connecting a filesystem to the underlying...
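As an aside, the kstats behind fsstat are (if memory serves) the vopstats entries in the unix module, so the raw counters fsstat summarizes can also be read directly; the per-fstype aggregate for ZFS would be something like:

kstat -m unix -n vopstats_zfs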
Well, if I do fsstat mountpoint on all the filesystems in the ZFS pool, then I
guess my aggregate number for read and write bandwidth should equal the
aggregate numbers for the pool? Yes?

The downside is that fsstat has the same granularity issue as zpool iostat.
What I'd really like is nread and...
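A rough sketch of that aggregation, assuming a hypothetical pool named tank whose datasets all have regular (non-legacy) mountpoints: hand every mountpoint in the pool to fsstat and let it sample on an interval:

fsstat $(zfs list -H -o mountpoint -r tank) 5

As with iostat, the first report should be the since-boot accumulation and later reports per-interval activity, so this keeps the full history while adding fine-grained samples going forward.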
On Sun, Jan 18, 2009 at 9:21 AM, Carson Gaspar wrote:
>
> If you write your own using kstat, you can get accurate sub-second
> samples. Sadly you'll either have to use the amazingly crappy Sun perl
> or write it in C, as Sun hasn't yet managed to release source for the
> kstat perl module (unless...
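A minimal sketch of such a poller, assuming the bundled Sun::Solaris::Kstat module (the hash layout is $kstat->{module}{instance}{name}{statistic}; the one-second interval and output format are just illustrative choices):

#!/usr/bin/perl -w
# Sketch only: print per-interval read/write byte deltas for each sd
# device by diffing successive samples of the io-class kstats.
use strict;
use Sun::Solaris::Kstat;

my $kstat = Sun::Solaris::Kstat->new();
my %prev;

while (1) {
    $kstat->update();
    foreach my $inst (keys %{$kstat->{sd}}) {
        foreach my $name (keys %{$kstat->{sd}{$inst}}) {
            my $ks = $kstat->{sd}{$inst}{$name};
            # Only the io-class kstats carry nread/nwritten; skip the
            # error kstats and friends.
            next unless defined $ks->{nread};
            if (exists $prev{$name}) {
                printf("%-8s %12d B/s read %12d B/s write\n", $name,
                       $ks->{nread}    - $prev{$name}[0],
                       $ks->{nwritten} - $prev{$name}[1]);
            }
            $prev{$name} = [ $ks->{nread}, $ks->{nwritten} ];
        }
    }
    sleep(1);   # swap in Time::HiRes::usleep for sub-second samples
}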
Richard Elling wrote:
...
> Most folks who want performance data collection all day long will
> enable accounting and use sar. sar also uses kstats. Or you can
> write your own scripts. Or there are a number of third-party tools
> which will collect long-term stats and provide nice reports or...
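For reference, turning on that collection is normally just a matter of uncommenting the sa1/sa2 entries in the sys crontab (/var/spool/cron/crontabs/sys) and, on Solaris 10, enabling the service; presumably:

svcadm enable system/sar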
Are you looking for something like:

kstat -c disk sd:::

Someone can correct me if I'm wrong, but I think the documentation for
the above should be at:

http://src.opensolaris.org/source/xref/zfs-crypto/gate/usr/src/uts/common/avs/ns/sdbc/cache_kstats_readme.txt

I'm not sure about the file i/o vs...
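Note that kstat itself also takes an interval argument, so the counters can be watched without any scripting; something like this prints the cumulative sd byte counters every 5 seconds, and the deltas between successive samples are the throughput:

kstat -p sd:::nread sd:::nwritten 5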
I'd like to track a server's ZFS pool I/O throughput over time. What's a good
data source to use for this? I like zpool iostat for this, but if I poll at two
points in time I would get a number since boot (e.g. 1.2M) and a current number
(e.g. 1.3K). If I use the current number then I've lost data...
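The usual answer to exactly this is to give zpool iostat an interval: the first report is the since-boot average, and every later report covers only the interval just elapsed, e.g. for a hypothetical pool tank:

zpool iostat tank 5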