Following is the output of "zpool iostat -v". My question is regarding the
datapool row and the raidz2 row statistics. The datapool row statistic "write
bandwidth" is 381, which I assume takes all the disks into account, although
it doesn't look like an average. The raidz2 row statistic "write
On Mar 10, 2010, at 4:18 PM, Chris Banal wrote:
> What is the best way to tell if you're bound by the number of individual
> operations per second / random I/O?
If no other resource is the bottleneck :-)
> "zpool iostat" has an "operations" column but this doesn't really tell me if
> my disks are
What is the best way to tell if you're bound by the number of individual
operations per second / random I/O? "zpool iostat" has an "operations" column
but this doesn't really tell me if my disks are saturated. Traditional
"iostat" doesn't seem to be the greatest place to look when utilizing ZFS.
Thanks
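(A hedged aside, not from the thread: the Solaris iostat(1M) extended output gives a per-disk view that helps answer this.)

iostat -xn 1
# watch %b (percent of time the disk is busy) and actv (I/Os in flight):
# a disk pinned near 100 in %b while its average transfer size
# (kr/s divided by r/s) stays small is bound by operations per
# second rather than by bandwidth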
I finally managed to resolve this. I received some useful info from Richard
Elling (without List CC):
>> (ME) However I still think the plain IDE driver also needs a timeout to
>> handle disk failures, since cables etc. can fail.
>(Richard) Yes, this is a little bit odd. The sd driver should be
OK, after browsing I found that the SATA disks are not shown via cfgadm.
I found http://opensolaris.org/jive/message.jspa?messageID=287791&tstart=0
which states that you have to set the mode to "AHCI" to enable hot-plug etc.
However I still think the plain IDE driver also needs a timeout to handle
disk failures, since cables etc. can fail.
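(Hedged aside: once the controller is in AHCI mode, the SATA ports should become visible as attachment points, e.g.)

cfgadm -al
# AHCI-mode SATA ports show up as attachment points such as
# sata0/0::dsk/c1t0d0 (names vary per system); cfgadm -c unconfigure
# and cfgadm -c configure on those points is the hot-plug path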
OK,
I now waited 30 minutes - still hung. After that I also pulled the SATA cable
to the L2ARC device - still no success (I waited 10 minutes).
After 10 minutes I put the L2ARC device back (SATA + power);
20 seconds after that the system continued to run.
dmesg shows:
Jan 8 15:41:57 nexe
Hello,
today I wanted to test that the failure of the L2ARC device is not crucial to
the pool. I added an Intel X25-M Postville (160GB) as cache device to a 54-disk
mirror pool. Then I started a synchronous iozone run on the pool:
iozone -ec -r 32k -s 2048m -l 2 -i 0 -i 2 -o
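(Flag summary, hedged from the iozone(1) docs: -s 2048m and -r 32k set file and record size, -i 0 and -i 2 select the write/rewrite and random read/write tests, -o opens the file O_SYNC, -e and -c include fsync and close in the timings, and -l 2 sets the lower bound on the number of child processes.)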
Pool:
pool
mirror-0
SUMMARY:
I attach a new disk device to an existing mirror set.
zpool iostat poolname 5 does not report write bandwidth data;
zpool iostat -v poolname 5 reports read and write data.
Also seen: sometimes the output for bandwidth is non-zero but
has no units [B, KB, MB, etc.].
Neil,
Thanks.
That makes sense. Maybe the zpool man page could say that it is a rate, as the
iostat man page does. I think the reads are from the zpool iostat command
itself; zpool iostat doesn't capture that.
Thanks
On 06/20/09 11:14, tester wrote:
Hi,
Does anyone know the difference between zpool iostat and iostat?
dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync
the pool only shows 236K of I/O and 13 write ops, whereas iostat correctly
shows a meg of activity.
The zfs numbers are per second as we
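(Filling in the arithmetic: those numbers came from a 5-second interval, and a 236K/s rate sustained for 5 seconds is roughly 1.2 MB, which matches the 1 MB dd wrote plus metadata.)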
Hi,
Does anyone know the difference between zpool iostat and iostat?
dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync
the pool only shows 236K of I/O and 13 write ops, whereas iostat correctly
shows a meg of activity.
zpool iostat -v test 5
capac
Zpool iostat 5 reports:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
rpool        115G   349G     91      0  45.7K      0
rpool        115G   349G     90      0  45.5K      0
rpool        115G   349G     89      0  44.6K      0
rpool        115G   349G     93      0  47.9K      0
rpool        115G   349G     90      0  45.0K      0
rpool
On Thu, Jun 19, 2008 at 10:06:19AM +0100, Robert Milkowski wrote:
> Hello Brian,
>
> BH> A three-way mirror and three disks in a double parity array are going to
> BH> get you the same usable space. They are going to get you the same level
> BH> of redundancy.
> BH> The only difference is that the RAIDZ2 is going to consume a lot more
> BH> CPU cycles calculating parity for no good cause.
On Wed, 18 Jun 2008, Anil Jangity wrote:
> I plan to have 3 disks and am debating what I should do with them, if I
> should do a
> raidz (single or double parity) or just a mirror.
>
> With the # of reads below, I don't see any reason why I should consider
> that. I would like to
> proceed with do
Hello Brian,
Thursday, June 19, 2008, 3:44:01 AM, you wrote:
BH> A three-way mirror and three disks in a double parity array are going to
BH> get you the same usable space. They are going to get you the same level of
BH> redundancy.
BH> The only difference is that the RAIDZ2 is going to consume a lot more CPU
BH> cycles calculating parity for no good cause.
I was using a 5 minute interval.
I did another test with 1 second interval:
data1       41.6G  5.65G      0      0  63.4K      0
data2       58.2G  9.81G      0    447      0  2.31M
So the 63K of read bandwidth still shows no read operations. Is
that still rounding?
What exactly is an op
A three-way mirror and three disks in a double parity array are going to get you
the same usable space. They are going to get you the same level of redundancy.
The only difference is that the RAIDZ2 is going to consume a lot more CPU cycles
calculating parity for no good cause.
In this case,
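(To make the comparison concrete, a sketch with hypothetical device names; either layout survives any two disk failures and yields one disk's worth of usable space:)

zpool create tank mirror c0t0d0 c0t1d0 c0t2d0   # three-way mirror
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0   # double-parity raidz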
G'Day Anil,
On Wed, Jun 18, 2008 at 07:37:38PM -0700, Anil Jangity wrote:
> Why is it that the read operations are 0 but the read bandwidth is >0?
> What is iostat [not] accounting for? Is it the metadata reads? (Is it
> possible to determine what kind of metadata reads these are?)
This coul
On Wed, Jun 18, 2008 at 07:37:38PM -0700, Anil Jangity wrote:
> Why is it that the read operations are 0 but the read bandwidth is >0?
> What is iostat [not] accounting for? Is it the metadata reads? (Is it
> possible to determine what kind of metadata reads these are?)
That question I'll lea
Why is it that the read operations are 0 but the read bandwidth is >0?
What is iostat [not] accounting for? Is it the metadata reads? (Is it
possible to determine what kind of metadata reads these are?)
I plan to have 3 disks and am debating what I should do with them, if I
should do a
raidz (si
Having recently upgraded from snv_57 to snv_73 I've noticed some strange
behaviour with the -v option to zpool iostat.
Without the -v option on an idle pool things look reasonable.
bash-3.00# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  writ
I really need to take a longer look here.
/*
 * zpool iostat [-v] [pool] ... [interval [count]]
 *
 *      -v      Display statistics for individual vdevs
 *
 * This command can be tricky because we want to be able to deal with pool
.
.
.
I think I may need to deal with a raw option here?
I'm doing a putback onto my local workstation, watching the disk
activity with "zpool iostat", when I start to notice something
quite strange...
zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  ---
On Mon, Sep 18, 2006 at 11:05:07AM -0400, Krzys wrote:
> Hello folks, is there any way to get timestamps when doing "zpool iostat 1"
> for example?
>
> Well I did run zpool iostat 60 starting last night and I got some load
> indications along the way, but without timestamps I can't figure out a
Hello folks, is there any way to get timestamps when doing "zpool iostat 1",
for example?
Well, I did run zpool iostat 60 starting last night and I got some load
indications along the way, but without timestamps I can't figure out at around
what time they happened.
Thanks.
Chris
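(Not from the thread, but a common workaround, assuming a POSIX shell and a placeholder pool name:)

# stamp each 60-second sample; the first line zpool iostat prints is
# the since-boot average, so keep only the last line of each run
while :; do
    date '+%Y-%m-%d %H:%M:%S'
    zpool iostat poolname 60 2 | tail -n 1
done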
Hi Ricardo,
Never mind my previous email.
I think what happened is that a new set of Solaris Express man pages
was downloaded over the weekend for SX 8/06, and this breaks the
links on the opensolaris...zfs page.
Noel, thanks for fixing them. I'll set a reminder to fix these for
every Solari
Thanks for the heads up. I've fixed them to point to the right documents. Noel
On Aug 20, 2006, at 11:38 AM, Ricardo Correia wrote:
> By the way, the manpage links in
> http://www.opensolaris.org/os/community/zfs/docs/ are not correct; they are
> linked to the wrong documents.
Hi,
How are the statistics in 'zpool iostat -v' computed? Is this an
x-minute average? I noticed that if there's no I/O for a while, the numbers
keep decreasing, and the zpool manpage doesn't say anything about this.
By the way, the manpage links in
http://www.opensolaris.org/os/community/zfs/d
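(Hedged note, consistent with the replies above: without an interval argument the figures are rates averaged since the pool was imported, so they decay while the pool sits idle; an interval argument gives live numbers instead:)

zpool iostat -v 5
# first report: average since import; later reports: activity
# during each 5-second window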