[zfs-discuss] zpool iostat question

2010-05-28 Thread melbogia
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks, although it doesn't look like it's an average. The raidz2 row statistic "write
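A sketch of the aggregation (an assumption, not confirmed in the truncated thread): in the `zpool iostat -v` layout, the pool row appears to be the sum of its top-level vdevs, and each raidz2 row the sum across its member disks; a total, not an average. The numbers below (three hypothetical vdevs at 127 each) are made up to match the 381 figure:

```shell
# Hypothetical per-vdev write-bandwidth figures; summing them reproduces
# the pool-level number, which is why the pool row is not an average.
awk 'BEGIN {
    split("127 127 127", vdev)        # assumed per-vdev write bandwidth
    for (i in vdev) total += vdev[i]
    printf "pool write bandwidth: %d\n", total
}'
```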

Re: [zfs-discuss] zpool iostat / how to tell if your iop bound

2010-03-11 Thread Richard Elling
On Mar 10, 2010, at 4:18 PM, Chris Banal wrote: > What is the best way to tell if you're bound by the number of individual > operations per second / random io? If no other resource is the bottleneck :-) > "zpool iostat" has an "operations" column but this doesn't really tell me if > my disks are

[zfs-discuss] zpool iostat / how to tell if your iop bound

2010-03-10 Thread Chris Banal
What is the best way to tell if you're bound by the number of individual operations per second / random io? "zpool iostat" has an "operations" column, but this doesn't really tell me if my disks are saturated. Traditional "iostat" doesn't seem to be the greatest place to look when utilizing zfs. Than
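One rough heuristic (a sketch, not from the thread, assuming the Solaris `iostat -xn` column layout where `%b` is the second-to-last field and the device name is last): a disk sitting near 100 in `%b` while moving little bandwidth is likely operation-bound rather than bandwidth-bound.

```shell
# Take a 5-second sample (two reports; the first is the since-boot average)
# and print any device at or above 90% busy. Header lines are skipped
# because their second-to-last field is not numeric.
iostat -xn 5 2 | awk '$(NF-1) ~ /^[0-9.]+$/ && $(NF-1)+0 >= 90 {
    print $NF, $(NF-1) "% busy"
}'
```

Pairing this with the `actv` and `asvc_t` columns gives a fuller picture: a deep queue with high busy percentage and small per-op sizes is the classic random-I/O signature.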

Re: [zfs-discuss] zpool iostat -v hangs on L2ARC failure (SATA, 160 GB Postville)

2010-01-09 Thread Lutz Schumann
I finally managed to resolve this. I received some useful info from Richard Elling (without list CC): >> (ME) However I still think the plain IDE driver also needs a timeout to >> handle disk failures, because cables etc. can fail. >(Richard) Yes, this is a little bit odd. The sd driver should be

Re: [zfs-discuss] zpool iostat -v hangs on L2ARC failure (SATA, 160 GB Postville)

2010-01-08 Thread Lutz Schumann
Ok, after browsing I found that the SATA disks are not shown via cfgadm. I found http://opensolaris.org/jive/message.jspa?messageID=287791&tstart=0 which states that you have to set the mode to "AHCI" to enable hot-plug etc. However I still think the plain IDE driver also needs a timeout to han

Re: [zfs-discuss] zpool iostat -v hangs on L2ARC failure (SATA, 160 GB Postville)

2010-01-08 Thread Lutz Schumann
Ok, I now waited 30 minutes - still hung. After that I pulled the SATA cable to the L2ARC device as well - still no success (I waited 10 minutes). After 10 minutes I put the L2ARC device back (SATA + power); 20 seconds after that the system continued to run. dmesg shows: Jan 8 15:41:57 nexe

[zfs-discuss] zpool iostat -v hangs on L2ARC failure (SATA, 160 GB Postville)

2010-01-08 Thread Lutz Schumann
Hello, today I wanted to test that the failure of the L2ARC device is not crucial to the pool. I added an Intel X25-M Postville (160GB) as a cache device to a 54-disk mirror pool. Then I started a SYNC iozone on the pool: iozone -ec -r 32k -s 2048m -l 2 -i 0 -i 2 -o Pool: pool mirror-0

[zfs-discuss] zpool iostat reports seem odd. bug ?

2009-08-10 Thread Dennis Clarke
SUMMARY: I attach a new disk device to an existing mirror set. "zpool iostat poolname 5" does not report write bandwidth data; "zpool iostat -v poolname 5" reports read and write data. Also seen: sometimes the output for bandwidth is non-zero but has no units [B, KB, MB, etc.].

Re: [zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread tester
Neil, thanks. That makes sense. Maybe the zpool man page could say that it is a rate, as the iostat man page does. I think the reads are from the zpool iostat command itself; zpool iostat doesn't capture that. Thanks -- This message posted from opensolaris.org

Re: [zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread Neil Perrin
On 06/20/09 11:14, tester wrote: Hi, does anyone know the difference between zpool iostat and iostat? dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync The pool only shows 236K of IO and 13 write ops, whereas iostat correctly shows a meg of activity. The zfs numbers are per second as we

[zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread tester
Hi, does anyone know the difference between zpool iostat and iostat? dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync The pool only shows 236K of IO and 13 write ops, whereas iostat correctly shows a meg of activity. zpool iostat -v test 5 capac
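Neil's answer later in the thread is that the zpool iostat numbers are per-second rates averaged over the sampling interval. A back-of-the-envelope check (assuming the 5-second interval from the command shown) makes the "discrepancy" unsurprising:

```shell
# A single 1 MiB dd burst sampled at 5-second intervals is reported as
# roughly 1024 KB / 5 s; in the same ballpark as the ~236K the pool showed,
# if the remaining gap is metadata written alongside the data.
awk 'BEGIN { printf "%.1f KB/s\n", 1024 / 5 }'
```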

[zfs-discuss] zpool iostat not reporting corect info on snv_107

2009-02-18 Thread Zoltan Farkas
Zpool iostat 5 reports:

rpool  115G  349G  91  0  45.7K  0
rpool  115G  349G  90  0  45.5K  0
rpool  115G  349G  89  0  44.6K  0
rpool  115G  349G  93  0  47.9K  0
rpool  115G  349G  90  0  45.0K  0
rpool

Re: [zfs-discuss] zpool iostat

2008-06-23 Thread Brian Hechinger
On Thu, Jun 19, 2008 at 10:06:19AM +0100, Robert Milkowski wrote: > Hello Brian, > > BH> A three-way mirror and three disks in a double parity array are going to > get you > BH> the same usable space. They are going to get you the same level of > redundancy. > BH> The only difference is that th

Re: [zfs-discuss] zpool iostat

2008-06-19 Thread Bob Friesenhahn
On Wed, 18 Jun 2008, Anil Jangity wrote: > I plan to have 3 disks and am debating what I should do with them, if I > should do a > raidz (single or double parity) or just a mirror. > > With the # of reads below, I don't see any reason why I should consider > that. I would like to > proceed with do

Re: [zfs-discuss] zpool iostat

2008-06-19 Thread Robert Milkowski
Hello Brian, Thursday, June 19, 2008, 3:44:01 AM, you wrote: BH> A three-way mirror and three disks in a double parity array are going to get you BH> the same usable space. They are going to get you the same level of redundancy. BH> The only difference is that the RAIDZ2 is going to consume a

Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
I was using a 5 minute interval. I did another test with a 1 second interval:

data1  41.6G  5.65G  0    0  63.4K      0
data2  58.2G  9.81G  0  447      0  2.31M

So, the 63K read bandwidth still doesn't show any read operations. Is that still rounding? What exactly is an op
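For the earlier 5-minute interval, rounding alone can produce this pattern (a sketch of the display arithmetic with made-up numbers, not a quote from the thread): an integer ops-per-second column truncates to 0 while the bandwidth column still shows a non-zero per-second figure.

```shell
# Hypothetical: 120 reads of 128 KB each spread over a 300 s interval.
# 120/300 = 0.4 ops/s displays as 0, yet the bandwidth is clearly non-zero.
awk 'BEGIN {
    reads = 120
    bytes = reads * 128 * 1024
    printf "%d ops, %.1f KB/s\n", reads / 300, bytes / 1024 / 300
}'
```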

Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
A three-way mirror and three disks in a double parity array are going to get you the same usable space. They are going to get you the same level of redundancy. The only difference is that the RAIDZ2 is going to consume a lot more CPU cycles calculating parity for no good cause. In this case,

Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Brendan Gregg - Sun Microsystems
G'Day Anil, On Wed, Jun 18, 2008 at 07:37:38PM -0700, Anil Jangity wrote: > Why is it that the read operations are 0 but the read bandwidth is >0? > What is iostat > [not] accounting for? Is it the metadata reads? (Is it possible to > determine what kind of metadata > reads these are? This coul

Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Brian Hechinger
On Wed, Jun 18, 2008 at 07:37:38PM -0700, Anil Jangity wrote: > Why is it that the read operations are 0 but the read bandwidth is >0? > What is iostat > [not] accounting for? Is it the metadata reads? (Is it possible to > determine what kind of metadata > reads these are? That question I'll lea

[zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
Why is it that the read operations are 0 but the read bandwidth is >0? What is iostat [not] accounting for? Is it the metadata reads? (Is it possible to determine what kind of metadata reads these are? I plan to have 3 disks and am debating what I should do with them, if I should do a raidz (si

[zfs-discuss] zpool iostat -v oddity with snv_73

2007-10-08 Thread Alan Romeril
Having recently upgraded from snv_57 to snv_73 I've noticed some strange behaviour with the -v option to zpool iostat. Without the -v option on an idle pool things look reasonable. bash-3.00# zpool iostat 1 capacity operations bandwidth pool used avail read writ

[zfs-discuss] zpool iostat : This command can be tricky ...

2007-04-15 Thread Dennis Clarke
I really need to take a longer look here.

/*
 * zpool iostat [-v] [pool] ... [interval [count]]
 *
 *	-v	Display statistics for individual vdevs
 *
 * This command can be tricky because we want to be able to deal with pool
 . . .

I think I may need to deal with a raw option here?

[zfs-discuss] zpool iostat - 0 read operations?

2006-10-25 Thread Darren . Reed
I'm doing a putback onto my local workstation, watching the disk activity with "zpool iostat", when I start to notice something quite strange... zpool iostat 1 capacity operations bandwidth pool used avail read write read write -- - - -

Re: [zfs-discuss] zpool iostat

2006-09-18 Thread przemolicc
On Mon, Sep 18, 2006 at 11:05:07AM -0400, Krzys wrote: > Hello folks, is there any way to get timestamps when doing "zpool iostat 1", > for example? > > Well, I did run zpool iostat 60 starting last night and I got some loads > indicated along the way, but without timestamps I can't figure out a

[zfs-discuss] zpool iostat

2006-09-18 Thread Krzys
Hello folks, is there any way to get timestamps when doing "zpool iostat 1", for example? Well, I did run zpool iostat 60 starting last night and I got some loads indicated along the way, but without timestamps I can't figure out at around what time they happened. Thanks. Chris
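zpool iostat in this era had no timestamp option, but a small wrapper adds one (a sketch; later ZFS releases grew `zpool iostat -T u|d`, which prints timestamps natively):

```shell
# Prefix every line of zpool iostat output with a wall-clock timestamp.
# Works with any line-oriented command on the left of the pipe.
zpool iostat 60 | while read -r line; do
    printf '%s  %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done
```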

Re: [zfs-discuss] zpool iostat, scrubbing increases used disk space

2006-08-21 Thread Cindy Swearingen
Hi Ricardo, never mind my previous email. I think what happened is that a new set of Solaris Express man pages was downloaded over the weekend for SX 8/06 and this broke the links on the opensolaris...zfs page. Noel, thanks for fixing them. I'll set a reminder to fix these for every Solari

Re: [zfs-discuss] zpool iostat, scrubbing increases used disk space

2006-08-20 Thread Noel Dellofano
Thanks for the heads up. I've fixed them to point to the right documents. Noel

On Aug 20, 2006, at 11:38 AM, Ricardo Correia wrote: By the way, the manpage links in http://www.opensolaris.org/os/community/zfs/docs/ are not correct, they are linked to the wrong documents.

[zfs-discuss] zpool iostat, scrubbing increases used disk space

2006-08-20 Thread Ricardo Correia
Hi, how are the statistics in 'zpool iostat -v' computed? Is this an x-minute average? I noticed that if there's no I/O for a while, the numbers keep decreasing, and the zpool manpage doesn't say anything about this. By the way, the manpage links in http://www.opensolaris.org/os/community/zfs/d
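One explanation for the shrinking numbers (an assumption, not confirmed in the truncated thread): with no interval argument, zpool iostat prints averages over the pool's whole lifetime since import, so a fixed amount of past I/O divided by ever-growing elapsed time keeps drifting down while the pool is idle.

```shell
# Hypothetical: 100 MB written early on, then the pool goes idle.
# The lifetime average keeps falling as elapsed time grows.
awk 'BEGIN {
    bytes = 100 * 1024 * 1024
    for (t = 1000; t <= 3000; t += 1000)
        printf "after %4d s: %.0f KB/s\n", t, bytes / 1024 / t
}'
```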