Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > I can tell you I've had terrible everything rates when I used dedup. So, the above comment isn't fair, really. The truth is here: http://mail.opensolaris.org/pipermail/zf

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Jim Klimov
On 2013-02-04 17:10, Karl Wagner wrote: OK then, I guess my next question would be what's the best way to "undedupe" the data I have? Would it work for me to zfs send/receive on the same pool (with dedup off), deleting the old datasets once they have been 'copied'? I think I remember reading som

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Koopmann, Jan-Peter
Hi, OK then, I guess my next question would be what's the best way to "undedupe" the data I have? Would it work for me to zfs send/receive on the same pool (with dedup off), deleting the old datasets once they have been 'copied'? Yes. Worked for me. I think I remember reading somewhere that
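A minimal sketch of that send/receive rewrite, assuming a hypothetical pool "tank" with a deduped dataset "tank/data" (names and snapshot labels are placeholders; verify the copy before destroying anything):

   zfs set dedup=off tank                       # new writes will no longer be deduplicated
   zfs snapshot tank/data@undedup
   zfs send tank/data@undedup | zfs receive tank/data.new
   zfs destroy -r tank/data                     # only after checking the copy is complete
   zfs rename tank/data.new tank/data

Whether the DDT actually shrinks as the old deduped blocks are freed is exactly the question raised in the next message; the sketch only covers the mechanics of rewriting the data with dedup off.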

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Karl Wagner
OK then, I guess my next question would be what's the best way to "undedupe" the data I have? Would it work for me to zfs send/receive on the same pool (with dedup off), deleting the old datasets once they have been 'copied'? I think I remember reading somewhere that the DDT never shrinks, so

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Jim Klimov
On 2013-02-04 15:52, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: I noticed that sometimes I had terrible rates with < 10MB/sec. Then later it rose up to < 70MB/sec. Are you talking about scrub rates for the complete scrub? Because if you sit there and watch it, from minute

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Koopmann, Jan-Peter
Hi Edward, From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Koopmann, Jan-Peter all I can tell you is that I've had terrible scrub rates when I used dedup. I can tell

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Koopmann, Jan-Peter > > all I can tell you is that I've had terrible scrub rates when I used dedup. I can tell you I've had terrible everything rates when I used dedup. > The > DDT was a bi

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Koopmann, Jan-Peter
Hi Karl, Recently, however, it has started taking over 20 hours to complete. Not much has happened to it in that time: a few extra files added, maybe a couple of deletions, but not a huge amount. I am finding it difficult to understand why performance would have dropped so dramatically. FYI th

[zfs-discuss] Scrub performance

2013-02-04 Thread Karl Wagner
Hi all I have had a ZFS file server for a while now. I recently upgraded it, giving it 16GB RAM and an SSD for L2ARC. This allowed me to evaluate dedupe on certain datasets, which worked pretty well. The main reason for the upgrade was that something wasn't working quite right, and I was get
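For gauging how large the dedup table has grown on a setup like this, two commonly used read-only checks are sketched below ("tank" is a placeholder pool name; zdb output details vary by build):

   zpool get dedupratio tank     # overall dedup ratio for the pool
   zdb -DD tank                  # DDT histogram: entry counts and on-disk/in-core sizes

Multiplying the total number of DDT entries by the reported in-core size per entry gives a rough estimate of whether the table fits in RAM plus L2ARC, which is usually what decides whether dedup performs acceptably.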

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-31 Thread Timothy Coalson
On Wed, Oct 31, 2012 at 6:47 PM, Matthew Ahrens wrote: > On Thu, Oct 25, 2012 at 2:25 AM, Jim Klimov wrote: > >> Hello all, >> >> I was describing how raidzN works recently, and got myself wondering: >> does zpool scrub verify all the parity sectors and the mirror halves? >> > > Yes. The ZIO_

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-31 Thread Matthew Ahrens
On Thu, Oct 25, 2012 at 2:25 AM, Jim Klimov wrote: > Hello all, > > I was describing how raidzN works recently, and got myself wondering: > does zpool scrub verify all the parity sectors and the mirror halves? > Yes. The ZIO_FLAG_SCRUB instructs the raidz or mirror vdev to read and verify all

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-29 Thread Tomas Forsman
On 28 October, 2012 - Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) sent me these 1,0K bytes: > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > boun...@opensolaris.org] On Behalf Of Jim Klimov > > > > I tend to agree that parity calculations likely > > are faster

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Jim Klimov > > I tend to agree that parity calculations likely > are faster (even if not all parities are simple XORs - that would > be silly for double- or triple-parity sets which may use dif

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-27 Thread Timothy Coalson
On Sat, Oct 27, 2012 at 12:35 PM, Jim Klimov wrote: > 2012-10-27 20:54, Toby Thain wrote: > >> Parity is very simple to calculate and doesn't use a lot of CPU - just >>> slightly more work than reading all the blocks: read all the stripe >>> blocks on all the drives involved in a stripe, then do

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-27 Thread Jim Klimov
2012-10-27 20:54, Toby Thain wrote: Parity is very simple to calculate and doesn't use a lot of CPU - just slightly more work than reading all the blocks: read all the stripe blocks on all the drives involved in a stripe, then do a simple XOR operation across all the data. The actual checksums a

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-27 Thread Toby Thain
On 27/10/12 11:56 AM, Ray Arachelian wrote: On 10/26/2012 04:29 AM, Karl Wagner wrote: Does it not store a separate checksum for a parity block? If so, it should not even need to recalculate the parity: assuming checksums match for all data and parity blocks, the data is good. ... Parity is

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-27 Thread Ray Arachelian
On 10/26/2012 04:29 AM, Karl Wagner wrote: > > Does it not store a separate checksum for a parity block? If so, it > should not even need to recalculate the parity: assuming checksums > match for all data and parity blocks, the data is good. > > I could understand why it would not store a checksum

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-26 Thread Jim Klimov
2012-10-26 12:29, Karl Wagner wrote: Does it not store a separate checksum for a parity block? If so, it should not even need to recalculate the parity: assuming checksums match for all data and parity blocks, the data is good. No, for the on-disk sector allocation over M disks, zfs raidzN writ

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-26 Thread Karl Wagner
Does it not store a separate checksum for a parity block? If so, it should not even need to recalculate the parity: assuming checksums match for all data and parity blocks, the data is good. I could understand why it would not store a checksum for a parity block. It is not really necessary: Pa

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Jim Klimov
2012-10-25 21:17, Timothy Coalson wrote: On Thu, Oct 25, 2012 at 7:35 AM, Jim Klimov wrote: If scrubbing works the way we "logically" expect it to, it should enforce validation of such combinations for each read of each copy of a block, in order to ensure

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Timothy Coalson
On Thu, Oct 25, 2012 at 7:35 AM, Jim Klimov wrote: > > If scrubbing works the way we "logically" expect it to, it > should enforce validation of such combinations for each read > of each copy of a block, in order to ensure that parity sectors > are intact and can be used for data recovery if a pl

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Jim Klimov > > Logically, yes - I agree this is what we expect to be done. > However, at least with the normal ZFS reading pipeline, reads > of redundant copies and parities only kick in if the

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Jim Klimov
2012-10-25 15:30, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Karl Wagner I can only speak anecdotally, but I believe it does. Watching zpool iostat it does read all data on

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Karl Wagner > > I can only speak anecdotally, but I believe it does. > > Watching zpool iostat it does read all data on both disks in a mirrored > pair. > > Logically, it would not make sense

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Karl Wagner
I can only speak anecdotally, but I believe it does. Watching zpool iostat it does read all data on both disks in a mirrored pair. Logically, it would not make sense not to verify all redundant data. The point of a scrub is to ensure all data is correct. On 2012-10-25 10:25, Jim Klimov wrot
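The observation is easy to reproduce; a sketch with a placeholder pool name "tank":

   zpool scrub tank
   zpool iostat -v tank 5        # per-device read columns; both halves of each mirror should be busy
   zpool status tank             # scrub progress and any checksum errors repaired so far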

[zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Jim Klimov
Hello all, I was describing how raidzN works recently, and got myself wondering: does zpool scrub verify all the parity sectors and the mirror halves? That is, IIRC, the scrub should try to read all allocated blocks and if they are read in OK - fine; if not - fix in-place with redundant data or

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Richard Elling
On Jun 11, 2012, at 6:05 AM, Jim Klimov wrote: > 2012-06-11 5:37, Edward Ned Harvey wrote: >>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >>> boun...@opensolaris.org] On Behalf Of Kalle Anka >>> >>> Assume we have 100 disks in one zpool. Assume it takes 5 hours to scrub >> one

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Jim Klimov
2012-06-12 16:45, Roch Bourbonnais wrote: The process should be scalable. Scrub all of the data on one disk using one disk's worth of IOPS; scrub all of the data on N disks using N disks' worth of IOPS. That will take ~ the same total time. If the uplink or processing power or some other bottl

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Roch Bourbonnais
The process should be scalable. Scrub all of the data on one disk using one disk's worth of IOPS; scrub all of the data on N disks using N disks' worth of IOPS. That will take ~ the same total time. -r On 12 June 2012, at 08:28, Jim Klimov wrote: > 2012-06-12 16:20, Roch Bourbonnais wrote: >>

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Jim Klimov
2012-06-12 16:20, Roch Bourbonnais wrote: Scrubs are run at very low priority and yield very quickly in the presence of other work. So I really would not expect to see scrub create any impact on any other type of storage activity. Resilvering will more aggressively push forward on what it has t

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Roch Bourbonnais
Scrubs are run at very low priority and yield very quickly in the presence of other work. So I really would not expect to see scrub create any impact on any other type of storage activity. Resilvering will more aggressively push forward on what it has to do, but resilvering does not need to rea

Re: [zfs-discuss] Scrub works in parallel?

2012-06-11 Thread Jim Klimov
2012-06-11 5:37, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Kalle Anka Assume we have 100 disks in one zpool. Assume it takes 5 hours to scrub one disk. If I scrub the zpool, how long time will it take? Will it

Re: [zfs-discuss] Scrub works in parallel?

2012-06-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Kalle Anka > > Assume we have 100 disks in one zpool. Assume it takes 5 hours to scrub one > disk. If I scrub the zpool, how long time will it take? > > Will it scrub one disk at a time, so it

Re: [zfs-discuss] Scrub works in parallel?

2012-06-10 Thread Tomas Forsman
On 10 June, 2012 - Kalle Anka sent me these 1,5K bytes: > Assume we have 100 disks in one zpool. Assume it takes 5 hours to > scrub one disk. If I scrub the zpool, how long time will it take? > > > Will it scrub one disk at a time, so it will take 500 hours, i.e. in > sequence, just serial? Or

[zfs-discuss] Scrub works in parallel?

2012-06-10 Thread Kalle Anka
Assume we have 100 disks in one zpool. Assume it takes 5 hours to scrub one disk. If I scrub the zpool, how long time will it take? Will it scrub one disk at a time, so it will take 500 hours, i.e. in sequence, just serial? Or is it possible to run the scrub in parallel, so it takes 5h no mat

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-18 Thread Jim Klimov
2011-12-17 21:59, Steve Gonczi wrote: Coincidentally, I am pretty sure entry 0 of these meta dnode objects is never used, so the block with the checksum error never comes into play. Steve I wonder if this is true indeed - seems so, because the pool seems to work regardless of the seemingly

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-17 Thread Jim Klimov
Another update to my post: it took a week to try running some ZDB walks on my pool, but they coredumped after a while. However, I've also noticed some clues in my FMADM outputs dating from 'zpool scrub' attempts. There are several sets (one set per scrub) of similar error reports, differing only

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-08 Thread Nigel W
On Mon, Dec 5, 2011 at 17:46, Jim Klimov wrote: > So, in contrast with Nigel's optimistic theory that > metadata is anyway extra-redundant and should be > easily fixable, it seems that I do still have the > problem. It does not show itself in practice as of > yet, but is found by scrub ;) Hmm. In

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-05 Thread Jim Klimov
Well, I have an intermediate data point. One scrub run completed without finding any newer errors (besides one at the pool-level and two at the raidz2-level). "Zpool clear" alone did not fix it, meaning that the pool:metadata:<0x0> was still reported as problematic, but a second attempt at "zpool

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-02 Thread Nigel W
On Fri, Dec 2, 2011 at 02:58, Jim Klimov wrote: > My question still stands: is it possible to recover > from this error or somehow safely ignore it? ;) > I mean, without backing up data and recreating the > pool? > > If the problem is in metadata but presumably the > pool still works, then this pa

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-02 Thread Jim Klimov
2011-12-02 18:25, Steve Gonczi wrote: Hi Jim, Try to run a "zdb -b poolname" .. This should report any leaked or double allocated blocks. (It may or may not run, it tends to run out of memory and crash on large datasets) I would be curious what zdb reports, and whether you are able to run it
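The suggested check, spelled out (the pool name is a placeholder; as noted, zdb may run out of memory on large pools):

   zdb -b poolname               # traverse the pool and report leaked or double-allocated blocks
   zdb -bb poolname              # same traversal with a per-object-type space breakdown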

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-02 Thread Jim Klimov
An intermediate update to my recent post: 2011-11-30 21:01, Jim Klimov wrote: Hello experts, I've finally upgraded my troublesome oi-148a home storage box to oi-151a about a week ago (using pkg update method from the wiki page - I'm not certain if that repository is fixed at release version o

[zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-11-30 Thread Jim Klimov
Hello experts, I've finally upgraded my troublesome oi-148a home storage box to oi-151a about a week ago (using pkg update method from the wiki page - I'm not certain if that repository is fixed at release version or is a sliding "current" one). After the OS upgrade I scrubbed my main pool - 6

Re: [zfs-discuss] Scrub error and object numbers

2011-10-17 Thread Shain Miley
Here is the output from: zdb -vvv smbpool/glusterfs 0x621b67 Dataset smbpool/glusterfs [ZPL], ID 270, cr_txg 1034346, 20.1T, 4139680 objects, rootbp DVA[0]=<5:5e21000:600> DVA[1]=<0:5621000:600> [L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=400L/200P birth=1887643L/1
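For reference, the usual way to turn an object number from a scrub error report into a file path is a verbose zdb dump of that dataset/object pair; the sketch below reuses the names from this thread, and the exact flags and output fields vary by release (higher verbosity such as -dddd typically includes a "path" line for ZPL file objects):

   zdb -dddd smbpool/glusterfs 0x621b67    # dump the dnode for object 0x621b67, including its path if known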

[zfs-discuss] Scrub error and object numbers

2011-10-12 Thread Shain Miley
Hello all, I am using OpenSolaris version snv_101b and after some recent issues with a faulty raid card I am unable to run an entire 'zpool scrub' to completion. While running the scrub I receive the following: errors: Permanent errors have been detected in the following files: smbpoo

Re: [zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Stephan Budach
Seems like it's really the case that scrub doesn't take into account traffic that goes onto the zpool while it's scrubbing away. After some more time, the scrub finished and everything looks good so far. Thanks, budy -- This message posted from opensolaris.org __

Re: [zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Stephan Budach
Yes - that may well be. There was data going on to the device while the scrub has been running. Especially large zfs receives had been going on. It'd be odd if that was the case, though. Cheers, budy -- This message posted from opensolaris.org ___ zfs-dis

Re: [zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Marty Scholes
Have you had a lot of activity since the scrub started? I have noticed what appears to be extra I/O at the end of a scrub when activity took place during the scrub. It's as if the scrub estimator does not take the extra activity into account. -- This message posted from opensolaris.org ___

[zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Stephan Budach
Hi all, I have issued a scrub on a pool that consists of two independent FC raids. The scrub has been running for approx. 25 hrs and then showed 100%, but there's still an incredible amount of traffic on one of the FC raids going on, plus zpool status -v reports that scrub is still running: zpool stat

Re: [zfs-discuss] scrub: resilver in progress for 0h38m, 0.00% done, 1131207h51m to go

2010-09-23 Thread LIC mesh
On Wed, Sep 22, 2010 at 8:13 PM, Richard Elling wrote: > On Sep 22, 2010, at 1:46 PM, LIC mesh wrote: > > Something else is probably causing the slow I/O. What is the output of > "iostat -en" ? The best answer is "all balls" (balls == zeros) > > Found a number of LUNs with errors this way, lo
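The check being referenced, as a sketch (device names will differ per system):

   iostat -en                    # one-line error summary per device: soft, hard, and transport counters
   iostat -En                    # verbose per-device view with vendor/model and cumulative error totals

"All balls" (all zeros) in those columns is the healthy case; non-zero hard or transport errors on a LUN point at the device or path rather than at ZFS.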

Re: [zfs-discuss] scrub: resilver in progress for 0h38m, 0.00% done, 1131207h51m to go

2010-09-22 Thread Richard Elling
On Sep 22, 2010, at 1:46 PM, LIC mesh wrote: > What options are there to turn off or reduce the priority of a resilver? > > This is on a 400TB iSCSI based zpool (8 LUNs per raidz2 vdev, 4 LUNs per > shelf, 6 drives per LUN - 16 shelves total) - my client has gotten to the > point that they just

[zfs-discuss] scrub: resilver in progress for 0h38m, 0.00% done, 1131207h51m to go

2010-09-22 Thread LIC mesh
What options are there to turn off or reduce the priority of a resilver? This is on a 400TB iSCSI based zpool (8 LUNs per raidz2 vdev, 4 LUNs per shelf, 6 drives per LUN - 16 shelves total) - my client has gotten to the point that they just want to get their data off, but this resilver won't stop.
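For the builds of that era, the knobs people usually pointed at were the in-kernel scrub/resilver throttle tunables; the following is only a sketch under the assumption that the running release has zfs_resilver_delay and zfs_scrub_delay (check before poking, and note mdb changes do not survive a reboot):

   echo "zfs_resilver_delay/W0t4" | mdb -kw    # more idle ticks between resilver I/Os (larger = gentler resilver)
   echo "zfs_scrub_delay/W0t4" | mdb -kw       # same throttle for scrub I/O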

Re: [zfs-discuss] Scrub extremely slow?

2010-07-10 Thread Hernan F
Too bad then, I can't afford a couple of SSDs for this machine as it's just a home file server. I'm surprised about the scrub speed though... This used to be a 4x500GB machine, in which I replaced the disks one by one. Resilver (about 80% full) took about 6 hours to complete - now it's twice the

Re: [zfs-discuss] Scrub extremely slow?

2010-07-10 Thread Roy Sigurd Karlsbakk
- Original Message - > I tested with Bonnie++ and it reports about 200MB/s. > > The pool version is 22 (SunOS solaris 5.11 snv_134 i86pc i386 i86pc > Solaris) > > I let the scrub run for hours and it was still at around 10MB/s. I > tried to access an iSCSI target on that pool and it was r

Re: [zfs-discuss] Scrub extremely slow?

2010-07-10 Thread Hernan F
I tested with Bonnie++ and it reports about 200MB/s. The pool version is 22 (SunOS solaris 5.11 snv_134 i86pc i386 i86pc Solaris) I let the scrub run for hours and it was still at around 10MB/s. I tried to access an iSCSI target on that pool and it was really really slow (about 600KB/s!) while

Re: [zfs-discuss] Scrub extremely slow?

2010-07-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Hernan F > Subject: [zfs-discuss] Scrub extremely slow? Perhaps this is related? http://hub.opensolaris.org/bin/view/Community+Group+zfs/11 Zpool version 11, introduced "
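To see what each pool version added, and whether a pool is behind the running release, the stock commands are:

   zpool upgrade -v              # list every pool version this release supports and the features it introduced
   zpool upgrade                 # list imported pools that are not yet at the current version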

[zfs-discuss] Scrub extremely slow?

2010-07-09 Thread Hernan F
Hello, I'm trying to figure out why I'm getting about 10MB/s scrubs, on a pool where I can easily get 100MB/s. It's 4x 1TB SATA2 (nv_sata), raidz. Athlon64 with 8GB RAM. Here's the output while I "cat" an 8GB file to /dev/null r...@solaris:~# zpool iostat 20 capacity operatio

[zfs-discuss] Scrub time dramaticy increased

2010-06-20 Thread bonso
Hello all, I recently noticed that my storage pool has started to take a lot of time finishing a scrub: approximately the final 10% takes 30m to finish, while the previous 90% are done in as many minutes. The 'zpool status' command does, however, not change its estimated remaining time. Currently 6

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread George Wilson
Richard Elling wrote: On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote: Hi all It seems zfs scrub is taking a big bit out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Richard Elling
On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote: > Hi all > > It seems zfs scrub is taking a big bit out of I/O when running. During a > scrub, sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG > and some L2ARC helps this, but still, the problem remains in that the scr

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Robert Milkowski
On 14/06/2010 22:12, Roy Sigurd Karlsbakk wrote: Hi all It seems zfs scrub is taking a big bit out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains in that the scrub is given ful

[zfs-discuss] Scrub issues

2010-06-14 Thread Roy Sigurd Karlsbakk
Hi all It seems zfs scrub is taking a big bit out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains in that the scrub is given full priority. Is this problem known to the developer

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
On 03/18/10 11:09 AM, Bill Sommerfeld wrote: On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go If blocks that have already been visited are freed a

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 7:09 PM, Bill Sommerfeld wrote: > On 03/17/10 14:03, Ian Collins wrote: > >> I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% >> done, but not complete: >> >> scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go >> > > Don't panic. If "zpo

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
On 03/18/10 11:09 AM, Bill Sommerfeld wrote: On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Don't panic. If "zpool iostat" still shows active

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Bill Sommerfeld
On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Don't panic. If "zpool iostat" still shows active reads from all disks in the pool, just step

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Freddie Cash
On Wed, Mar 17, 2010 at 2:03 PM, Ian Collins wrote: > I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% > done, but not complete: > > scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go > > Any ideas? I've had that happen on FreeBSD 7-STABLE (post 7.2 release) us

[zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Any ideas? -- Ian. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.ope

Re: [zfs-discuss] scrub in 132

2010-02-22 Thread Cindy Swearingen
Hi Dirk, I'm not seeing anything specific to hanging scrubs on b 132 and I can't reproduce it. Any hardware changes or failures directly before the scrub? You can rule out any hardware issues by checking fmdump -eV, iostat -En, or /var/adm/messages output. Thanks, Cindy On 02/20/10 12:56, dir
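The hardware checks listed there, as a sketch (paths follow the standard Solaris layout; adjust as needed):

   fmdump -eV | more             # raw FMA error telemetry, including disk and transport ereports
   iostat -En                    # per-device soft/hard/transport error counters
   tail -100 /var/adm/messages   # driver and controller warnings around the time of the scrub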

[zfs-discuss] scrub in 132

2010-02-20 Thread dirk schelfhout
uname -a: SunOS 5.11 snv_132 i86pc i386 i86pc Solaris. Scrub made my system unresponsive. Could not log in with ssh. Had to do a hard reboot. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.

[zfs-discuss] Scrub slow (again) after dedupe

2009-12-29 Thread Michael Herf
I have a 4-disk RAIDZ, and I reduced the time to scrub it from 80 hours to about 14 by reducing the number of snapshots, adding RAM, turning off atime, compression, and some other tweaks. This week (after replaying a large volume with dedup=on) it's back up, way up. I replayed a 700G filesystem to
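The tweaks mentioned map to ordinary property changes and snapshot pruning; a sketch with placeholder names (property changes only affect newly written data, and how much each helps scrub time will vary):

   zfs list -t snapshot -r tank | wc -l     # how many snapshots the scrub has to keep live blocks for
   zfs destroy tank/fs@oldsnap              # prune snapshots that are no longer needed
   zfs set atime=off tank/fs                # stop access-time updates competing with scrub I/O
   zfs set compression=off tank/fs          # as in the post; existing blocks stay as written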

Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Brandon High
On Sun, Nov 15, 2009 at 10:39 AM, Orvar Korvar wrote: > Yes that might be the cause. Thanks for identifying that. So I would gain > bandwidth if I tucked some drives on the mobo SATA and some drives on the AOC > card, instead of having all drives on the AOC card. Yup! The ICH10 is connected at

Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Orvar Korvar
Yes that might be the cause. Thanks for identifying that. So I would gain bandwidth if I tucked some drives on the mobo SATA and some drives on the AOC card, instead of having all drives on the AOC card. -- This message posted from opensolaris.org ___

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
> The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half duplex. You are correct; I thought ICH10 used a 66MHz bus, when in fact it's 33MHz. The AOC card works fine in a PCI-X 64-bit/133MHz slot, good for 1,067 MB/s, even if the motherboard uses a PXH chip via 8-lane PCIe.

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Brandon High
On Sat, Nov 14, 2009 at 7:00 AM, Orvar Korvar wrote: > I use Intel Q9450 + P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI > slot, not PCI-x. About the HBA, I have no idea. It sounds like you're saturating the PCI port. The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half du
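The 133MB/s figure follows directly from the bus width and clock, and it is a shared, half-duplex ceiling for every device behind that PCI bridge:

   32 bits / 8 = 4 bytes per transfer
   4 bytes x 33.3 MHz ~= 133 MB/s, shared by all devices on the bus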

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Eric D. Mudama
On Sat, Nov 14 at 11:23, Rob Logan wrote: P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot I'm not sure how many "half your disks" are or how your vdevs are configured, but the ICH10 has 6 SATA ports at 300MB/s and one PCI port at 266MB/s (that's also shared with the IT8213 IDE chip) s

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
> P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot I'm not sure how many "half your disks" are or how your vdevs are configured, but the ICH10 has 6 SATA ports at 300MB/s and one PCI port at 266MB/s (that's also shared with the IT8213 IDE chip) so in an ideal world your scrub bandwidth

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Orvar Korvar
I use Intel Q9450 + P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot, not PCI-X. About the HBA, I have no idea. So I had half of the drives on the AOC card, and the other half on the mobo SATA ports. Now I have all drives on the AOC card, and suddenly a scrub takes 15h instead of 8h.

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Eric D. Mudama
On Fri, Nov 13 at 15:58, Tim Cook wrote: On Fri, Nov 13, 2009 at 2:48 PM, Orvar Korvar < knatte_fnatte_tja...@yahoo.com> wrote: Yes I do fine. How do you do-be-do-be-do? I have OpenSolaris b125 and filled a zpool with data. I did scrub on it, which took 8 hours. Some of the drives were connect

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Tim Cook
On Fri, Nov 13, 2009 at 2:48 PM, Orvar Korvar < knatte_fnatte_tja...@yahoo.com> wrote: > Yes I do fine. How do you do-be-do-be-do? > > I have OpenSolaris b125 and filled a zpool with data. I did scrub on it, > which took 8 hours. Some of the drives were connected to the mobo, some of > the drives

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
Yes I do fine. How do you do-be-do-be-do? I have OpenSolaris b125 and filled a zpool with data. I did scrub on it, which took 8 hours. Some of the drives were connected to the mobo, some of the drives were connected to the AOC-MV8... marvellsx88 card which is used in Thumper. Then I connected a

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Henrik Johansson
How do you do, On 13 Nov 2009, at 11.07, Orvar Korvar wrote: I have a raidz2 and did a scrub, it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub?? Why is that? Could you perhaps provide some more info? Which OSOL release? Are the new disks ut

[zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
I have a raidz2 and did a scrub, it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub?? Why is that? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://ma

[zfs-discuss] Scrub restarting on Solaris 10 Update 7.

2009-06-30 Thread Ian Collins
I'm trying to scrub a pool on a backup server running Solaris 10 Update 7 and the scrub restarts each time a snap is received. I thought this was fixed in update 6? The machine was recently upgraded from update5, which did have the issue. -- Ian.

Re: [zfs-discuss] scrub on snv-b107

2009-02-17 Thread Andrew Gabriel
casper@sun.com wrote: I currently have a system with 2x1TB WDC disks; it's now running 103 and I hope to upgrade it to 108 or 109 shortly. Then we should be able to measure between a build before and after 105. It only uses around 200GB and it now takes around 1 hour to "scrub" it. I ha

Re: [zfs-discuss] scrub on snv-b107

2009-02-17 Thread Casper . Dik
>On 17 February, 2009 - dick hoogendijk sent me these 0,6K bytes: > >> On Tue, 17 Feb 2009 08:41:13 -0500 >> Blake wrote: >> >> > Do you have more data on the 107 pool than on the sol10 pool? >> >> 80G on the "fast" one and 85G on the slow one. >> Furthermore, on the fast one the total amou

Re: [zfs-discuss] scrub on snv-b107

2009-02-17 Thread Tomas Ögren
On 17 February, 2009 - dick hoogendijk sent me these 0,6K bytes: > On Tue, 17 Feb 2009 08:41:13 -0500 > Blake wrote: > > > Do you have more data on the 107 pool than on the sol10 pool? > > 80G on the "fast" one and 85G on the slow one. > Furthermore, on the fast one the total amount is 100G mor

Re: [zfs-discuss] scrub on snv-b107

2009-02-17 Thread dick hoogendijk
On Tue, 17 Feb 2009 08:41:13 -0500 Blake wrote: > Do you have more data on the 107 pool than on the sol10 pool? 80G on the "fast" one and 85G on the slow one. Furthermore, on the fast one the total amount is 100G more than on the slow one. So, I don't get it ;-) -- Dick Hoogendijk -- PGP/GnuPG

Re: [zfs-discuss] scrub on snv-b107

2009-02-17 Thread Blake
Do you have more data on the 107 pool than on the sol10 pool? On Tue, Feb 17, 2009 at 6:11 AM, dick hoogendijk wrote: > scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009 > > This is about twice as slow as the same srub on a solaris 10 box with a > mirrored zfs root pool. Has sc

[zfs-discuss] scrub on snv-b107

2009-02-17 Thread dick hoogendijk
scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009 This is about twice as slow as the same scrub on a Solaris 10 box with a mirrored zfs root pool. Has scrub become that much slower? And if so, why? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS sxce s

Re: [zfs-discuss] scrub

2008-11-16 Thread Richard Elling
dick hoogendijk wrote: > Can I do a zpool scrub on a running server without affecting > webserving / email serving? I read it is an I/O-intensive operation. > No, it is a read I/O-intensive operation :-) > Does that mean the server has to be idle? Or better still: go into > maintenance (init S)

Re: [zfs-discuss] scrub

2008-11-16 Thread Andrew Gabriel
dick hoogendijk wrote: > Can I do a zpool scrub on a running server without affecting > webserving / email serving? I read it is an I/O-intensive operation. > Does that mean the server has to be idle? Or better still: go into > maintenance (init S)? I guess not, but still.. > It used to have a r

[zfs-discuss] scrub

2008-11-16 Thread dick hoogendijk
Can I do a zpool scrub on a running server without affecting webserving / email serving? I read it is an I/O-intensive operation. Does that mean the server has to be idle? Or better still: go into maintenance (init S)? I guess not, but still.. -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http:/

[zfs-discuss] Scrub is suddenly done.

2008-10-27 Thread Casper . Dik
I'm running a scrub and I'm running "zpool status" every 5 minutes. This happens: pool: export state: ONLINE scrub: scrub in progress for 1h16m, 44.91% done, 1h34m to go config: NAME STATE READ WRITE CKSUM export ONLINE 0 0 0 c0d0s7

Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread blake . irvin
Correct, that is a workaround. The fact that I use the beta (alpha?) zfs auto-snapshot service means that when the service checks for active scrubs, it kills the resilver. I think I will talk to Tim about modifying his method script to run the scrub check with least privileges (i.e., not as root).

Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread Richard Elling
Blake Irvin wrote: > I'm also very interested in this. I'm having a lot of pain with status > requests killing my resilvers. In the example below I was trying to test to > see if timf's auto-snapshot service was killing my resilver, only to find > that calling zpool status seems to be the issu

Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread Blake Irvin
I'm also very interested in this. I'm having a lot of pain with status requests killing my resilvers. In the example below I was trying to test to see if timf's auto-snapshot service was killing my resilver, only to find that calling zpool status seems to be the issue: [EMAIL PROTECTED] ~]# e

[zfs-discuss] scrub restart patch status..

2008-10-13 Thread Wade . Stuart
Any news on if the scrub/resilver/snap reset patch will make it into 10/08 update? Thanks! Wade Stuart we are fallon P: 612.758.2660 C: 612.877.0385 ** Fallon has moved. Effective May 19, 2008 our address is 901 Marquette Ave, Suite 2400, Minneapolis, MN 55402. ___

Re: [zfs-discuss] scrub never finishes

2008-07-14 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 07/13/2008 11:29:07 PM: > ZFS co-inventor Matt Ahrens recently fixed this: > > 6343667 scrub/resilver has to start over when a snapshot is taken > > Trust me when I tell you that solving this correctly was much harder > than you might expect. Thanks again, Matt. > > Je

Re: [zfs-discuss] scrub never finishes

2008-07-13 Thread Jeff Bonwick
ZFS co-inventor Matt Ahrens recently fixed this: 6343667 scrub/resilver has to start over when a snapshot is taken Trust me when I tell you that solving this correctly was much harder than you might expect. Thanks again, Matt. Jeff On Sun, Jul 13, 2008 at 07:08:48PM -0700, Anil Jangity wrote:
