On Thu, Sep 10, 2009 at 13:06, Will Murnane wrote:
> On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote:
>>> Any suggestions?
>>
>> Let it run for another day.
> I'll let it keep running as long as it wants this time.
scrub: scrub completed after 42h32m with 0 errors on Thu Sep 10 17:20:19 2009
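For anyone hitting the same thing later, a quick way to confirm the pool
came out of the scrub clean is something like this (a sketch; "pool" is
the pool name here):

  $ zpool status -x pool    # reports the pool as healthy if no problems were found
  $ zpool status -v pool    # -v would list any files affected by data errors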
On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote:
>> Any suggestions?
>
> Let it run for another day.
I'll let it keep running as long as it wants this time.
> I suspect the combination of frequent time-based snapshots and a pretty
> active set of users causes the progress estimate to be off.
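For what it's worth, one way to watch how the estimate drifts rather
than trusting the countdown is to log the scrub line periodically,
roughly like this (a rough sketch; "pool" is assumed):

  $ while true; do date; zpool status pool | grep 'scrub:'; sleep 3600; done \
      | tee -a scrub-progress.log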
On Thu, Sep 10, 2009 at 11:11, Jonathan Edwards wrote:
> out of curiosity - do you have a lot of small files in the filesystem?
Most of the space in the filesystem is taken by a few large files, but
most of the files in the filesystem are small. For example, I have my
recorded TV collection on this filesystem.
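If it's useful, the kind of comparison I mean is roughly this
(illustrative only; /pool stands in for the real mountpoint):

  $ zfs get -r used,referenced pool               # space consumed, per dataset
  $ find /pool -type f | wc -l                    # total number of files
  $ find /pool -type f -size +1048576c | wc -l    # files larger than 1 MB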
On Wed, 2009-09-09 at 21:30, Will Murnane wrote:
> Some hours later, here I am again:
> scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go
> Any suggestions?
Let it run for another day.
A pool on a build server I manage takes about 75-100 hours to scrub, but
typically starts
I left the scrub running all day:
scrub: scrub in progress for 67h57m, 100.00% done, 0h0m to go
but as you can see, it didn't finish. So, I ran pkg image-update,
rebooted, and am now running b122. On reboot, the scrub restarted
from the beginning, and currently estimates 17h to go. I'll post an
update when it finishes.
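For reference, the steps boil down to roughly the following (a sketch,
not a transcript; "pool" is assumed):

  $ pfexec pkg image-update              # upgrade the image to the new build (b122)
  $ pfexec init 6                        # reboot into the new boot environment
  ...
  $ zpool status pool | grep 'scrub:'    # after reboot: the scrub has restarted
                                         # from the beginning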
On Mon, Sep 7, 2009 at 15:59, Henrik Johansson wrote:
> Hello Will,
> On Sep 7, 2009, at 3:42 PM, Will Murnane wrote:
>
>> What can cause this kind of behavior, and how can I make my pool
>> finish scrubbing?
>
> No idea what is causing this but did you try to stop the scrub?
I haven't done so yet.
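If I do decide to stop it, my understanding is that stopping (and later
restarting) the scrub would look something like this (a sketch; "pool"
is assumed):

  $ pfexec zpool scrub -s pool    # -s stops the scrub that is in progress
  $ pfexec zpool scrub pool       # kick off a fresh scrub later on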
On Mon, Sep 7, 2009 at 12:05, Chris Gerhard wrote:
> Looks like this bug:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
>
> Workaround: Don't run zpool status as root.
I'm not, and yet the scrub continues. To be more specific, here's a
complete current interaction with zpool status:
Hello Will,
On Sep 7, 2009, at 3:42 PM, Will Murnane wrote:
> What can cause this kind of behavior, and how can I make my pool
> finish scrubbing?
No idea what is causing this, but did you try to stop the scrub? If so,
what happened? (Might not be a good idea, since this is not a normal
state?)
Looks like this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
Workaround: Don't run zpool status as root.
--chris
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Will Murnane
Sent: 7 September 2009 16:42
To: ZFS Mailing List
Subject: [zfs-discuss] This is the scrub that never ends...
I have a pool composed of a single raidz2 vdev, which is currently
degraded (missing a disk):
config:

        NAME        STATE     READ WRITE CKSUM
        pool        DEGRADED     0     0     0
          raidz2    DEGRADED     0     0     0
            c8d1    ONLINE       0     0     0
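For reference, the commands involved in scrubbing and eventually
replacing the missing disk look roughly like this (a sketch; the device
names on the replace line are placeholders, not the real ones):

  $ pfexec zpool scrub pool                 # start (or restart) the scrub
  $ zpool status -v pool                    # watch progress and any errors
  $ pfexec zpool replace pool c8d0 c9d1     # placeholder names: swap the
                                            # missing disk for a new one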