On Jun 23, 2009, at 11:50 AM, Richard Elling wrote:
(2) is there some reasonable way to read in multiples of these
blocks in a single IOP? Theoretically, if the blocks are in
chronological creation order, they should be (relatively)
sequential on the drive(s). Thus, ZFS should be able […]

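As an aside on what "multiple blocks in a single IOP" could look like: if the set of blocks to repair is known and roughly sequential, adjacent extents can be merged before the reads are issued. The following is a minimal, hypothetical sketch in Python; the extent list, gap threshold, and size cap are assumptions for illustration, not ZFS code or tunables.

    # Hypothetical sketch: coalesce nearly-adjacent block reads into one I/O.
    # "extents" is an assumed list of (offset, length) tuples in bytes, sorted
    # by on-disk offset; "max_gap" is the largest hole we are willing to read
    # through, and "max_io" caps the size of a single request.
    def coalesce(extents, max_gap=64 * 1024, max_io=1024 * 1024):
        ios = []
        cur_off, cur_len = extents[0]
        for off, length in extents[1:]:
            gap = off - (cur_off + cur_len)
            if gap <= max_gap and (off + length) - cur_off <= max_io:
                # Extend the current request to cover this block as well.
                cur_len = (off + length) - cur_off
            else:
                ios.append((cur_off, cur_len))
                cur_off, cur_len = off, length
        ios.append((cur_off, cur_len))
        return ios

    # Example: three 8 KB blocks written close together collapse into one read.
    print(coalesce([(0, 8192), (16384, 8192), (40960, 8192)]))
    # -> [(0, 49152)]

Whether such batching helps in practice depends on how sequential "chronological creation order" really is on an aged, fragmented pool.
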
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors, and of RAIDZ[2]?
I've seen some mention that it goes in chronological order (which, to me,
means that the metadata must be read first) of file creation, and that
only use[…]

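To sketch what "chronological order via the metadata" means in practice: ZFS resilvers by walking the pool's block-pointer tree from the top down, so only allocated blocks are visited, and copy-on-write birth times let unchanged subtrees be pruned. The snippet below is a simplified, hypothetical illustration only; the dict layout and the repair_block callback are made up for the example and are not the actual resilver code.

    # Simplified illustration of a metadata-driven resilver walk
    # (hypothetical structures, not the actual ZFS implementation).
    # A block pointer is modelled as a dict with the txg it was born in
    # and, for metadata blocks, a list of child pointers.
    def resilver(bp, last_good_txg, repair_block):
        # Copy-on-write means a parent's birth txg is never older than its
        # children's, so an old-enough subtree can be skipped entirely.
        if bp is None or bp["birth_txg"] <= last_good_txg:
            return
        repair_block(bp)                      # rebuild from the mirror / parity
        for child in bp.get("children", []):  # recurse only through metadata
            resilver(child, last_good_txg, repair_block)

    # Example: a disk that missed everything after txg 100.
    root = {"birth_txg": 120, "children": [
        {"birth_txg": 80,  "children": []},   # unchanged since txg 100: skipped
        {"birth_txg": 115, "children": []},   # newer: repaired
    ]}
    resilver(root, 100, lambda b: print("repair txg", b["birth_txg"]))
    # -> repair txg 120, repair txg 115

For a freshly replaced disk the cutoff is effectively zero, so every allocated block is visited once and free space is never touched. The flip side, and the reason a pool holding ~170M small files resilvers slowly, is that the visit order follows the tree rather than disk offsets, so the surviving disks see mostly small random reads.
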
On Mon, 2009-06-22 at 06:06 -0700, Richard Elling wrote:
> Nevertheless, in my lab testing, I was not able to create a random-enough
> workload to not be write limited on the reconstructing drive. Anecdotal
> evidence shows that some systems are limited by the random reads.
Systems I've run which […]

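The read-limited vs. write-limited distinction comes down to a simple bottleneck argument: each repaired block costs a (often random) read on the surviving side and a write on the reconstructing drive, and the slower of the two streams sets the pace. A rough back-of-envelope model, with every number an assumption rather than a measurement:

    # Back-of-envelope resilver-time model (all figures are assumptions).
    blocks     = 170e6   # roughly one block per small file, as in the
                         # ~170M-file x4500 report in this thread
    read_iops  = 300.0   # random-read IOPS available across surviving disks
    write_iops = 250.0   # small-write IOPS the reconstructing drive sustains

    read_limited  = blocks / read_iops    # seconds if random reads dominate
    write_limited = blocks / write_iops   # seconds if the new drive's writes dominate

    seconds = max(read_limited, write_limited)
    print("estimated resilver time: %.1f days" % (seconds / 86400))
    # With these assumptions: ~7.9 days, the same order of magnitude as the
    # week-long resilver reported in this thread.
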
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson wrote:
> However, it is a bit disconcerting to have to run with reduced data
> protection for an entire week. While I am certainly not going back to
> UFS, it seems like it should be at least theoretically possible to do
> this several orders of magnitude faster.

On Jun 21, 2009, at 8:57 PM, Richard Elling wrote:
Stuart Anderson wrote:
> It is currently taking ~1 week to resilver an x4500 running S10U6 […]
Wow, that is impressive. There is zero chance of doing that with a
manageable number of UFS file systems.

Stuart Anderson wrote:
It is currently taking ~1 week to resilver an x4500 running S10U6
(recently patched), with ~170M small files on ~170 datasets, after a
disk failure/replacement, i.e.,

scrub: resilver in progress for 53h47m, 30.72% done, 121h19m to go

Is there anything that can be tuned to improve this performance?
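As a quick sanity check, the "to go" figure in that status line is just a linear extrapolation from the fraction completed so far; the reported numbers are self-consistent:

    # Sanity check of the status line: remaining time is a linear extrapolation.
    elapsed_h = 53 + 47 / 60        # 53h47m elapsed so far
    done      = 0.3072              # 30.72% complete
    total_h   = elapsed_h / done    # ~175.1 h projected total
    print("to go: %.0fh%02.0fm" % divmod((total_h - elapsed_h) * 60, 60))
    # -> about 121h18m, matching the reported 121h19m to within the
    #    rounding of the 30.72% figure
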