On Mon, May 17, 2010 at 03:12:44PM -0700, Erik Trimble wrote:
> On Mon, 2010-05-17 at 12:54 -0400, Dan Pritts wrote:
> > On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> > > Resilver does a whole lot of random I/O itself, not bulk reads.  It
> > > reads the filesystem tree, not "block 0, block 1, block 2...".  You
> > > won't get 60 MB/s sustained, not even close.
> > 
> > Even with large, unfragmented files?  
> > 
> > danno
> > --
> > Dan Pritts, Sr. Systems Engineer
> > Internet2
> > office: +1-734-352-4953 | mobile: +1-734-834-7224
> 
> Having large, unfragmented files will certainly help keep sustained
> throughput up.  But you also have to consider how many deletions have
> been done on the pool.
> 
> For instance, let's say you wrote files A, B, and C one right after
> another, and they're all big files.  Doing a resilver, you'd be pretty
> well off on getting reasonable throughput reading A, then B, then C,
> since they're going to be contiguous on the drive (both internally, and
> across the three files).  However, if you later deleted B and wrote a
> file D (where D is smaller than B) into B's old space, then the
> resilver seeks to A, reads A, seeks forward to C, reads C, seeks back
> to D, etc.
> 
> Thus, you'll get good throughput for resilver on these drives pretty
> much in just ONE case:  large files with NO deletions.  If you're using
> them for write-once/read-many/no-delete archives, then you're OK.
> Anything else is going to suck.
> 
> :-)
> 
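Erik's A/B/C/D example above can be turned into a little toy model in a
few lines of Python (just an illustration of the layout effect, nothing
to do with real ZFS internals).  Treat each file as a (start_block,
length) extent and model the resilver as reading extents in creation
order, counting how often the head has to move:

# Toy model: count non-contiguous transitions ("seeks") when extents
# are read back in creation order.
def seeks(extents):
    count = 0
    pos = None
    for start, length in extents:
        if pos is not None and start != pos:
            count += 1          # next extent isn't where the head already is
        pos = start + length
    return count

# A, B, C written back to back, nothing ever deleted:
contiguous = [(0, 100), (100, 100), (200, 100)]    # A, B, C

# B deleted, smaller D written into B's old hole; creation order is now
# A, C, D, but D sits physically between A and C:
fragmented = [(0, 100), (200, 100), (100, 50)]     # A, C, D

print("no deletions:     ", seeks(contiguous), "seek(s)")   # 0
print("delete + rewrite: ", seeks(fragmented), "seek(s)")   # 2
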

So basically, if you have a lot of small files with a lot of changes
and deletions, resilver is going to be really slow.
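
A quick back-of-envelope, using assumed ballpark numbers for a 7200 RPM
disk (roughly 100 random IOPS, ~60 MB/s sequential; not measured), shows
how far tree-ordered random reads fall behind sequential ones:

seq_mb_s    = 60               # assumed sustained sequential read, MB/s
random_iops = 100              # assumed ~10 ms per random seek + read

for payload in (128 * 1024, 8 * 1024):    # big records vs. small files
    mb_s = random_iops * payload / 1e6
    print(f"{payload // 1024:>3} KiB random reads: {mb_s:4.1f} MB/s "
          f"(vs. {seq_mb_s} MB/s sequential)")

# 128 KiB random reads: 13.1 MB/s (vs. 60 MB/s sequential)
#   8 KiB random reads:  0.8 MB/s (vs. 60 MB/s sequential)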

Sounds like traditional RAID would be better/faster to rebuild in this
case, since it just copies the disks block by block, sequentially.

-- Pasi

