On 14.06.11 15:12, Rasmus Fauske wrote:
On 14.06.2011 14:06, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rasmus Fauske

I want to replace some slow consumer drives with new edc re4 ones, but
when I do a replace it needs to scan the full pool, not only that
disk set (or just the old drive).

Is this normal? (The speed is always slow at the start, so that's not
what I am wondering about, but rather that it needs to scan all of my
18.7T to replace one drive.)

The disk config:
    pool: tank
   state: ONLINE
status: One or more devices is currently being resilvered. The pool will
          continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
   scan: resilver in progress since Tue Jun 14 10:03:37 2011
          16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated time)
          388M resilvered, 0.09% done
config:

          NAME             STATE     READ WRITE CKSUM
          tank             ONLINE       0     0     0
            raidz2-1       ONLINE       0     0     0
              c10t21d0     ONLINE       0     0     0
              replacing-1  ONLINE       0     0     0
                c10t35d0   ONLINE       0     0     0
                c10t22d0   ONLINE       0     0     0  (resilvering)
              c10t23d0     ONLINE       0     0     0
              c10t24d0     ONLINE       0     0     0
              c10t25d0     ONLINE       0     0     0
              c10t26d0     ONLINE       0     0     0
              c10t27d0     ONLINE       0     0     0
Only this raidz2 vdev is resilvering.  The other raidz2 vdevs are idle.
You can verify this with the command:
    zpool iostat 30
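For a per-disk breakdown, something like the following should work (a sketch; `-v` adds per-vdev and per-disk rows, and the trailing number is the sampling interval in seconds):

```shell
# Pool-wide throughput, sampled every 30 seconds
zpool iostat tank 30

# Per-vdev/per-disk breakdown: only the disks in the vdev that is
# resilvering (raidz2-1 here) should show sustained read activity
zpool iostat -v tank 30
```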

Yes, but it still wants to scan all of the data in the pool:

action: Wait for the resilver to complete.
    scan: resilver in progress since Tue Jun 14 10:03:37 2011
16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated time)

Each raidz2 vdev is 7 x 1 TB disks, around 50% full. So shouldn't it scan only that vdev's data, and not the full pool that holds around 18T?
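The status output itself confirms that the scan denominator is the whole pool, not one vdev: the reported 0.09% is simply 16.8G divided by 18.7T (assuming binary units, 1T = 1024G, as zpool reports). A quick sanity check:

```shell
# 16.8G scanned out of 18.7T; zpool uses binary units, so 1T = 1024G
awk 'BEGIN { printf "%.2f%%\n", 16.8 / (18.7 * 1024) * 100 }'
# prints 0.09%
```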
This is what I experienced as well. I had a zpool made up of 16 mirrors: 12 mirrors of 1 TB drives and 4 mirrors of 2 TB drives. When I started to swap out the 1 TB drives for 2 TB ones, ZFS didn't read just from the corresponding drive of the mirrored vdev, but from all drives in the zpool.

iostat showed that all disks were nearly equally busy. The scan started very slowly and picked up speed after some time. Interestingly, I was able to swap a number of 1 TB drives for 2 TB ones in parallel - I think I had up to 8 drives in progress at once - so I was able to complete the whole task in less than 48 hours.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
