Sorry in advance if this has already been discussed, but I did not
find it in my archives of the list.

        According to the ZFS documentation, a resilver operation
includes what is effectively a dirty region log (DRL), so that if the
resilver is interrupted (by a snapshot or a reboot) it can continue
where it left off. Unfortunately, that is not what I have observed.
Our configuration consists of many 512 GB LUNs presented from a
hardware RAID array. These are used in mirrored pairs (each half on a
different array) to build up data volumes for end users. Some of the
data sets are small (one 512 GB pair is more than sufficient), while
others are large (11 mirrored pairs for about 5.5 TB). We take
snapshots for rapid recovery of user-damaged files, and backups using
NetBackup as often as we can for DR. When a snapshot happens, any
outstanding resilver operation appears to restart from the beginning.
I have seen this with both Solaris 10U3 and 10U6. Is this a zpool
version 4 problem that upgrading the zpools to version 10 will fix
(they started on the 10U3 system and moved to a 10U6 system
recently)? If not, what is the solution, so that we can take
snapshots more frequently than the time it takes to resilver a vdev?
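For reference, checking the on-disk pool versions is straightforward; this is only a sketch (the pool name "tank" is a placeholder), and note that `zpool upgrade` is one-way, so an upgraded pool can no longer be imported on an older release:

```shell
#!/bin/sh
# Sketch: check on-disk zpool versions before deciding whether an
# upgrade might help.  "tank" is a placeholder pool name.
if command -v zpool >/dev/null 2>&1; then
    zpool upgrade          # list pools running an older on-disk version
    zpool upgrade -v       # describe the features each version adds
    # zpool upgrade tank   # one-way; run only once the pool is healthy
else
    echo "zpool not available on this host; sketch only"
fi
```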

Sample `zpool status | grep progress` output, captured at five-minute
intervals (the timestamps below are five minutes apart). Snapshots
start being taken at 00:00, but it takes a while for the script to
walk through all the zpools and take the snapshots (we also clean up
old snapshots by removing any over 5 weeks old). Notes inline ...
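For context, the status sampler and the snapshot/cleanup pass look roughly like the sketch below (pool names, snapshot naming, and the exact cleanup logic are simplified, not the actual production script):

```shell
#!/bin/sh
# Sketch of the periodic snapshot-and-cleanup pass (simplified).
# The status sampler itself is just:
#   while :; do date; zpool status | grep progress; sleep 300; done

FIVE_WEEKS=$((5 * 7 * 24 * 3600))       # cleanup threshold, in seconds
cutoff=$(( $(date +%s) - FIVE_WEEKS ))  # snapshots older than this go
echo "cleanup cutoff (epoch seconds): $cutoff"

# Bail out harmlessly on a host without ZFS, since this is only a sketch.
command -v zfs >/dev/null 2>&1 || exit 0

for pool in $(zpool list -H -o name); do
    zfs snapshot "$pool@$(date +%Y%m%d-%H%M)"
    # -Hp prints the creation property as raw epoch seconds, no headers.
    zfs list -H -t snapshot -o name -r "$pool" | while read -r snap; do
        created=$(zfs get -Hp -o value creation "$snap")
        [ "$created" -lt "$cutoff" ] && zfs destroy "$snap"
    done
done
```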

Two resilvers happily moving along; their pools had last been snapped at 06:00 ...

Wednesday, April 29, 2009 11:50:08 PM EDT
 scrub: resilver in progress for 17h47m, 98.44% done, 0h16m to go
 scrub: resilver in progress for 17h40m, 45.17% done, 21h27m to go
Wednesday, April 29, 2009 11:55:09 PM EDT
 scrub: resilver in progress for 17h52m, 98.44% done, 0h16m to go
 scrub: resilver in progress for 17h45m, 45.36% done, 21h23m to go

the resilver that was at 98+% seems to go away (as if it had
completed), but no, that zpool was just snapped

Thursday, April 30, 2009 12:00:10 AM EDT
 scrub: resilver in progress for 17h50m, 45.55% done, 21h19m to go
Thursday, April 30, 2009 12:05:11 AM EDT
 scrub: resilver in progress for 17h55m, 45.75% done, 21h15m to go

and now it is starting the resilver operation all over again; as of
9:00 it is about 21% done

Thursday, April 30, 2009 12:10:11 AM EDT
 scrub: resilver in progress for 0h1m, 0.05% done, 38h30m to go
 scrub: resilver in progress for 18h0m, 45.93% done, 21h11m to go
Thursday, April 30, 2009 12:15:12 AM EDT
 scrub: resilver in progress for 0h6m, 0.05% done, 201h47m to go
 scrub: resilver in progress for 18h5m, 46.11% done, 21h8m to go
Thursday, April 30, 2009 12:20:13 AM EDT
 scrub: resilver in progress for 0h11m, 0.13% done, 146h1m to go
 scrub: resilver in progress for 18h10m, 46.28% done, 21h5m to go

now the script gets around to snapping the zpool that was at 46+% done

Thursday, April 30, 2009 12:25:14 AM EDT
 scrub: resilver in progress for 0h16m, 0.26% done, 104h21m to go
Thursday, April 30, 2009 12:30:14 AM EDT
 scrub: resilver in progress for 0h21m, 0.26% done, 136h39m to go

and it appears to restart from the beginning; as of 9:00 it is about 14% done

Thursday, April 30, 2009 12:35:15 AM EDT
 scrub: resilver in progress for 0h26m, 0.26% done, 168h56m to go
 scrub: resilver in progress for 0h0m, 0.00% done, 18955036169h5m to go
Thursday, April 30, 2009 12:40:16 AM EDT
 scrub: resilver in progress for 0h31m, 0.26% done, 201h13m to go
 scrub: resilver in progress for 0h0m, 0.02% done, 17h56m to go
Thursday, April 30, 2009 12:45:17 AM EDT
 scrub: resilver in progress for 0h36m, 0.26% done, 233h30m to go
 scrub: resilver in progress for 0h5m, 0.03% done, 247h36m to go

This is on the 10U6 system. Previously, on the 10U3 system, I had to
stop the snapshots and let the resilvers complete, or they would
never finish (I watched them restart like this for over a week).

Here is the status as of about 9:00:

pkr...@xxxxxx:/home/pkraus> zpool status 11111
  pool: 11111
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 8h53m, 21.64% done, 32h13m to go
config:

        NAME                                         STATE     READ WRITE CKSUM
        antitrust                                    DEGRADED     0     0     0
          mirror                                     DEGRADED     0     0     0
            replacing                                DEGRADED     0     0     0
              c5t600015D0000602000000000000008212d0  FAULTED      0     0     0  too many errors
              c5t600C0FF00000000009278536638D9B03d0  ONLINE       0     0     0
            c5t600015D000060200000000000000A567d0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D00006020000000000000083AFd0    ONLINE       0     0     0
            c5t600015D000060200000000000000A56Ad0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D0000602000000000000008401d0    ONLINE       0     0     0
            c5t600015D000060200000000000000A56Ed0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D00006020000000000000084CFd0    ONLINE       0     0     0
            c5t600015D000060200000000000000A572d0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D000060200000000000000850Fd0    ONLINE       0     0     0
            c5t600015D000060200000000000000A575d0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D00006020000000000000084FAd0    ONLINE       0     0     0
            c5t600015D000060200000000000000A578d0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D00006020000000000000084E3d0    ONLINE       0     0     0
            c5t600015D000060200000000000000A57Cd0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D0000602000000000000008523d0    ONLINE       0     0     0
            c5t600015D000060200000000000000A57Fd0    ONLINE       0     0     0
          mirror                                     ONLINE       0     0     0
            c5t600015D000060200000000000000A655d0    ONLINE       0     0     0
            c5t600015D000060200000000000000A659d0    ONLINE       0     0     0

errors: No known data errors
pkr...@xxxxxx:/home/pkraus> zpool status 22222
  pool: 22222
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver in progress for 8h27m, 13.96% done, 52h9m to go
config:

        NAME                                          STATE     READ WRITE CKSUM
        investment_protection                         DEGRADED     0     0     0
          mirror                                      DEGRADED     0     0     0
            replacing                                 DEGRADED     0     0     0
              c10t600015D00006020000000000000081F8d0  UNAVAIL      0     0     0  cannot open
              c5t600C0FF0000000000927854ADE3ED105d0   ONLINE       0     0     0
            c5t600015D000060200000000000000A515d0     ONLINE       0     0     0
          mirror                                      ONLINE       0     0     0
            c5t600015D00006020000000000000083EDd0     ONLINE       0     0     0
            c5t600C0FF00000000009278536638D9B02d0     ONLINE       0     0     0
          mirror                                      ONLINE       0     0     0
            c5t600015D00006020000000000000083D9d0     ONLINE       0     0     0
            c5t600C0FF0000000000927854ADE3ED102d0     ONLINE       0     0     0
          mirror                                      DEGRADED     0     0     0
            c5t600015D0000602000000000000008455d0     ONLINE       0     0     0
            c10t600015D000060200000000000000A549d0    UNAVAIL      0     0     0  cannot open
          mirror                                      ONLINE       0     0     0
            c5t600015D00006020000000000000084B7d0     ONLINE       0     0     0
            c5t600015D000060200000000000000A54Cd0     ONLINE       0     0     0
          mirror                                      DEGRADED     0     0     0
            c5t600015D00006020000000000000084A2d0     ONLINE       0     0     0
            c10t600015D000060200000000000000A550d0    UNAVAIL      0     0     0  cannot open
          mirror                                      DEGRADED     0     0     0
            c5t600015D000060200000000000000848Ed0     ONLINE       0     0     0
            c10t600015D000060200000000000000A55Ed0    UNAVAIL      0     0     0  cannot open
          mirror                                      DEGRADED     0     0     0
            c5t600015D0000602000000000000008538d0     ONLINE       0     0     0
            c10t600015D000060200000000000000A564d0    UNAVAIL      0     0     0  cannot open
          mirror                                      DEGRADED     0     0     0
            c5t600015D0000602000000000000008551d0     ONLINE       0     0     0
            c10t600015D000060200000000000000A561d0    UNAVAIL      0     0     0  cannot open
          mirror                                      DEGRADED     0     0     0
            c10t600015D000060200000000000000A6B4d0    UNAVAIL      0     0     0  cannot open
            c5t600015D000060200000000000000A6BAd0     ONLINE       0     0     0
          mirror                                      DEGRADED     0     0     0
            c5t600015D000060200000000000000A6BEd0     ONLINE       0     0     0
            c10t600015D000060200000000000000A6C1d0    UNAVAIL      0     0     0  cannot open

errors: No known data errors
pkr...@xxxxxx:/home/pkraus>

The second zpool clearly has multiple failures, and we are replacing
the disks one by one. Would we be better off replacing them all at once?
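What I have in mind for "all at once" is issuing every `zpool replace` up front, so that a single resilver pass covers all of the new disks. A sketch, with placeholder pool and device names:

```shell
#!/bin/sh
# Sketch: kick off all the replacements up front so that one resilver
# pass covers every new disk.  Pool and device names are placeholders.
POOL=tank
command -v zpool >/dev/null 2>&1 || { echo "zpool not available; sketch only"; exit 0; }

# old-device / new-device pairs (placeholders, one pair per line)
while read -r old new; do
    zpool replace "$POOL" "$old" "$new"
done <<'EOF'
c10tOLDAAAAd0 c5tNEWAAAAd0
c10tOLDBBBBd0 c5tNEWBBBBd0
EOF

zpool status "$POOL"
```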

This is an active system with both backup jobs and end users
generating I/O on an almost continuous basis.

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Lame Duck Business Manager, Delta-Xi cast of Alpha-Psi-Omega @ RPI
-> Technical Advisor, Lunacon 2010 (http://www.lunacon.org/)
-> Technical Advisor, RPI Players
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
