+1

On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski <mi...@task.gda.pl> wrote:

>
> fyi
>
> --
> Robert Milkowski
> http://milek.blogspot.com
>
>
> -------- Original Message --------
> Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
> Date: Mon, 26 Jul 2010 08:38:22 -0600
> From: Tim Haley <tim.ha...@oracle.com>
> To: psarc-...@sun.com
> CC: zfs-t...@sun.com
>
> I am sponsoring the following case for George Wilson.  Requested binding
> is micro/patch.  Since this is a straight-forward addition of a command
> line option, I think it qualifies for self review.  If an ARC member
> disagrees, let me know and I'll convert to a fast-track.
>
> Template Version: @(#)sac_nextcase 1.70 03/30/10 SMI
> This information is Copyright (c) 2010, Oracle and/or its affiliates.
> All rights reserved.
> 1. Introduction
>     1.1. Project/Component Working Name:
>          zpool import despite missing log
>     1.2. Name of Document Author/Supplier:
>          Author:  George Wilson
>     1.3. Date of This Document:
>          26 July, 2010
>
> 4. Technical Description
>
> OVERVIEW:
>
>          ZFS maintains a GUID (globally unique identifier) on each device,
>          and the sum of all GUIDs of a pool is stored in the ZFS uberblock.
>          This sum is used to determine the availability of all vdevs
>          within a pool when a pool is imported or opened.  Pools that
>          contain a separate intent log device (e.g. a slog) will fail to
>          import when that device is removed or is otherwise unavailable.
>          This proposal aims to address this particular issue.
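
To make the guid-sum idea above a bit more concrete, here is a minimal
sketch in C.  It is purely illustrative; the struct layout and the
function names (vdev_t, vdev_guid_sum, guid_sum_ok) are invented for the
example and are not the actual ZFS implementation.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical, simplified vdev node: a GUID plus optional children. */
    typedef struct vdev {
        uint64_t     vd_guid;   /* per-device GUID */
        struct vdev  **vd_child; /* NULL-terminated child array */
    } vdev_t;

    /* Add up the GUIDs of a vdev and everything below it. */
    static uint64_t
    vdev_guid_sum(const vdev_t *vd)
    {
        uint64_t sum = vd->vd_guid;

        if (vd->vd_child != NULL) {
            for (size_t i = 0; vd->vd_child[i] != NULL; i++)
                sum += vdev_guid_sum(vd->vd_child[i]);
        }
        return (sum);
    }

    /*
     * At import/open time the sum computed from the devices that were
     * actually found must match the sum recorded in the uberblock; a
     * missing slog makes the sums differ, so the import fails.
     */
    static int
    guid_sum_ok(const vdev_t *root, uint64_t ub_guid_sum)
    {
        return (vdev_guid_sum(root) == ub_guid_sum);
    }
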
>
> PROPOSED SOLUTION:
>
>          This fast-track introduces a new command line flag to the
>          'zpool import' sub-command.  This new option, '-m', allows
>          pools to be imported even when a log device is missing.  The
>          contents of that log device are obviously discarded and the
>          pool will operate as if the log device had been offlined.
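
A rough sketch of the policy the new flag implies, again with invented
names (tl_vdev_t, import_allowed) rather than the real import code: with
'-m' a guid-sum mismatch is tolerated as long as every missing top-level
device is a log device, while a missing data vdev still blocks the import.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical summary of one top-level vdev, as seen at import time. */
    typedef struct tl_vdev {
        uint64_t guid;
        bool     is_log;   /* separate intent log device (slog)? */
        bool     present;  /* was the device found during import? */
    } tl_vdev_t;

    /* Decide whether the import may proceed given what was (not) found. */
    static bool
    import_allowed(const tl_vdev_t *tlv, size_t ntlv, bool allow_missing_log)
    {
        for (size_t i = 0; i < ntlv; i++) {
            if (tlv[i].present)
                continue;
            if (!tlv[i].is_log)
                return (false);    /* missing data vdev: never allowed */
            if (!allow_missing_log)
                return (false);    /* lost slog: requires -m */
        }
        return (true);             /* import proceeds, slog contents lost */
    }

As the examples further down show, a pool imported this way comes up
DEGRADED rather than ONLINE until the log device is attached and onlined
again.
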
>
> MANPAGE DIFFS:
>
>        zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
> -          [-D] [-f] [-R root] [-n] [-F] -a
> +          [-D] [-f] [-m] [-R root] [-n] [-F] -a
>
>
>        zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
> -          [-D] [-f] [-R root] [-n] [-F] pool | id [newpool]
> +          [-D] [-f] [-m] [-R root] [-n] [-F] pool | id [newpool]
>
>        zpool import [-o mntopts] [-o property=value] ... [-d dir |
> -     -c cachefile] [-D] [-f] [-n] [-F] [-R root] -a
> +     -c cachefile] [-D] [-f] [-m] [-n] [-F] [-R root] -a
>
>            Imports all  pools  found  in  the  search  directories.
>            Identical to the previous command, except that all pools
>
> +         -m
> +
> +            Allows a pool to import when there is a missing log device
>
> EXAMPLES:
>
> 1). Configuration with a single intent log device:
>
> # zpool status tank
>    pool: tank
>   state: ONLINE
>    scan: none requested
> config:
>
>              NAME        STATE     READ WRITE CKSUM
>              tank        ONLINE       0     0     0
>                c7t0d0    ONLINE       0     0     0
>              logs
>                c5t0d0    ONLINE       0     0     0
>
> errors: No known data errors
>
> # zpool import tank
> The devices below are missing, use '-m' to import the pool anyway:
>              c5t0d0 [log]
>
> cannot import 'tank': one or more devices is currently unavailable
>
> # zpool import -m tank
> # zpool status tank
>    pool: tank
>   state: DEGRADED
> status: One or more devices could not be opened.  Sufficient replicas exist for
>          the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>     see: http://www.sun.com/msg/ZFS-8000-2Q
>    scan: none requested
> config:
>
>          NAME                   STATE     READ WRITE CKSUM
>          tank                   DEGRADED     0     0     0
>            c7t0d0               ONLINE       0     0     0
>          logs
>            1693927398582730352  UNAVAIL      0     0     0  was /dev/dsk/c5t0d0
>
> errors: No known data errors
>
> 2). Configuration with mirrored intent log device:
>
> # zpool add tank log mirror c5t0d0 c5t1d0
> # zpool status tank
>    pool: tank
>   state: ONLINE
>    scan: none requested
> config:
>
>          NAME        STATE     READ WRITE CKSUM
>          tank        ONLINE       0     0     0
>            c7t0d0    ONLINE       0     0     0
>          logs
>            mirror-1  ONLINE       0     0     0
>              c5t0d0  ONLINE       0     0     0
>              c5t1d0  ONLINE       0     0     0
>
> errors: No known data errors
>
> # zpool import 429789444028972405
> The devices below are missing, use '-m' to import the pool anyway:
>              mirror-1 [log]
>                c5t0d0
>                c5t1d0
>
> # zpool import -m tank
> # zpool status tank
>    pool: tank
>   state: DEGRADED
> status: One or more devices could not be opened.  Sufficient replicas exist for
>          the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>     see: http://www.sun.com/msg/ZFS-8000-2Q
>    scan: none requested
> config:
>
>          NAME                      STATE     READ WRITE CKSUM
>          tank                      DEGRADED     0     0     0
>            c7t0d0                  ONLINE       0     0     0
>          logs
>            mirror-1                UNAVAIL      0     0     0  insufficient replicas
>              46385995713041169     UNAVAIL      0     0     0  was /dev/dsk/c5t0d0
>              13821442324672734438  UNAVAIL      0     0     0  was /dev/dsk/c5t1d0
>
> errors: No known data errors
>
> 6. Resources and Schedule
>     6.4. Steering Committee requested information
>         6.4.1. Consolidation C-team Name:
>                 ON
>     6.5. ARC review type: Automatic
>     6.6. ARC Exposure: open
>
> _______________________________________________
> opensolaris-arc mailing list
> opensolaris-...@opensolaris.org
>
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
