>>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:

     >> a fourth 500gb disk and add
     >> it to the pool as the second vdev, what happens when that
     >> fourth disk has a hardware failure?

     t> If you set copies=2, and get lucky enough
     t> that copies of every block on the standalone are copied to the
     t> raidz vdev, you might be able to survive,

No, of course you won't survive.  Just try it with file vdevs before
pontificating about it.

-----8<----
bash-3.00# mkfile 64m t0
bash-3.00# mkfile 64m t1
bash-3.00# mkfile 64m t2
bash-3.00# mkfile 64m t00
bash-3.00# pwd -P
/usr/export
bash-3.00# zpool create foolpool raidz1 /usr/export/t{0..2}
bash-3.00# zpool add foolpool /usr/export/t00
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is file
bash-3.00# zpool add -f foolpool /usr/export/t00
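[-f shuts up the warning: t00 joins as a top-level vdev with no redundancy of its own]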
bash-3.00# zpool status -v foolpool
  pool: foolpool
 state: ONLINE
 scrub: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        foolpool            ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            /usr/export/t0  ONLINE       0     0     0
            /usr/export/t1  ONLINE       0     0     0
            /usr/export/t2  ONLINE       0     0     0
          /usr/export/t00   ONLINE       0     0     0

errors: No known data errors
bash-3.00# zfs set copies=2 foolpool
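[copies=2 only applies to blocks written from here on, hence setting it before loading any data]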
bash-3.00# cd /
bash-3.00# pax -rwpe sbin foolpool/
bash-3.00# > /usr/export/t00
bash-3.00# pax -w foolpool/ > /dev/null
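[pax -rwpe copies /sbin into the pool, the redirection zeroes out the t00 "disk", and archiving the pool to /dev/null forces every block to be read back]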
bash-3.00# zpool status -v foolpool
  pool: foolpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        foolpool            DEGRADED     4     0    21
          raidz1            ONLINE       0     0     0
            /usr/export/t0  ONLINE       0     0     0
            /usr/export/t1  ONLINE       0     0     0
            /usr/export/t2  ONLINE       0     0     0
          /usr/export/t00   DEGRADED     4     0    21  too many errors

errors: No known data errors
bash-3.00# zpool offline foolpool /usr/export/t00
cannot offline /usr/export/t00: no valid replicas
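[a top-level vdev with no redundancy can never be offlined, so there is no way to detach the dead "disk" cleanly]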
bash-3.00# zpool export foolpool
panic[cpu0]/thread=2a1016b7ca0: assertion failed: vdev_config_sync(rvd, txg) == 
0, file: ../../common/fs/zfs/spa.c, line: 3125

000002a1016b7850 genunix:assfail+78 (7b72c668, 7b72b680, c35, 183d800, 1285c00, 
0)
  %l0-3: 0000000000000422 0000000000000081 000003001df5e580 0000000070170880
  %l4-7: 0000060016076c88 0000000000000000 0000000001887800 0000000000000000
000002a1016b7900 zfs:spa_sync+244 (3001df5e580, 42, 30043434e30, 7b72c400, 
7b72b400, 4)
  %l0-3: 0000000000000000 000003001df5e6b0 000003001df5e670 00000600155cce80
  %l4-7: 0000030056703040 0000060013659200 000003001df5e708 00000000018c2e98
000002a1016b79c0 zfs:txg_sync_thread+120 (60013659200, 42, 2a1016b7a70, 
60013659320, 60013659312, 60013659310)
  %l0-3: 0000000000000000 00000600136592d0 00000600136592d8 0000060013659316
  %l4-7: 0000060013659314 00000600136592c8 0000000000000043 0000000000000042

syncing file systems... done
[...first reboot...]
WARNING: ZFS replay transaction error 30, dataset boot/usr, seq 0x134c, txtype 9

NOTICE: iscsi: SendTarget discovery failed (11)         [``patiently waits'' forever]

~#Type  'go' to resume
{0} ok boot -m milestone=none
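[boot with no services, so the hung iscsi discovery addresses can be removed below]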
Resetting ...
[...second reboot...]

# /sbin/mount -o remount,rw /
# /sbin/mount /usr
# iscsiadm remove discovery-address 10.100.100.135
# iscsiadm remove discovery-address 10.100.100.138
# cd /usr/export 
# mkdir hide
# mv t0 t1 t2 t00 hide
mv: cannot access t00                                   [haha ZFS.]
# sync
# reboot
syncing file systems... done
[...third reboot...]
SunOS Release 5.11 Version snv_71 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE: mddb: unable to get devid for 'sd', 0xffffffff
NOTICE: mddb: unable to get devid for 'sd', 0xffffffff
NOTICE: mddb: unable to get devid for 'sd', 0xffffffff
WARNING: /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],3 (ohci0): Connecting 
device on port 4 failed
Hostname: terabithia.th3h.inner.chaos
/usr/sbin/pmconfig: "/etc/power.conf" line 37, cannot find ufs mount point for 
"/usr/.CPR"
Reading ZFS config: done.
Mounting ZFS filesystems: (9/9)

terabithia.th3h.inner.chaos console login: root
Password:
Nov 20 13:09:30 terabithia.th3h.inner.chaos login: ROOT LOGIN /dev/console
Last login: Mon Aug 18 03:04:12 on console
Sun Microsystems Inc.   SunOS 5.11      snv_71  October 2007
You have new mail.
# exec bash
bash-3.00# zpool status -v foolpool
  pool: foolpool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config: 

        NAME                STATE     READ WRITE CKSUM
        foolpool            UNAVAIL      0     0     0  insufficient replicas
          raidz1            UNAVAIL      0     0     0  insufficient replicas
            /usr/export/t0  UNAVAIL      0     0     0  cannot open
            /usr/export/t1  UNAVAIL      0     0     0  cannot open
            /usr/export/t2  UNAVAIL      0     0     0  cannot open
          /usr/export/t00   UNAVAIL      0     0     0  cannot open
bash-3.00# cd /usr/export
bash-3.00# mv hide/* .
bash-3.00# zpool clear foolpool
cannot open 'foolpool': pool is unavailable
bash-3.00# zpool status -v foolpool
  pool: foolpool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config: 

        NAME                STATE     READ WRITE CKSUM
        foolpool            UNAVAIL      0     0     0  insufficient replicas
          raidz1            ONLINE       0     0     0
            /usr/export/t0  ONLINE       0     0     0
            /usr/export/t1  ONLINE       0     0     0
            /usr/export/t2  ONLINE       0     0     0
          /usr/export/t00   UNAVAIL      0     0     0  corrupted data
bash-3.00# zpool export foolpool
bash-3.00# zpool import -d /usr/export foolpool
internal error: Value too large for defined data type
Abort (core dumped)
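["Value too large for defined data type" is EOVERFLOW, presumably because the mystery t00 below has come back smaller than its label claims]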
bash-3.00# rm core
bash-3.00# zpool import -d /usr/export
  pool: foolpool
    id: 8355048046034000632
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        foolpool            UNAVAIL  insufficient replicas
          raidz1            ONLINE
            /usr/export/t0  ONLINE
            /usr/export/t1  ONLINE
            /usr/export/t2  ONLINE
          /usr/export/t00   UNAVAIL  corrupted data
bash-3.00# ls -l
total 398218
drwxr-xr-x   2 root     root           2 Nov 20 13:10 hide
drwxr-xr-x   6 root     root           6 Oct  5 02:04 nboot
-rw------T   1 root     root     67108864 Nov 20 12:50 t0
-rw------T   1 root     root     67028992 Nov 20 12:50 t00  [<- lol*2.  where did THAT come from?  and slightly too small, see?]
-rw------T   1 root     root     67108864 Nov 20 12:50 t1
-rw------T   1 root     root     67108864 Nov 20 12:50 t2
-----8<----
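
If you want to check where the ditto copies actually landed before betting
a pool on them, zdb will print the block pointers.  A rough sketch (the
object number 7 here is made up; pick a real one out of 'zdb -dddd foolpool'
first):

-----8<----
bash-3.00# zdb -ddddd foolpool 7 | grep DVA
-----8<----

Each block pointer carries one DVA per copy, and the first field of a DVA
(<vdev:offset:asize>) is the top-level vdev id, so you can see at a glance
whether both copies of a block ended up on the same vdev.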

     t> They may both be on one disk, or they may not.  This is more
     t> to protect against corrupt blocks if you only have a single
     t> drive, than against losing an entire drive.

It's not that some of the data isn't spread across both drives.  It's
that ZFS won't let you get at ANY of the data, even the blocks that are
spread, because of a variety of sanity checks, core dumps, and kernel
panics.  copies=2 is a really silly feature, IMHO.
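
If you really must bolt a fourth disk onto a raidz pool, the non-silly way
is to give that new top-level vdev redundancy of its own instead of leaning
on copies=2, i.e. attach a mirror half to it.  Sketch in the same file-vdev
setup (t01 is a made-up fifth file):

-----8<----
bash-3.00# mkfile 64m /usr/export/t01
bash-3.00# zpool attach foolpool /usr/export/t00 /usr/export/t01
-----8<----

'zpool attach' turns the standalone t00 into a two-way mirror, so losing
either side leaves the pool DEGRADED instead of gone.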
