On Tue, 19 May 2009, Eric Schrock wrote:

> The latter half of the above statement is also incorrect. Should you
> find yourself in the double-failure described above, you will get an FMA
> fault that describes the nature of the problem and the implications. If
> the slog is truly dead, you can 'zpool clear' (or 'fmadm repair') the
> fault and use whatever data you still have in the pool. If the slog is
> just missing, you can insert it and continue without losing data. In no
> cases will ZFS silently continue without committed data.

How about the case where a slog device dies while a pool is not active?

I created a pool with one mirror pair and a slog, and then intentionally
corrupted the slog while the pool was exported (dd if=/dev/zero
of=/dev/dsk/<slog>).
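For completeness, the setup boils down to something like this (a sketch
rather than a paste; c0t3d0 as the slog is just an example device name,
while the mirror disks match the import output below):

-----
# create a pool from one mirror pair plus a separate intent log device
# (c0t3d0 is illustrative -- substitute whatever disk holds your slog)
zpool create export mirror c0t1d0 c0t2d0 log c0t3d0

# deactivate the pool, then overwrite the slog end to end
zpool export export
dd if=/dev/zero of=/dev/dsk/c0t3d0s0

# rescan for importable pools
zpool import
-----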
The pool is now inaccessible:

-----
r...@s10 ~ # zpool import
  pool: export
    id: 7254558150370674682
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and
        try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        export        UNAVAIL  missing device
          mirror      ONLINE
            c0t1d0    ONLINE
            c0t2d0    ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
-----

zpool clear doesn't help:

-----
r...@s10 ~ # zpool clear export
cannot open 'export': no such pool
-----

and there's no fault logged:

-----
r...@s10 ~ # fmdump
TIME                 UUID                                 SUNW-MSG-ID
fmdump: /var/fm/fmd/fltlog is empty
-----

How do you recover from this scenario?

BTW, you don't happen to have any insight into why slog removal hasn't
been implemented yet?

--
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768