comment at bottom...

Miles Nordin wrote:
>>>>>> "mb" == Matt Beebe <[EMAIL PROTECTED]> writes:
>>>>>>             
>
>     mb> Anyone know of a SATA and/or SAS HBA with battery backed write
>     mb> cache?
>
> I've never heard of a battery that's used for anything but RAID
> features.  It's an interesting question: if you use the controller in
> ``JBOD mode'' will it use the write cache or not?  I would guess not,
> but it might.  And if it doesn't, can you force it on, even by doing
> sneaky things like making 2-disk mirrors where 1 disk happens to be
> missing, thus wasting half the ports you bought but turning on the
> damned write cache?  I don't know.
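>
> (You can at least check what the disk itself claims, though this only
> covers the drive's onboard cache, not anything on the controller.  A
> rough sketch using format's expert mode, from memory, so take the
> exact menu names with a grain of salt:
>
>   format -e                 <-- pick the suspect disk, then
>   format> cache
>   cache> write_cache
>   write_cache> display
>
> whether that menu even appears depends on which driver the HBA puts
> the disk behind.)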
>
> The alternative is to get a battery-backed SATA slog like the gigabyte
> iram.  However, beware, because once you add a slog to a pool, you can
> never remove it.  You can't import the pool without the slog, not even
> DEGRADED, not even if you want ZFS to pretend the slog is empty, not
> even if the slog actually was empty.  IIRC (might be confused) Ross
> found the pool will mount at boot without the slog if it's listed in
> zpool.cache (why?  don't know, but I think he said it does), but once
> you export the pool there is no way to get it back into zpool.cache
> since zpool.cache is a secret binary config file.  Can you substitute
> any empty device for the missing slog?  nope---the slog has a secret
> binary header label on it.
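>
> (For reference, the file-backed test pool shown further down was built
> roughly like this; the sizes are a guess, only the /usr/vdev paths are
> taken from the transcript:
>
>   mkfile 64m /usr/vdev/d0 /usr/vdev/d1 /usr/vdev/slog
>   zpool create slogtest mirror /usr/vdev/d0 /usr/vdev/d1
>   zpool add slogtest log /usr/vdev/slog     <-- no way to undo this
>
> and the same ``zpool add ... log'' is how a slog gets welded onto a
> real pool.)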
>
> I'm guessing one of the reasons you wanted a non-RAID controller with
> a write cache was so that if the controller failed, and the exact same
> model wasn't available to replace it, most of your pool would still be
> readable with any random controller, modulo risk of corruption from
> the lost write cache.  so...with the slog, you don't have that,
> because there are magic irreplaceable bits stored on the slog without
> which your whole pool is useless.
>
> bash-3.00# zpool import -d /usr/vdev
>   pool: slogtest
>     id: 11808644862621052048
>  state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
>         slogtest          ONLINE
>           mirror          ONLINE
>             /usr/vdev/d0  ONLINE
>             /usr/vdev/d1  ONLINE
>         logs
>         slogtest          ONLINE
>           /usr/vdev/slog  ONLINE
> bash-3.00# mv vdev/slog .
> bash-3.00# zpool import -d /usr/vdev
>   pool: slogtest
>     id: 11808644862621052048
>  state: FAULTED
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
>         devices and try again.
>    see: http://www.sun.com/msg/ZFS-8000-6X
> config:
>
>         slogtest          UNAVAIL  missing device
>           mirror          ONLINE
>             /usr/vdev/d0  ONLINE
>             /usr/vdev/d1  ONLINE
>
>         Additional devices are known to be part of this pool, though their
>         exact configuration cannot be determined.
> bash-3.00# 
>
> damn.  ``no user-serviceable parts inside.''  however, if you were
> sneaky enough to save a backup copy of your empty slog to get around
> Solaris's obstinacy, maybe you can proceed:
>
> bash-3.00# gzip slog                            <-- save a copy of the exported empty slog
> bash-3.00# ls -l slog.gz
> -rw-r--r--   1 root     root      106209 Sep  3 16:17 slog.gz
> bash-3.00# gunzip < slog.gz > vdev/slog
> bash-3.00# zpool import -d /usr/vdev
>   pool: slogtest
>     id: 11808644862621052048
>  state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
>         slogtest          ONLINE
>           mirror          ONLINE
>             /usr/vdev/d0  ONLINE
>             /usr/vdev/d1  ONLINE
>         logs
>         slogtest          ONLINE
>           /usr/vdev/slog  ONLINE
> bash-3.00# zpool import -d /usr/vdev slogtest
> bash-3.00# pax -rwpe /usr/sfw/bin /slogtest
> ^C
> bash-3.00# zpool export slogtest
> bash-3.00# gunzip < slog.gz > vdev/slog          <-- wipe the slog
> bash-3.00# zpool import -d /usr/vdev slogtest
> bash-3.00# zfs list -r slogtest
> NAME       USED  AVAIL  REFER  MOUNTPOINT
> slogtest  18.1M  25.4M  17.9M  /slogtest
> bash-3.00# zpool scrub slogtest
> bash-3.00# zpool status slogtest
>   pool: slogtest
>  state: ONLINE
>  scrub: scrub completed with 0 errors on Wed Sep  3 16:23:44 2008
> config:
>
>         NAME              STATE     READ WRITE CKSUM
>         slogtest          ONLINE       0     0     0
>           mirror          ONLINE       0     0     0
>             /usr/vdev/d0  ONLINE       0     0     0
>             /usr/vdev/d1  ONLINE       0     0     0
>         logs              ONLINE       0     0     0
>           /usr/vdev/slog  ONLINE       0     0     0
>
> errors: No known data errors
> bash-3.00# 
>
> I'm not sure this will always work, because there probably wasn't
> anything in the slog when I wiped it.  But I guess it's better than
> ``restore your pool from backup'' because of the pedantry of some
> wallpaper tool and brittle windows-registry-style binary config files.
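>
> if you're going to rely on this trick, the time to save the copy is
> right after adding the slog, while it's still provably empty.  a
> sketch using the file vdevs from the test above (use dd instead of cp
> for a real device):
>
>   zpool export slogtest
>   cp /usr/vdev/slog /usr/vdev/slog.empty    <-- known-empty slog image
>   zpool import -d /usr/vdev slogtest
>
> restoring that image over a dead slog later still throws away any ZIL
> records that only ever made it to the slog, which is roughly the same
> exposure as losing a battery-backed write cache.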
>   

There are a number of fixes in the works to allow more options for
dealing with slogs.
http://bugs.opensolaris.org/search.do?process=1&type=&sortBy=relevance&bugStatus=&perPage=50&bugId=&keyword=&textSearch=slog+fault&category=kernel&subcategory=zfs&since=
If you can think of a new wrinkle, please file a bug.
 -- richard
