OK, I think perhaps I'm failing to explain myself.

I want to know if there is a way for a second node, connected to a set of 
shared disks, to keep its zpool.cache up to date _without_ actually importing 
the ZFS pool.

As I understand it, keeping the zpool.cache up to date on the second node 
would provide additional protection should the slog fail at the same time as 
my primary head (it should also improve import times, if what I've read is 
true).

I understand that importing the disks on the second node will update its 
cache file, but by that time it may be too late. I'd like to update the cache 
file _before_ then. I see no reason why the second node couldn't scan the 
disks being used by the first node and update its own zpool.cache accordingly.
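One workaround I've considered is simply copying the active head's 
zpool.cache over to the standby on a schedule, rather than having the standby 
scan the disks itself. A minimal sketch of that idea, with the function name, 
host name, and destination path all being my own assumptions (keeping the 
copy somewhere other than the default /etc/zfs/zpool.cache on the standby, so 
it isn't consulted at boot, seems prudent with shared disks):

```shell
#!/bin/sh
# Sketch: sync the active head's zpool.cache to the standby node.
#
# sync_zpool_cache SRC DST
#   SRC may be a local path or a remote scp-style host:path
#   DST is replaced atomically, so a partial copy never clobbers a good file
sync_zpool_cache() {
    src=$1
    dst=$2
    # stage the copy next to the destination so the final mv is atomic
    tmp=$(mktemp "${dst}.XXXXXX") || return 1
    case "$src" in
        *:*) scp -q "$src" "$tmp" ;;   # remote head
        *)   cp "$src" "$tmp"  ;;      # local path
    esac || { rm -f "$tmp"; return 1; }
    mv "$tmp" "$dst"
}

# e.g. from cron on the standby ("head1" is a hypothetical host name):
# sync_zpool_cache root@head1:/etc/zfs/zpool.cache /etc/zfs/zpool.cache.head1
```

During a failover the standby could then hand the copy to 
`zpool import -c /etc/zfs/zpool.cache.head1`, which should avoid the full 
device scan, though it obviously only captures the pool configuration as of 
the last sync.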
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
