On Jan 26, 2007, at 13:52, Marion Hakanson wrote:

[EMAIL PROTECTED] said:
. . .
realize that the pool is now in use by the other host. That leads to two
systems using the same zpool which is not nice.

Is there any solution to this problem, or do I have to get Sun Cluster 3.2 if
I want to serve the same zpools from many hosts? We may try Sun Cluster
anyway, but I'd like to know if this can be solved without it.

Perhaps I'm stating the obvious, but here goes:

You could use SAN zoning of the affected LUNs to keep multiple hosts
from seeing the zpool. When failover time comes, change the zoning
to make the LUNs visible to the new host, then import the pool. When the
old host reboots, it won't find any zpool.  Better safe than sorry....
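The host-side half of that failover could be sketched like this (the pool name "tank" is made up for illustration; the zoning change itself happens on the switch, not on the host):

```shell
# After re-zoning the LUNs to this host, rebuild the device tree
# so the newly visible disks show up.
devfsadm -c disk

# List pools visible on the newly zoned LUNs without importing them.
zpool import

# Import the pool; -f may be needed if the old host never exported it
# (which is exactly the crash/failover case being discussed).
zpool import -f tank
```

Note that "zpool import -f" is also precisely the sharp edge of the original problem: it will happily import a pool that another live host still has open, so the zoning change has to come first.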

Actually, if you use the Sun Leadville stack you can dynamically take ports
offline with cfgadm, and if you don't want everything auto-configured at boot
you may also want to flip the "manual_configuration_only" bit in fp.conf. You
can unconfigure a device by its WWPN with something like:
# cfgadm -c unconfigure c3::510000f010fd92a1
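For reference, the fp.conf knob mentioned above looks roughly like this (a sketch; check the commented template shipped with your release before editing):

```
# /kernel/drv/fp.conf
# When set to 1, fabric devices are not automatically enumerated at
# boot; they must be configured manually with cfgadm(1M).
manual_configuration_only=1;
```

That way a rebooting standby host comes up seeing no fabric devices until you deliberately configure them.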

Also, you could simply mask the LUNs in fp.conf to prevent them from being
seen at all. We put this in last year because of an issue with replicated VxVM
volumes that could corrupt a disk group, since the replicas carried the same
signature. Take a look at the "pwwn-lun-blacklist" entry towards the bottom
of the configuration file for an example, or see the fp(7d) man page.
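A blacklist entry takes a target port WWN followed by the LUN numbers to hide. A sketch (the WWN and LUN numbers here are invented; the real syntax is documented in the comments at the bottom of fp.conf and in fp(7d)):

```
# /kernel/drv/fp.conf
# Hide LUNs 1 and 2 behind target port 510000f010fd92a1 so this
# host never sees the replicated volumes.
pwwn-lun-blacklist=
"510000f010fd92a1,1,2";
```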

If you're careful you could maintain two fp.conf files and flip back and forth
between different configurations of visible FC storage. This works nicely for
running a standby server as a QA box during the day and flipping it to a
production server on reboot, particularly if you've got an alternate boot
environment (ABE) set up.  (SAN boot if you're brave.)
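Flipping between the two configurations can be as simple as swapping the file before the reboot (the file names here are hypothetical, and with an ABE you'd activate the other boot environment instead):

```shell
# Keep a copy of fp.conf per role, and activate the production
# LUN view for the next boot.
cp /kernel/drv/fp.conf.prod /kernel/drv/fp.conf

# Reboot into the new configuration.
init 6
```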

---
.je
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
