I think there are at least two separate issues here.

The first is that ZFS doesn't support multiple hosts accessing the same pool. 
That's simply a matter of telling people. UFS doesn't support multiple hosts, 
but it doesn't have any special features to prevent administrators from 
*trying* it. They'll "just" corrupt their filesystem.
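
For example, nothing stops this sequence on a shared LUN (pool and
device names are hypothetical):

    # Host A creates and is actively using the pool.
    hostA# zpool create tank c2t0d0

    # Host B sees the same LUN. A plain import is refused because the
    # pool looks active, but -f overrides the check:
    hostB# zpool import -f tank

    # Both hosts now write independently to the same vdevs, and the
    # pool is corrupted.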

The second is that ZFS remembers pools (in its cache file) and automatically 
imports them at boot time. This is a bigger problem, because it means that if 
you create a pool on host A, shut down host A, import the pool on host B, and 
then boot host A, host A will re-import the pool from its cache while host B 
still has it active -- and with two hosts writing to it, your pool is 
destroyed.
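
Concretely, the failure sequence looks something like this (names again 
hypothetical):

    hostA# zpool create tank c2t0d0   # tank is recorded in A's cache file
    hostA# init 5                     # shut A down without exporting

    hostB# zpool import -f tank       # -f needed: A never exported it

    # Now power A back on. At boot, A replays its cache file and
    # imports tank again -- no hostid check, no -f required -- while
    # B still has it imported.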

The hostid solution that VxVM uses would catch this second problem, because 
when A came up after its reboot, it would find that -- even though it had 
created the pool -- it was not the last machine to access it, and could refuse 
to automatically import it. If the administrator really wanted it imported, 
they could force the issue. Relying on the administrator to know that they 
have to remove a file (the zpool cache, /etc/zfs/zpool.cache) before they let 
the machine come up out of single-user mode seems the wrong approach to me. 
("By default, we'll shoot you in the foot, but we'll give you a way to unload 
the gun if you're fast enough and if you remember.")

The hostid approach seems better to me than modifying the semantics of "force." 
I honestly don't think the problem is administrators who don't know what 
they're doing; I think the problem is that our defaults are wrong in the case 
of shared storage.
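
A sketch of what the proposed default might look like at boot time, in shell 
pseudocode -- cached_pools and pool_last_hostid are hypothetical helpers 
standing in for reading the cache file and the vdev labels:

    # Only auto-import a cached pool if we were the last host to own it.
    for pool in $(cached_pools); do
        last=$(pool_last_hostid "$pool")
        if [ "$last" = "$(hostid)" ]; then
            zpool import "$pool"
        else
            echo "$pool last accessed by hostid $last; skipping" >&2
        fi
    done

Forcing past the check would still be available for the cases where the 
administrator genuinely knows better.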
 
 