----- Original Message -----
> Hi there,
>
> we are using exportfs for building datastores for VMware.
>
> After migrating an NFS resource to another node (after successful
> fencing, e.g.), VMware doesn't see that datastore until I manually fire
> _exportfs -f_ on the new cluster node.
>
> I tried to modify the resource agent itself like:
>
> 247 restore_rmtab
> 248
> 249 ocf_log info "File system exported"
> 250
> 251 sleep 5 #added
> 252
> 253 ocf_run exportfs -f || exit $OCF_ERR_GENERIC #added
> 254
> 255 ocf_log info "kernel table flushed" #added
> 256
> 257 return $OCF_SUCCESS
>
> but this didn't do the trick.
>
> Does anyone have an idea how to resolve that issue?
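A fixed sleep followed by a single flush is fragile: if the export isn't ready yet when the five seconds are up, the flush still fails once and the agent exits. One way to make the quoted workaround a bit more robust is to retry the flush in a loop. This is a hypothetical standalone sketch, not code from the actual exportfs resource agent; the `flush_exports` helper and the `EXPORTFS` override variable are invented here purely for illustration and testability:

```shell
#!/bin/sh
# Hypothetical helper: flush the kernel export table (exportfs -f) with
# retries instead of a fixed "sleep 5", so freshly migrated exports become
# visible to clients (e.g. VMware) as soon as possible.
# EXPORTFS defaults to the real binary but can be overridden for testing.
EXPORTFS="${EXPORTFS:-exportfs}"

flush_exports() {
    retries=${1:-5}   # how many attempts before giving up
    i=0
    while [ "$i" -lt "$retries" ]; do
        if "$EXPORTFS" -f; then
            echo "kernel export table flushed"
            return 0
        fi
        i=$((i + 1))
        sleep 1       # brief pause before retrying the flush
    done
    echo "failed to flush kernel export table" >&2
    return 1
}
```

Inside the resource agent itself the same idea would replace the `sleep 5` / single `ocf_run exportfs -f` pair, logging via `ocf_log` instead of `echo`.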
HA NFS is tricky and requires a very specific resource startup/shutdown order to work correctly. Here's some information about the use cases I test. At this point, the active/passive use case is well understood. If you are able to, I would recommend modeling deployments on the A/P use case guidelines.

HA NFS Active/Passive:
https://github.com/davidvossel/phd/blob/master/doc/presentations/nfs-ap-overview.pdf?raw=true
https://github.com/davidvossel/phd/blob/master/scenarios/nfs-active-passive.scenario

HA NFS Active/Active:
https://github.com/davidvossel/phd/blob/master/doc/presentations/nfs-aa-overview.pdf?raw=true
https://github.com/davidvossel/phd/blob/master/scenarios/nfs-active-active.scenario

-- Vossel

> Cheers,
> Hauke
> _______________________________________________
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org