Wouldn't the following ordering be more likely what you want:

group res_share0 lvm_share0 fs_web ip_share0 export_share0

instead of

group res_share0 lvm_share0 fs_web export_share0 ip_share0


Then, as far as I can see, what you want is as simple as:

primitive nfs-client ocf:heartbeat:Filesystem \
        params device="10.0.0.210:/share_fs" directory="/mount_fs" fstype="nfs" \
        op start interval="0s" timeout="60s" \
        op stop interval="0s" timeout="60s" \
        op monitor interval="40s" timeout="40s" \
        meta failure-timeout="62s"

clone cl-nfs-client nfs-client \
        meta clone-max="1" clone-node-max="1"
or, if you want the server to also mount its own export:

clone cl-nfs-client nfs-client \
        meta clone-max="2" clone-node-max="1"

and put it into the group:

group res_share0 lvm_share0 fs_web ip_share0 export_share0 nfs-client
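In the clone-max="1" case you will probably also want to keep the single
client-mount instance off the active NFS server, which already has /web
mounted locally. A colocation along these lines should do it (my addition,
reusing the resource names above):

colocation c-nfs-client-not-master -inf: cl-nfs-client ms_share0:Master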

and finally :

order o-stop-nfs-client inf: ms_share0:promote cl-nfs-client:stop
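Putting it all together, the addition to your config would look roughly like
this (a sketch using the clone-max="2" variant; the start-order constraint is
my addition, on the assumption that the client mount should only start once
the exporting group is up):

primitive nfs-client ocf:heartbeat:Filesystem \
        params device="10.0.0.210:/share_fs" directory="/mount_fs" fstype="nfs" \
        op start interval="0s" timeout="60s" \
        op stop interval="0s" timeout="60s" \
        op monitor interval="40s" timeout="40s" \
        meta failure-timeout="62s"
clone cl-nfs-client nfs-client \
        meta clone-max="2" clone-node-max="1"
order o-start-nfs-client inf: res_share0:start cl-nfs-client:start
order o-stop-nfs-client inf: ms_share0:promote cl-nfs-client:stop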



2012/9/21 Timur I. Bakeyev <[email protected]>

> Hi all!
>
> I've configured HA NFS storage (almost) according to the LinBIT HOWTO and,
> in general it works OK for my client nodes, which mount NFS share /web as
> /var/www and use it as document root for NGINX.
>
> To get maximum performance out of the configuration, two NFS exports (/web
> and /img) are configured asymmetrically, in an
> Active/Passive + Passive/Active manner, so each of the two nodes is the
> active server for one of the shares.
>
> At this point I got stuck, as I want to use both of the NFS servers as
> additional NGINX server nodes, so each of them has to mount the exported
> /web and, once it is mounted, start NGINX.
>
> So, taking for simplicity one Active/Passive pair: on both nodes I need to
> mount /web, but on the active node I first need to start DRBD and nfsserver,
> while on the passive node I just need to wait until the NFS share becomes
> available. After that point both nodes can start NGINX.
>
> And, honestly, I'm confused about how to configure such a mount resource.
> It seems it has to be a master/slave resource, but I'm not good enough with
> Pacemaker to express that in configuration terms :) So any help would be
> appreciated!
>
> Here is my config:
>
> node ad24
> node ad35
> primitive export_share0 ocf:heartbeat:exportfs \
>         params directory="/web" clientspec="10.0.0.0/24" \
>                 options="rw,async,no_subtree_check,no_root_squash" fsid="10" \
>                 rmtab_backup=".nfs/rmtab" unlock_on_stop="true" \
>         op monitor interval="10s" timeout="30s" \
>         op start interval="0" timeout="40s" \
>         op stop interval="0" timeout="40s"
> primitive fs_web ocf:heartbeat:Filesystem \
>         params device="/dev/share0/web" directory="/web" fstype="xfs" \
>         op monitor interval="20s" timeout="40s" \
>         op start interval="0" timeout="60s" \
>         op stop interval="0" timeout="60s" \
>         meta is-managed="true"
> primitive ip_share0 ocf:heartbeat:IPaddr2 \
>         params ip="10.0.0.210" cidr_netmask="24" nic="bond0" \
>         op monitor interval="5s" timeout="20s"
> primitive lvm_share0 ocf:heartbeat:LVM \
>         params volgrpname="share0" \
>         op start interval="0" timeout="30s" \
>         op stop interval="0" timeout="30s"
> primitive share0 ocf:linbit:drbd \
>         params drbd_resource="share0" \
>         op monitor interval="29s" role="Master" timeout="40s" \
>         op monitor interval="31s" role="Slave" timeout="40s" \
>         op start interval="0" timeout="240s" \
>         op stop interval="0" timeout="100s"
> group res_share0 lvm_share0 fs_web export_share0 ip_share0
> ms ms_share0 share0 \
>         meta master-max="1" master-node-max="1" clone-max="2" \
>                 clone-node-max="1" notify="true"
> location share0_on_ad24 ms_share0 \
>         rule $id="share0_on_ad24-rule" inf: #uname eq ad24
> colocation use_share0 inf: res_share0 ms_share0:Master
> order activate_share0 inf: ms_share0:promote res_share0:start
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         no-quorum-policy="ignore" \
>         stonith-enabled="false" \
>         last-lrm-refresh="1343967106"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="200"
>
> With best regards,
> Timur Bakeyev.
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>