Hello everyone,

To solve this problem I think it is better to use this parameter:

=======================================================
*OCF_RESKEY_nfs_shared_infodir* = Directory to store NFS server related
information. *The nfsserver resource agent will save NFS related
information in this specific directory. This directory must be able to
fail over before nfsserver itself.*
========================================================
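As a sketch of what that could look like in the crm shell — the resource name p_nfsserver and the directory /mnt/data/nfsinfo are hypothetical placeholders; the directory must live on the shared DRBD-backed filesystem so it fails over together with the service:

```
primitive p_nfsserver ocf:heartbeat:nfsserver \
        params nfs_shared_infodir="/mnt/data/nfsinfo" \
        op monitor interval="30s"
order o_fs_before_nfsserver inf: p_fs_data:start p_nfsserver:start
```

The order constraint assumes a Filesystem resource like the p_fs_data shown further down in this thread, so the info directory is mounted before nfsserver starts.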
I don't know why, but I don't like using a link to make this work.

2012/1/16 Andrew Martin <amar...@xes-inc.com>

> Hi Dennis,
>
> Have you also added /var/lib/nfs to the shared DRBD resource? This is an
> important step to ensure that data about currently open files and mount
> information is transferred to the other node during failover. See the end
> of Step 4:
>
> http://www.howtoforge.com/highly-available-nfs-server-using-drbd-and-heartbeat-on-debian-5.0-lenny
>
> Thanks,
>
> Andrew
>
> ------------------------------
> *From: *"emmanuel segura" <emi2f...@gmail.com>
> *To: *"The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
> *Sent: *Monday, January 16, 2012 6:06:54 AM
> *Subject: *Re: [Pacemaker] how does the exportfs resource agent work?
>
> You should check how you mount the NFS cluster share from your client.
>
> For example:
>
> mount -o hard,rw,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -t nfs \
>     your_virtual_ip:/your_cluster_fs_share /mountpoint
>
> man nfs
>
> =====================================================
> timeo=n    The value in tenths of a second before sending the
>            first retransmission after an RPC timeout. The
>            default value is 7 tenths of a second. After the
>            first timeout, the timeout is doubled after each
>            successive timeout until a maximum timeout of 60
>            seconds is reached or enough retransmissions have
>            occurred to cause a major timeout. Then, if the
>            filesystem is hard mounted, each new timeout cascade
>            restarts at twice the initial value of the
>            previous cascade, again doubling at each retransmission.
>            The maximum timeout is always 60 seconds. Better
>            overall performance may be achieved by increasing the
>            timeout when mounting on a busy network, to a slow
>            server, or through several routers or gateways.
>
> retrans=n  The number of minor timeouts and retransmissions
>            that must occur before a major timeout occurs. The
>            default is 3 timeouts.
> When a major timeout occurs, the file operation is either aborted or a
> "server not responding" message is printed on the console.
> =======================================================
>
> 2012/1/16 Dennis Jacobfeuerborn <denni...@conversis.de>
>
>> What am I supposed to look for?
>>
>> Regards,
>> Dennis
>>
>> On 01/16/2012 12:13 PM, emmanuel segura wrote:
>>
>>> I think man nfs can help you.
>>>
>>> Try looking at your NFS client options.
>>>
>>> 2012/1/15 Dennis Jacobfeuerborn <denni...@conversis.de>
>>>
>>> Hi,
>>> I'm trying to build an HA NFS system based on DRBD and, apart from the
>>> NFS export, everything is working fine. The problem is that when I
>>> force a failover things seem to work fine, yet when I fail back to the
>>> original system the clients freeze for a very long time.
>>>
>>> /mnt/tmp is the mountpoint on the client and I'm using the following
>>> to test access:
>>>
>>> for i in `seq 1 2000`; do echo $i; ls /mnt/tmp; sleep 1; done
>>>
>>> On a failover the output looks like this:
>>>
>>> ...
>>> 47
>>> testfile testfile2
>>> 48
>>> testfile testfile2
>>> 49
>>> testfile testfile2
>>> 50
>>> testfile testfile2
>>> 51
>>> testfile testfile2
>>> 52
>>> ls: cannot open directory /mnt/tmp: Permission denied
>>> 53
>>> ls: cannot open directory /mnt/tmp: Permission denied
>>> 54
>>> <<< freeze of several minutes
>>> testfile testfile2
>>> 55
>>> testfile testfile2
>>> 56
>>> testfile testfile2
>>> ...
>>>
>>> The first question I have is how can I prevent the "Permission denied"
>>> errors? If these occur on e.g. a mountpoint for MySQL, this will no
>>> doubt lead to problems with the database, and that means the storage
>>> isn't really redundant.
>>>
>>> The second question is how do I reduce the failover time? I tried
>>> adding timeo=30 to the client mount options but that doesn't seem to
>>> help.
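A note on those timeo/retrans numbers: with timeo=30 (3.0 s) and the default retrans=3, the minor-timeout cascade alone already adds up to about 21 seconds before a major timeout, so lowering timeo on its own may not help much. A quick back-of-the-envelope check, assuming the doubling rule described in the nfs man page excerpt above:

```shell
#!/bin/sh
# Rough estimate of the time until a major timeout on a hard NFS mount,
# per the doubling rule in nfs(5): timeo, 2*timeo, 4*timeo, ...
timeo=30     # tenths of a second (timeo=30 -> 3.0 s), as tried above
retrans=3    # default number of minor timeouts before a major timeout
total=0
t=$timeo
i=0
while [ "$i" -lt "$retrans" ]; do
    total=$((total + t))   # add this retransmission interval
    t=$((t * 2))           # next interval is doubled
    i=$((i + 1))
done
echo "$((total / 10)).$((total % 10)) seconds until a major timeout"
```

With these values it prints "21.0 seconds until a major timeout" (30 + 60 + 120 tenths). This is only an approximation of the client's behaviour, not an exact model of the kernel's retry logic.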
>>> This is what my cib looks like:
>>>
>>> node storage1.dev
>>> node storage2.dev
>>> primitive p_drbd_nfs ocf:linbit:drbd \
>>>         params drbd_resource="nfs" \
>>>         op monitor interval="15" role="Master" \
>>>         op monitor interval="30" role="Slave"
>>> primitive p_exportfs_data ocf:heartbeat:exportfs \
>>>         params fsid="1" directory="/mnt/data/export" \
>>>         options="rw,no_root_squash" clientspec="*" \
>>>         op monitor interval="30s"
>>> primitive p_fs_data ocf:heartbeat:Filesystem \
>>>         params device="/dev/drbd/by-res/nfs" directory="/mnt/data" \
>>>         fstype="ext3" \
>>>         op monitor interval="10s"
>>> primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
>>>         params ip="192.168.2.190" cidr_netmask="24" \
>>>         op monitor interval="30s"
>>> group g_nfs p_fs_data p_exportfs_data p_ip_nfs
>>> ms ms_drbd_nfs p_drbd_nfs \
>>>         meta master-max="1" master-node-max="1" clone-max="2" \
>>>         clone-node-max="1" notify="true"
>>> colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
>>> order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start
>>> property $id="cib-bootstrap-options" \
>>>         dc-version="1.0.12-unknown" \
>>>         cluster-infrastructure="openais" \
>>>         expected-quorum-votes="2" \
>>>         stonith-enabled="false" \
>>>         no-quorum-policy="ignore"
>>> rsc_defaults $id="rsc-options" \
>>>         resource-stickiness="200"
>>>
>>> Regards,
>>> Dennis
>>>
>>> _______________________________________________
>>> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>>
>>> Project Home: http://www.clusterlabs.org
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org

--
esta es mi vida e me la vivo hasta que dios quiera