I fixed it.
The reason was in the exportfs resource monitor. On startup it checks whether
the directory is already exported:
# "grep -z" matches across newlines, which is necessary as
# exportfs output wraps lines for long export directory names
exportfs | grep -zqs "${OCF_RESKEY_directory}[[:space:]]*${OCF_RESKEY_clientspec}"
Also I've seen that resources are started in parallel:
Jun 22 16:18:49 msk-nfs-gw01 pengine: [1907]: notice: LogActions: Start ping:0#011(msk-nfs-gw01)
Jun 22 16:18:49 msk-nfs-gw01 pengine: [1907]: notice: LogActions: Start ping:1#011(msk-nfs-gw02)
Jun 22 16:18:49 msk-nfs-gw01 pengine: [1907]: notice:
I created a bug in Bugzilla:
http://developerbugs.linux-foundation.org/show_bug.cgi?id=2609
It seems everything is as I thought: the Filesystem resource returned success
and Pacemaker continued to start the resources after it, but it had not mounted
the filesystem yet, so exportfs failed on the export operation.
In the logs I ca
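For reference, the dependency the race violates is simply "export only after
the mount is in place"; schematically (a sketch with a made-up mount point and
export options, not the actual cluster configuration):

#!/bin/sh
# Sketch only - path and export options are hypothetical.
MNT=/srv/nfs
if mountpoint -q "$MNT"; then
    exportfs -o rw,fsid=1 "*:$MNT"   # safe: the OCFS2 filesystem is mounted
else
    echo "$MNT is not mounted yet; exporting now would fail or export an empty mountpoint" >&2
    exit 1
fi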
On Tue, Jun 21, 2011 at 08:30:39PM +0200, Pawel Warowny wrote:
> On Tue, 21 Jun 2011 16:23:07 +0200, Dejan Muhamedagic wrote:
>
> Hi
>
> Sorry to bother in this thread, but:
>
> > If you need to do so (there's actually start-delay, but it
> > should be deprecated),
>
> I use start-dela
On Tue, 21 Jun 2011 16:23:07 +0200, Dejan Muhamedagic wrote:
Hi
Sorry to bother in this thread, but:
> If you need to do so (there's actually start-delay, but it
> should be deprecated),
I use start-delay for starting KVM virtualized guests one after another.
If they all start at once and
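Roughly, the kind of setup I mean looks like this in crm shell syntax (a sketch
only; the domain names, config paths, timeouts and delays are made up, and
putting start-delay on each guest's start operation is an assumption about how
to stagger them):

# Hypothetical example - each later guest gets a larger start-delay so the
# guests come up one after another instead of all at once.
crm configure primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm1.xml" \
    op start timeout=120s \
    op monitor interval=30s
crm configure primitive vm2 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm2.xml" \
    op start timeout=120s start-delay=60s \
    op monitor interval=30s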
Hi Dejan,
21.06.2011 17:46, Dejan Muhamedagic wrote:
> Hi Vladislav,
>
> On Tue, Jun 21, 2011 at 05:38:21PM +0300, Vladislav Bogdanov wrote:
>> 21.06.2011 17:23, Dejan Muhamedagic wrote:
>>> On Tue, Jun 21, 2011 at 06:10:16PM +0400, Aleksander Malaev wrote:
How can I check this?
If I do
Hi,
21.06.2011 17:59, Aleksander Malaev wrote:
> I'm not sure that the Filesystem resource causes this behaviour. I'm doing
> some tests now and collecting logs.
> I think it may be related to the res-nfs group. Now I found that portmap
> is started by upstart before Pacemaker, and maybe it is the reason for
I'm not sure that the Filesystem resource causes this behaviour. I'm doing some
tests now and collecting logs.
I think it may be related to the res-nfs group. Now I found that portmap is
started by upstart before Pacemaker, and maybe it is the reason for the failure.
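One way to check and rule that out (assuming these are Ubuntu/upstart nodes;
the override mechanism is an assumption and only exists on newer upstart
releases):

# See whether upstart already brought portmap up before Pacemaker started.
status portmap

# If it is running outside cluster control, stop it for the test...
stop portmap

# ...and keep upstart from auto-starting it at boot. On recent upstart this is
# an override file; on older releases the "start on" stanza in
# /etc/init/portmap.conf has to be commented out instead.
echo manual >> /etc/init/portmap.override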
2011/6/21 Dejan Muhamedagic
> Hi Vladislav,
>
Hi Vladislav,
On Tue, Jun 21, 2011 at 05:38:21PM +0300, Vladislav Bogdanov wrote:
> 21.06.2011 17:23, Dejan Muhamedagic wrote:
> > On Tue, Jun 21, 2011 at 06:10:16PM +0400, Aleksander Malaev wrote:
> >> How can I check this?
> >> If I don't add this exportfs resource then the cluster becomes fully
21.06.2011 17:23, Dejan Muhamedagic wrote:
> On Tue, Jun 21, 2011 at 06:10:16PM +0400, Aleksander Malaev wrote:
>> How can I check this?
>> If I don't add this exportfs resource then the cluster becomes fully
>> operational - all mounts are accessible and fail-over between nodes is
>> working as i
Well, but I have the following order:
res-fs -> res-nfs (this is a group of nfs-kernel-server and portmap) ->
res-share
Is that wrong? Do I need to add another order constraint?
Maybe something is going wrong when the res-nfs resource is coming up?
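In crm shell terms the chain above would look roughly like this (an abbreviated
sketch, not the exact CIB; the scores and the group member order are guesses):

# Sketch only - portmap is listed first in the group so it starts before the
# NFS server; the two order constraints serialize mount -> NFS -> export.
crm configure group res-nfs portmap nfs-kernel-server
crm configure order o-fs-before-nfs inf: res-fs res-nfs
crm configure order o-nfs-before-share inf: res-nfs res-share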
On 21 June 2011 at 18:18, Serge Dubrouski wrote:
On Tue, Jun 21, 2011 at 06:10:16PM +0400, Aleksander Malaev wrote:
> How can I check this?
> If I don't add this exportfs resource then the cluster becomes fully
> operational - all mounts are accessible and fail-over between nodes is
> working as it should. Maybe I need to add some sort of delay
2011/6/21 Aleksander Malaev
> Sure, I'm using an order constraint.
> But it seems that it doesn't check the monitor of the previously started
> resource.
>
Seems like you don't have an order constraint that would tie clone-share to
clone-fs, making it start sharing only after mounting.
>
> 2011/6/21 Dejan
How can I check this?
If I don't add this exportfs resource then the cluster becomes fully
operational - all mounts are accessible and fail-over between nodes is
working as it should. Maybe I need to add some sort of delay between these
resources?
2011/6/21 Dejan Muhamedagic
> On Tue, Jun 21, 2
On Tue, Jun 21, 2011 at 05:56:40PM +0400, Aleksander Malaev wrote:
> Sure, I'm using an order constraint.
> But it seems that it doesn't check the monitor of the previously started resource.
It doesn't need to check the monitor. The previous resource, if
started, must be fully operational. If it's not, then th
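Schematically, that is what the OCF start action itself guarantees: it does not
return success until the service is really up. A generic sketch (not any
particular agent from this setup; the names are made up):

# "myapp" is a hypothetical example agent, not one of the resources above.
OCF_SUCCESS=0                              # normally provided by ocf-shellfuncs
myapp_start() {
    myapp_monitor && return $OCF_SUCCESS   # already running, nothing to do
    start_myapp_daemon                     # hypothetical helper that launches it
    while ! myapp_monitor; do
        sleep 1                            # block until the service is operational
    done
    return $OCF_SUCCESS
}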
Sure, I'm using an order constraint.
But it seems that it doesn't check the monitor of the previously started resource.
2011/6/21 Dejan Muhamedagic
> Hi,
>
> On Mon, Jun 20, 2011 at 11:40:04PM +0400, Александр Малаев wrote:
> > Hello,
> >
> > I have configured pacemaker+ocfs2 cluster with shared storage
Hi,
On Mon, Jun 20, 2011 at 11:40:04PM +0400, Александр Малаев wrote:
> Hello,
>
> I have configured a pacemaker+ocfs2 cluster with shared storage connected by
> FC.
> Now I need to set up an NFS export in Active/Active mode and I added all the
> needed resources and defined the start order.
> But then
Hello,
I have configured a pacemaker+ocfs2 cluster with shared storage connected by
FC.
Now I need to set up an NFS export in Active/Active mode and I added all the
needed resources and defined the start order.
But when a node starts after a reboot I get a race condition between the
Filesystem resource and exportfs.