On Tue, Feb 24, 2015 at 2:07 AM, Andrew Beekhof wrote:
>
> > I have a 3-node cluster where node1 and node2 are running
> corosync+pacemaker and node3 is running corosync only (for quorum).
> Corosync 2.3.3, pacemaker 1.1.10. Everything worked fine the first couple
> of days.
> >
> > Once upon a t
Hello.
I'm using corosync + pacemaker in UDPu mode in a geo-distributed cluster
(with large latency and some non-zero packet loss).
What are robust settings for timeouts in such situations? E.g. I suspect
that totem.token ought to be increased from 1000 to 5000 ms. Is it a good
value? Are there o
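A minimal corosync.conf sketch of the kind of totem tuning being asked about; the
exact numbers are assumptions for a high-latency WAN, not tested recommendations:

    totem {
        # defaults are tuned for a LAN; on a high-latency, lossy WAN the token
        # timeout is commonly raised, e.g. from the default 1000 ms to 5000 ms
        token: 5000
        # allow more retransmits before declaring the token lost
        token_retransmits_before_loss_const: 10
        # consensus defaults to 1.2 * token when left unset
    }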
Could you please give a hint: how to use fencing when the nodes are all in
different geo-distributed datacenters? How do people do that? If there is a
network disconnection between the datacenters, we have no way to send a stonith
signal anywhere.
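One possible fallback, sketched here purely as an assumption (it is not a real
substitute for stonith): rely on quorum so that an isolated site stops its own
resources when the inter-datacenter link goes down.

    # crmsh sketch; this weakens safety, since no node is ever actually fenced
    crm configure property no-quorum-policy=stop
    crm configure property stonith-enabled=false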
On Wednesday, February 4, 2015, And
Hello.
I have a 3-node cluster where node1 and node2 are running
corosync+pacemaker and node3 is running corosync only (for quorum).
Corosync 2.3.3, pacemaker 1.1.10. Everything worked fine the first couple
of days.
Once upon a time I discovered the following situation: node2 thinks that
both nod
could be corosync_votequorum (but not empty).
It would help novices install and launch corosync quickly.
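For reference, the quorum section in question looks like this in corosync.conf;
the provider line is the part that must not be left empty:

    quorum {
        provider: corosync_votequorum
        # expected_votes is derived from the nodelist when one is present
    }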
On Fri, Jan 16, 2015 at 7:31 PM, Jan Friesse wrote:
> Dmitry Koterov napsal(a):
>
>>
>>> such messages (for now). But, anyway, DNS names in ringX_addr seem not
>>> helpful in corosync.
> >>
> >> that's weird. Because as long as DNS is resolved, corosync works only
> >> with IP. This means the code path is exactly the same with IP or with DNS. Do
> >> you have logs from corosync?
> >>
> >>
> >>> error message would be very helpful in such a case.
> >>>
> >>
> >> This sounds weird. Are you sure that the DNS names really map to the correct
> >> IP addresses? In the logs there should be something like "adding new UDPU member
> >> {I
Sorry!
Pacemaker 1.1.10
Corosync 2.3.3
BTW I removed quorum.two_node:1 from corosync.conf, and it helped! Now the
isolated node stops its services in the 3-node cluster. Was it the right
solution?
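A sketch of the relevant corosync.conf fragment; with three voting nodes the
two_node special case has to go away:

    quorum {
        provider: corosync_votequorum
        # two_node: 1   <- only valid for exactly two nodes; with a third
        #                  (quorum-only) node it must be removed so an isolated
        #                  node really loses quorum and stops its services
    }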
On Wednesday, January 14, 2015, Andrew Beekhof wrote:
>
> > On 14 Jan 2015, at 12:06 am, Dmitr
> > Then I see that, although node2 clearly knows it's isolated (it doesn't
> see the other 2 nodes and does not have quorum)
>
> we don't know that - there are several algorithms for calculating quorum
> and the information isn't included in your output.
> are you using cman, or corosync underneath pa
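For what it's worth, assuming corosync 2.x with votequorum rather than cman, the
quorum state and its flags can be checked directly:

    corosync-quorumtool -s   # expected/total votes, Quorate: Yes/No, flags (e.g. 2Node)
    crm_mon -1               # pacemaker's view: "partition with quorum" or not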
>
> 1. install the resource-related packages on node3 even though you never
> want
> them to run there. This will allow the resource-agents to verify the
> resource
> is in fact inactive.
Thanks, your advice helped: I installed all the services on node3 as well
(including DRBD, but without it con
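For example, on Ubuntu 14.04 something like the following (package names are
assumptions) puts the agents and binaries on node3 without configuring or
starting the services there:

    apt-get install drbd8-utils postgresql nginx
    # the services stay unconfigured on node3; pacemaker only needs the
    # resource agents and binaries present to verify the resources are inactive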
Hello.
I have a 3-node cluster managed by corosync+pacemaker+crm. Node1 and Node2
are a DRBD master-slave pair, and they also have a number of other services
installed (postgresql, nginx, ...). Node3 is just a corosync node (for quorum);
no DRBD/postgresql/... is installed on it, only corosync+pacemaker.
But
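A hypothetical crmsh layout for such a setup (the resource names and the "vlv"
DRBD resource are assumptions), keeping the master/slave resource off the
quorum-only node3:

    crm configure primitive p_drbd ocf:linbit:drbd \
        params drbd_resource=vlv \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
    crm configure ms ms_drbd p_drbd \
        meta master-max=1 clone-max=2 notify=true
    crm configure location l_no_drbd_on_node3 ms_drbd -inf: node3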
d in "adjust" mode, but NOT when
"adjust-with-progress" is active. So if one uses "adjust-with-progress",
drbdadm silently skips failed steps and continues with the next ones, while
"adjust" fails on the first failed step.
On Sat, Jan 3, 2015 at 10:27 PM, Vl
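A quick way to see the difference by hand, assuming the DRBD resource is the
"vlv" one mentioned in the original report:

    # "adjust" stops at the first failed step and returns non-zero:
    drbdadm -c /etc/drbd.conf adjust vlv; echo "adjust rc=$?"
    # "adjust-with-progress" keeps going past failed steps:
    drbdadm -c /etc/drbd.conf adjust-with-progress vlv; echo "adjust-with-progress rc=$?"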
Hello.
Ubuntu 14.04, corosync 2.3.3, pacemaker 1.1.10. The cluster consists of 2
nodes (node1 and node2), when I run "crm node standby node2" and then, in a
minute, "crm node online node2", DRBD secondary on node2 does not start.
Logs say that "drbdadm -c /etc/drbd.conf check-resize vlv" fails wit
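The sequence being described, spelled out for reference ("vlv" is the resource
name from that log message):

    crm node standby node2
    # wait about a minute, then:
    crm node online node2
    # on node2, check what the failing step actually reports:
    drbdadm -c /etc/drbd.conf check-resize vlv; echo "rc=$?"
    cat /proc/drbd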
e very helpful in such a case.
>>
>
> This sounds weird. Are you sure that the DNS names really map to the correct
> IP addresses? In the logs there should be something like "adding new UDPU member
> {IP_ADDRESS}".
>
> Regards,
> Honza
>
>
>> On Tuesday, Dece
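A nodelist fragment illustrating that point (the hostname is an assumption):

    nodelist {
        node {
            # resolved to an IP address once at start-up; corosync then logs
            # "adding new UDPU member {IP_ADDRESS}" and works only with the IP
            ring0_addr: node1.example.com
            nodeid: 1
        }
    }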
, Daniel Dehennin
wrote:
> Dmitry Koterov writes:
>
> > Oh, it seems I've found the solution! At least two mistakes were in my
> > corosync.conf (BTW, the logs did not report any errors, so my conclusion is
> > based on my experiments only).
> >
> > 1. node
On Tue, Dec 30, 2014 at 12:34 PM, Dmitry Koterov
wrote:
> On Mon, Dec 29, 2014 at 1:50 PM, Dejan Muhamedagic
>> wrote:
>> >> On Mon, Dec 29, 2014 at 06:11:49AM +0300, Dmitry Koterov wrote:
>> >> Hello.
>> >>
>> >> I have a geogr
>
> On Mon, Dec 29, 2014 at 1:50 PM, Dejan Muhamedagic
> wrote:
> >> On Mon, Dec 29, 2014 at 06:11:49AM +0300, Dmitry Koterov wrote:
> >> Hello.
> >>
> >> I have a geographically distributed cluster, all machines have public IP
> >> addr
Hello.
I have a geographically distributed cluster, all machines have public IP
addresses. No virtual IP subnet exists, so no multicast is available.
I thought that UDPu transport can work in such an environment, can't it?
To test everything in advance, I've set up a corosync+pacemaker on Ubuntu
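A minimal corosync.conf sketch for UDPu over public addresses, as described above
(the addresses and node ids are placeholders):

    totem {
        version: 2
        transport: udpu          # unicast UDP, no multicast needed
    }

    nodelist {
        node {
            ring0_addr: 203.0.113.11
            nodeid: 1
        }
        node {
            ring0_addr: 203.0.113.12
            nodeid: 2
        }
        node {
            ring0_addr: 203.0.113.13
            nodeid: 3
        }
    }

    quorum {
        provider: corosync_votequorum
    }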