On 15 Aug 2014, at 5:49 am, Steve Feehan wrote:
> On Thu, Aug 14, 2014 at 12:38:00PM +1000, Andrew Beekhof wrote:
>>
>> On 14 Aug 2014, at 12:33 am, Steve Feehan wrote:
>>
>
>> Is it a problem that several seconds could go by between the node going
>> offline and the notification arriving?
On 15 Aug 2014, at 4:02 am, Andrei Borzenkov wrote:
> On Thu, 14 Aug 2014 12:45:27 +1000
> Andrew Beekhof wrote:
>
>>>
>>> It statically assigns priorities to cluster nodes. I need to
>>> dynamically assign higher priority (lower delay) to a node that is
>>> currently running application to en
On Thu, Aug 14, 2014 at 12:38:00PM +1000, Andrew Beekhof wrote:
>
> On 14 Aug 2014, at 12:33 am, Steve Feehan wrote:
>
> Is it a problem that several seconds could go by between the node going
> offline and the notification arriving?
> I would usually expect the answer to be yes.
When a node
On Thu, 14 Aug 2014 12:45:27 +1000
Andrew Beekhof wrote:
> >
> > It statically assigns priorities to cluster nodes. I need to
> > dynamically assign higher priority (lower delay) to a node that is
> > currently running application to ensure that application survives. It
> > was relatively easy in
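The "higher priority means lower fencing delay" idea being discussed can be sketched with the stonith device parameters Pacemaker already has. The following is illustrative only: `fence-sip1` is a hypothetical stonith resource name, the `fence-delay` attribute name is an assumption, and `pcmk_delay_max` requires a Pacemaker version that supports it — verify against your own configuration before using any of it.

```shell
# Static approach: a random delay before fencing reduces the chance of a
# mutual fence ("death match") in a two-node cluster.
pcs stonith update fence-sip1 pcmk_delay_max=10s

# Sketch of the dynamic variant: the node currently running the application
# publishes a node attribute that a custom fence wrapper could read to
# shorten its own delay (attribute name and wiring are assumptions).
crm_attribute --node "$(crm_node -n)" --name fence-delay --update 0
```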
ncomplete=10, Source=/var/lib/pacemaker/pengine/pe-warn-7.bz2): Stopped
Jul 03 14:10:51 [2701] sip2 crmd: notice: too_many_st_failures: No devices found in cluster to fence sip1, giving up
Jul 03 14:10:54 [2697] sip2 stonith-ng: info: stonith_command: Processed st_query reply
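Log lines like the ones above are easier to triage once pulled out of the noise. A minimal grep sketch, using a sample file built from the exact lines quoted above (the real log location varies by setup — often /var/log/messages or /var/log/cluster/corosync.log on clusters of this era, which is an assumption here):

```shell
# Build a small sample matching the excerpt quoted above.
cat > /tmp/pcmk-sample.log <<'EOF'
Jul 03 14:10:51 [2701] sip2 crmd: notice: too_many_st_failures: No devices found in cluster to fence sip1, giving up
Jul 03 14:10:54 [2697] sip2 stonith-ng: info: stonith_command: Processed st_query reply
EOF

# Keep only lines that point at fencing problems; routine st_query
# traffic like the second line is filtered out.
grep -E 'too_many_st_failures|stonith-ng.*error' /tmp/pcmk-sample.log
```

"No devices found in cluster to fence sip1" is the key line: stonith-ng gave up because no registered device claimed it could fence that node.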
emmanuel,
thanks. But how can I find out why fencing stopped working?
br
miha
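One way to approach that question is a short checklist, working from configuration down to a live test. Command spellings follow the pcs(8) and stonith_admin(8) of the cman/pacemaker 1.1 stack shown later in the thread — verify them against your installed versions, since options differ between releases:

```shell
# Is stonith enabled at all?
pcs property | grep stonith-enabled

# Which stonith resources are configured, and are any of them Started?
pcs stonith show

# Which devices has stonith-ng actually registered?
stonith_admin --list-registered

# Which devices claim they can fence sip1?
stonith_admin --list sip1

# Finally, a live test -- this really reboots the node:
stonith_admin --reboot sip1
```

If the configured device shows up in `pcs stonith show` but not in `--list-registered`, the device resource itself is failing to start, and its failure messages in the logs are the place to look next.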
On 8/14/2014 2:35 PM, emmanuel segura wrote:
Node sip2: UNCLEAN (offline) is unclean because the cluster fencing
failed to complete the operation
2014-08-14 14:13 GMT+02:00 Miha :
hi.
another thing.
On node 1, pcs is running:
hi.
another thing.
On node 1, pcs is running:
[root@sip1 ~]# pcs status
Cluster name: sipproxy
Last updated: Thu Aug 14 14:13:37 2014
Last change: Sat Feb 1 20:10:48 2014 via crm_attribute on sip1
Stack: cman
Current DC: sip1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configu
Hi emmanuel,
I think so; what is the best way to check?
Sorry for my noob question, I configured this 6 months ago and
everything was working fine till now. Now I need to find out what really
happened before I do something stupid.
thanks
On 8/14/2014 1:58 PM, emmanuel segura wrote:
are you sure your cluster fencing is working?
2014-08-14 13:40 GMT+02:00 Miha :
Hi,
I noticed today that I am having some problem with the cluster. I noticed
the master server is offline but the virtual IP is still assigned to it and
all services are running properly (for production).
If I do this I am getting these notifications:
[root@sip2 cluster]# pcs status
Error: cluster i
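When `pcs status` errors out like this instead of printing cluster state, it usually means pcs cannot talk to the cluster stack on that node at all. A few checks, assuming the cman-based stack shown later in the thread (service names are that stack's; a corosync 2 setup would differ):

```shell
# Is the cluster stack itself up on this node?
service cman status
service pacemaker status

# Membership and quorum as cman sees them.
cman_tool status
cman_tool nodes

# One-shot cluster status straight from pacemaker, bypassing pcs.
crm_mon -1
```

If `crm_mon -1` works while `pcs status` does not, the cluster is fine and the problem is in pcs/pcsd itself.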
14.08.2014 10:35, Andrew Beekhof wrote:
...
>>> The load from the crmd is mostly from talking to the lrmd, which is
>>> dependent on resource placement rather than being (or not being) the DC.
>>
>> I've seen the different picture with 1024 unique clone instances. crmd's
>> CPU load on DC is muc
On 14 Aug 2014, at 2:58 pm, Alex Samad - Yieldbroker wrote:
> Hi
>
> pcs status
> Online: [ alcdmz1 gsdmz1 ]
>
> Full list of resources:
>
> dnsip-a (ocf::yb:namedVIP): Started alcdmz1
> dnsip-b (ocf::yb:namedVIP): Started gsdmz1
> squidip-a (ocf::yb:squidVIP):
On 14 Aug 2014, at 3:28 pm, Vladislav Bogdanov wrote:
> 14.08.2014 05:24, Andrew Beekhof wrote:
>>
>> On 14 Aug 2014, at 12:05 am, Lars Ellenberg wrote:
>>
>>> On Wed, Aug 13, 2014 at 10:33:55AM +1000, Andrew Beekhof wrote:
On 13 Aug 2014, at 2:02 am, Cédric Dufour - Idiap Research I