On Feb 6, 2010, at 12:25 AM, Kirill Stasenkov wrote:

> Thanks :) It's really obvious. But I'm trying to understand Pacemaker's 
> inner workings without reading the code; I'm not a developer, but I can. I 
> looked through all the materials on clusterlabs (Configuration Explained, 
> Fencing, etc.) but couldn't find any clue as to why, on start-up in a 
> cluster with more than 2-3 nodes (no-quorum-policy=stop and no STONITH), a 
> fourth node can sometimes suddenly shut down its connection to the crm?

It doesn't.
The crm only shuts down when someone tells it to.

> I think it's all about the DC. When a new node connects, it has to become 
> the DC to push the cib --sync-all globally, am I right? But it loses its 
> quorum and starts producing many errors, as can be seen in the log I've 
> attached. So the current DC sends a shutdown to the new node to preserve 
> cib integrity.

No. That's not happening.
There would be _very_ noisy ERROR messages if that were the case.
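As an aside, since you already mention no-quorum-policy=stop: that policy only stops *resources* in a partition without quorum, it never shuts corosync itself down. A minimal sketch of setting and checking it with the crm shell (assuming crmsh is installed; property names are as documented in Configuration Explained):

```shell
# "stop" halts resources on nodes that lose quorum, but does NOT
# terminate corosync or the crm processes on those nodes.
crm configure property no-quorum-policy=stop

# Verify the current value in the cluster configuration.
crm configure show | grep no-quorum-policy
```

If corosync exited with "shutdown by sysadmin", something external asked it to stop, so the place to look is the system and corosync logs around that timestamp, not the policy.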

> By this time the cib is half-updated with the new node. 
> There is no one who could be affecting corosync's behaviour.  
> 
> Can you advise what to read about Pacemaker's inner mechanisms (PE, TE, 
> CIB) and how they can be tuned? And please help me with advice on my 
> question. Thank you in advance; I appreciate any help :)
> 
> On 05.02.2010, at 16:09, Andrew Beekhof wrote:
> 
>> On Fri, Feb 5, 2010 at 11:35 AM, Kirill Stasenkov
>> <kirill.stasen...@gmail.com> wrote:
>>> Hi, I'm new to HA and I'm testing a product where Pacemaker with 
>>> corosync is available.
>>> Installing the next node gives this output in the log. The node 
>>> registered in the cluster, but corosync shut down unexpectedly with 
>>> "shutdown by sysadmin".
>>> The log is attached. Can you advise me anything?
>> 
>> Looks like someone told corosync to shut down...
> 

-- Andrew

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
