Maybe something interesting: the only surviving node was node7, and it was the 
CRM master.

I'm also seeing the CRM disabling the watchdog, and some "loop take too long" 
messages.



(some migration logs from node2 to node1 before the maintenance)
Sep  3 10:36:29 m6kvm7 pve-ha-crm[16196]: service 'vm:992': state changed from 
'migrate' to 'started'  (node = m6kvm1)
Sep  3 10:36:29 m6kvm7 pve-ha-crm[16196]: service 'vm:993': state changed from 
'migrate' to 'started'  (node = m6kvm1)
Sep  3 10:36:29 m6kvm7 pve-ha-crm[16196]: service 'vm:997': state changed from 
'migrate' to 'started'  (node = m6kvm1)
....

Sep  3 10:40:41 m6kvm7 pve-ha-crm[16196]: node 'm6kvm2': state changed from 
'online' => 'unknown'
Sep  3 10:40:50 m6kvm7 pve-ha-crm[16196]: got unexpected error - error during 
cfs-locked 'domain-ha' operation: no quorum!
Sep  3 10:40:51 m6kvm7 pve-ha-lrm[16140]: loop take too long (87 seconds)
Sep  3 10:40:51 m6kvm7 pve-ha-crm[16196]: loop take too long (92 seconds)
Sep  3 10:40:51 m6kvm7 pve-ha-crm[16196]: lost lock 'ha_manager_lock - cfs lock 
update failed - Permission denied
Sep  3 10:40:51 m6kvm7 pve-ha-lrm[16140]: lost lock 'ha_agent_m6kvm7_lock - cfs 
lock update failed - Permission denied
Sep  3 10:40:56 m6kvm7 pve-ha-lrm[16140]: status change active => 
lost_agent_lock
Sep  3 10:40:56 m6kvm7 pve-ha-crm[16196]: status change master => 
lost_manager_lock
Sep  3 10:40:56 m6kvm7 pve-ha-crm[16196]: watchdog closed (disabled)
Sep  3 10:40:56 m6kvm7 pve-ha-crm[16196]: status change lost_manager_lock => 
wait_for_quorum
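
For what it's worth, here is a minimal sketch of the generic Linux 
/dev/watchdog keep-alive pattern, just to illustrate the semantics behind 
those messages (the real pve-ha stack goes through watchdog-mux over a unix 
socket rather than writing to the device directly, so this is only an 
illustration, not the actual implementation): as long as the process keeps 
writing, the timer is re-armed; a "magic close" disarms it cleanly, which is 
presumably what "watchdog closed (disabled)" reflects; if the updates simply 
stop, the timer expires and the node resets.

/* illustration only: generic kernel watchdog API, not the pve-ha
 * watchdog-mux protocol */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0)
        return 1;

    for (int i = 0; i < 10; i++) {  /* keep-alive loop */
        write(fd, "\0", 1);         /* any write re-arms the timer */
        sleep(5);
    }

    write(fd, "V", 1);              /* "magic close": disarm cleanly */
    close(fd);                      /* without the 'V', the timer keeps
                                       running and the node gets fenced */
    return 0;
}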



Other nodes' timing
--------------------

10:39:16 -> node2 shutdown, leaves corosync

10:40:25 -> other nodes rebooted by watchdog
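
Doing the math: 10:40:25 - 10:39:16 = 69 seconds. If the watchdog timeout is 
the usual 60 seconds (an assumption on my side, I haven't checked the 
configured value here), the watchdog updates must have stopped around 
10:39:25, i.e. only a few seconds after node2 left corosync.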


----- Original Message -----
From: "aderumier" <aderum...@odiso.com>
To: "dietmar" <diet...@proxmox.com>
Cc: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, 
"pve-devel" <pve-de...@pve.proxmox.com>
Sent: Sunday, September 6, 2020 07:36:10
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

>>But the pve logs look ok, and there is no indication 
>>that we stopped updating the watchdog. So why did the 
>>watchdog trigger? Maybe an IPMI bug? 

Do you mean an IPMI bug on all 13 servers at the same time? 
(I also have 2 Supermicro servers in this cluster, but they use the same IPMI 
watchdog driver (ipmi_watchdog).) 



I had the same kind of bug once (when stopping a server) on another cluster, 
6 months ago. 
That was without HA and with a different version of corosync, and that time I 
really did see a quorum split in the corosync logs of the servers. 


I'll try to reproduce it with a virtual cluster of 14 nodes (I don't have 
enough hardware). 


Could it be a bug in the Proxmox HA code, where the watchdog is not reset by 
the LRM anymore? 

----- Original Message ----- 
From: "dietmar" <diet...@proxmox.com> 
To: "aderumier" <aderum...@odiso.com> 
Cc: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, 
"pve-devel" <pve-de...@pve.proxmox.com> 
Sent: Sunday, September 6, 2020 06:21:55 
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown 

> >>So you are using the IPMI hardware watchdog? 
> 
> yes, I'm using the Dell iDRAC IPMI card watchdog 

But the pve logs look ok, and there is no indication 
that we stopped updating the watchdog. So why did the 
watchdog trigger? Maybe an IPMI bug? 


