On 23.12.2013 12:43, Nikita Staroverov wrote:
But it was causing trouble: it tried to promote one of the DRBDs and then
to start Xen. After that failed (because the second DRBD was still secondary),
it tried to stop both DRBDs and start again. Also, once it happened that Xen
still had the device open and was refusing to stop that VM.
So a mess, stonith, and so on.
On 21.12.2013 17:23, Никита Староверов wrote:
Hello. I use drbd 8.4, which has the possibility of defining several drbd
minors in one resource.
Properly defined order and colocation rules should also work with drbd 8.3.
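For a VM that needs two drbd devices on 8.3 (one ms resource per device), a
minimal crm sketch of such order and colocation rules could look like this;
all resource names here are hypothetical, not taken from the thread:

colocation col_vm_on_disk0 inf: vm_YOURVM ms_drbd_disk0:Master
colocation col_vm_on_disk1 inf: vm_YOURVM ms_drbd_disk1:Master
order ord_disk0_before_vm inf: ms_drbd_disk0:promote vm_YOURVM:start
order ord_disk1_before_vm inf: ms_drbd_disk1:promote vm_YOURVM:start

This keeps both masters on the VM's node and delays the VM start until both
promotions have succeeded, with no group involved.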
Hello,
Unfortunately I have 8.3.11. Although I updated the OCF script to have it
Hello,
I have another question on the same topic. What if some resource
(a virtual machine) requires two DRBDs?
I tried with a group and it caused an UNCLEAN node, which was then killed.
So what is the proper way to do that? By grouping (if yes, grouping what:
primitives or master-slave?) Or maybe just
Hello,
This part above is missing in my copy, so the patch was not applied. I will
do it manually by adding this section. Will that be enough? I mean adding:
# avoid too tight pacemaker driven "recovery" loop,
# if promotion keeps failing for some reason
if [[ $rc != 0 ]] && (( $SECONDS < 15 )) ; then
On 17.12.2013 14:57, Nikita Staroverov wrote:
# avoid too tight pacemaker driven "recovery" loop,
# if promotion keeps failing for some reason
if [[ $rc != 0 ]] && (( $SECONDS < 15 )) ; then
    delay=$(( 15 - SECONDS ))
    ocf_log warn "promotion failed; sleep $delay # to prevent tight recovery loop"
    sleep $delay
fi
On 17.12.2013 17:29, Michał Margula wrote:
On 17.12.2013 12:22, Nikita Staroverov wrote:
I forgot one important thing. You must use the latest version of drbd, due to
an ocf agent bug that can lead to a drbd promote failure with error code 17.
AFAIK, you need drbd 8.4.4 at least.
Not good, because I have 8.3.11, which is the version that ships in Debian.
No. Pacemaker with my configuration starts the ms drbd resource in
slave/slave configuration. Due to the ordering rule, pacemaker promotes drbd
before the VM starts.
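In crm syntax, such a rule pair might look like the following sketch
(ms_drbd_YOURVM and vm_YOURVM are assumed names, matching the drbd_YOURVM
primitive quoted later in the thread):

order ord_drbd_before_vm inf: ms_drbd_YOURVM:promote vm_YOURVM:start
colocation col_vm_on_master inf: vm_YOURVM ms_drbd_YOURVM:Master

The promote action in the order rule is what allows the ms resource to come
up slave/slave first and only then bring one side to Master.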
OK. Now it is clear. Thank you once again!
On 17.12.2013 10:08, Nikita Staroverov wrote:
So the difference here is that target-role is dropped? I had "Started" here.
Yes, there is a difference. The drbd primitive is completely managed by the
multistate resource. Target-Role in the drbd primitive causes trouble.
I think this is the reason for your problem with re
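For reference, a sketch of the multistate wrapper such a primitive would be
embedded in, with assumed names and the usual meta attributes for a two-node
drbd setup; note there is no target-role anywhere:

ms ms_drbd_YOURVM drbd_YOURVM \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"

notify="true" is required by the linbit drbd agent; which node becomes Master
is decided by the ms resource, not by a target-role on the primitive.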
On 17.12.2013 12:49, Michał Margula wrote:
Hello!
Let me go through it to see if I understand correctly:
On 17.12.2013 08:57, Nikita Staroverov wrote:
primitive drbd_YOURVM ocf:linbit:drbd \
        params drbd_resource="YOURVM" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
So what am I doing wrong? Now our cluster is offline (corosync is
stopped) because it was promoting, demoting, starting, and stopping DRBD
services, and finally the node got declared Unclean and then the
shooting started.
Funny thing is that if I start drbd manually (by /etc/init.d/drbd
start)
Hello,
Thanks to this mailing list I made some changes to our cluster
configuration, because it was having some trouble (declaring the other node
unclean quite often).
I changed the mode of the bond0 interface between the nodes to mode "1"
(active-backup). Also, previously I had two drbd resources, r1 and
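For the record, a sketch of how the bond0 mode "1" change mentioned above
might look in /etc/network/interfaces on Debian (ifenslave syntax; interface
names and address are assumptions, not the poster's actual config):

auto bond0
iface bond0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bond-slaves eth1 eth2
        bond-mode 1
        bond-miimon 100

Mode 1 (active-backup) keeps only one slave active at a time, which avoids
the switch-dependent behaviour of the load-balancing modes on a replication
link.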