On 17.06.2013 19:16, Digimer wrote:
On 06/17/2013 12:30 PM, Elmar Marschke wrote:
On 17.06.2013 15:59, Digimer wrote:
On 06/17/2013 09:53 AM, andreas graeper wrote:
hi,
i will not have a stonith device. i can test an 'expert power
control 8212' for a day, but in the end i will stay without one.
This is an extremely flawed approach. Clustering with shared storage and
without stonith will certainly cause data loss or corruption.
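For anyone following along, a minimal sketch of what a stonith setup could
look like with pcs and an IPMI-based device; the device name, address and
credentials below are examples, not taken from this thread:

  # define a stonith device in pacemaker (all values are placeholders)
  pcs stonith create fence_n1 fence_ipmilan \
      pcmk_host_list="n1" ipaddr="10.0.0.1" \
      login="admin" passwd="secret" \
      op monitor interval=60s

  # in drbd.conf, let drbd fence the peer through the cluster
  # (DRBD 8.x handler paths):
  #   disk { fencing resource-and-stonith; }
  #   handlers {
  #     fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
  #     after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  #   }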
If you look in your logs when you try to connect the two nodes, you will
likely see a message like "split-brain detected, dropping connection".
This is the result of not using fencing, as you created a condition where
both nodes went StandAlone and Primary.
To prevent this, you need to setup pacemaker with working stonith.
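Once split-brain has happened, drbd will not reconnect on its own; it has
to be resolved by hand. A sketch, assuming the resource is r0 (as in the
earlier mails) and that n2 holds the data you want to keep:

  # on n1, the node whose changes will be thrown away:
  drbdadm secondary r0
  drbdadm connect --discard-my-data r0

  # on n2, the survivor (only needed if it is StandAlone,
  # not if it already sits in WFConnection):
  drbdadm connect r0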
My guess is you don't have (working) fencing/stonith? Can you pastebin
your 'pcs config show' please? Also, 'drbdadm dump' please.
digimer
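Something like this, run on one node (plus /proc/drbd from both), should
collect everything asked for here:

  pcs config show  > pcs-config.txt
  drbdadm dump     > drbd-dump.txt
  cat /proc/drbd  >> drbd-dump.txt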
On 06/17/2013 09:32 AM, andreas graeper wrote:
hi,
i tried, as i found in a tutorial, to kill -9 corosync on the active node
(n1), but the other node (n2) failed to demote drbd. after corosync was
started again on n1, n2:drbd was left unmanaged.
but /proc/drbd on both nodes looked good: connected and uptodate.
how can a resource get managed again in such a situation?
little error: n2 failed to promote drbd!
when i try `drbdadm connect r0` on both nodes, it looks to me as if the
connection state only changes from StandAlone to WFConnection
if the other node is currently StandAlone; the two nodes never seem to be
in WFConnection at the same time.
thanks
andreas
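To see what actually happens during those connect attempts, something like
this on both nodes may help (r0 taken from the mail above):

  # check the current connection state first
  drbdadm cstate r0
  # if an earlier attempt left the node half-connected, reset it
  drbdadm disconnect r0
  # then retry; on a split-brain, dmesg or /var/log/messages will show
  # "split-brain detected, dropping connection" as described above
  drbdadm connect r0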
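As for the original question, getting the resource managed again after the
corosync kill test: a rough sketch with pcs, where ms_drbd_r0 is a
placeholder for whatever resource name 'pcs config show' actually reports:

  # clear the failcount/error state left over from the test
  pcs resource cleanup ms_drbd_r0
  # if the resource was flagged unmanaged, return it to cluster control
  pcs resource manage ms_drbd_r0
  # and verify
  crm_mon -1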