On 12/12/2014 03:22 AM, Hauke Bruno Wollentin wrote:
> Hi Hauke,
>
> Personally, I wouldn't use DRBD for a case like that, because of the
> _missing_ replication here.
>
> IMHO your idea will work, but it would be easier to manage with some kind
> of file synchronisation tool like rsync, unison, etc. when the cold
> standby node comes up.

Agreed. I would say that cold standby isn't high-availability, so I don't think any HA software would be the right tool for the job.

If you also (separately) do a full backup of the main server for data backup purposes, more frequently than every two weeks, I would recommend rsync'ing the standby server from the backups. That gives you a more recent copy of the data in case you have to fail over when the main server has completely died, and it keeps the additional sync traffic off the production server.

-- Ken Gaillot

---
original message
timestamp: Wednesday, December 10, 2014 09:17:31 PM
from: Hauke Homburg <hhomb...@w3-creative.de>
to: pacemaker@oss.clusterlabs.org
cc:
subject: [Pacemaker] best Way to build a 2 Node Cluster with cold Standby ?
message id: <5488aa5b.4070...@w3-creative.de>

Hello,

I want to build a 2-node KVM cluster with the following features:

Node 1 is the primary node for some Linux virtual machines, and I want to
install node 2 as a second KVM server too, with the same virtual machines
on DRBD devices. I want to boot the second node every 2 weeks to sync the
data and then shut it down. That way I have a backup server in case the
first node fails.

What is the best way to do this? I think I would install DRBD on both
nodes and make the primary the DRBD master and the second machine the
DRBD slave. Does DRBD have problems when the slave device is shut down
for such a long time?

greetings

Hauke


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
