Hi Adrian, Dag,

thanks for the feedback!

Although I do understand what you're saying, Dag, I tend to believe that 
syncing the storage shouldn't be that much of an issue, given two prerequisites 
of course:
- A direct connection via decent network infrastructure (point-to-point / direct 
connect over 10GbE should be enough?)
- Volumes and load that are manageable enough for the storage subsystem to 
synchronise.
Obviously, high IO load on such clustered storage will cause issues at the 
moment of a failover.

Since I am looking at rather "low IO" machines, I'd say that this risk is very 
low to non-existent.
Add to that the fact that I can tell DRBD to only report a write as complete 
once it has been fully written to the peer copy and acknowledged.
Yes, that possibly makes things a bit slower, but it makes sure a change only 
counts as written once both storage sides have acknowledged it.
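
For reference, a minimal sketch of what that could look like in a DRBD resource 
file (DRBD 8.4-style syntax; device names, hostnames and addresses below are 
placeholders, not an actual config):

    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb1;          # backing block device (placeholder)
        meta-disk internal;
        net {
            protocol C;               # fully synchronous: a write completes only
                                      # after both nodes have it on stable storage
            allow-two-primaries yes;  # only if you want primary/primary (live migration)
        }
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

Protocol C is exactly that "only acknowledge once both copies are written" 
behaviour.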

Adrian, I have not yet looked at CLVM and GFS2, but I definitely will!
Thanks for the info, I'll keep you posted ...

JK

-----Original Message-----
From: Adrian Sender [mailto:[email protected]] 
Sent: Monday, 28 November 2016 02:50
To: [email protected]; [email protected]
Subject: Re: AW: 2node HA ACS Cluster with DRBD

I have set up and run CloudStack using KVM, DRBD and CLVM primary/primary - I used 
to run the VMs on the storage nodes themselves (better IO). No need to run 
iSCSI / NFS exports that slow everything down.

As DRBD is currently limited to 2 nodes, I run multiple 2-node KVM clusters 
with DRBD.

I noticed much better IO using CLVM than with the GFS2 file system.
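
Roughly: DRBD (primary/primary) provides the replicated block device, clustered 
LVM sits on top of it, and each guest disk is just a logical volume - no cluster 
file system in between. Something like this (device and VG names are only 
examples):

    # the DRBD device becomes the clustered LVM physical volume
    pvcreate /dev/drbd0
    vgcreate --clustered y vg_guests /dev/drbd0   # clvmd must be running on both nodes
    # one LV per guest disk, handed to KVM as a raw block device
    lvcreate -L 50G -n vm-disk-example vg_guests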

Everything worked great and I ran this configuration for years. Never had 
corruption or data loss.

Regards,
Adrian Sender

---------- Original Message -----------
From: Dag Sonstebo <[email protected]>
To: "[email protected]" <[email protected]>, 
"[email protected]" <[email protected]>
Sent: Fri, 25 Nov 2016 09:32:42 +0000
Subject: Re: AW: 2node HA ACS Cluster with DRBD

> Hi Jeroen,
> 
> Sorry I missed you yesterday – meant to catch up after the 
> user group but had to run to catch my flight.
> 
> I think what you describe will work – however I have my 
> doubts it will fail over gracefully. For pacemaker to fail over 
> cleanly the failover has to be perfectly synched – i.e. all 
> packets have to be written in both primary storage pools, traffic 
> ideally quiesced – then pacemaker can move the NFS or iSCSI 
> endpoint. If you are even a byte out you could end up with data 
> corruption – and even if this does work I have my doubts the VMs 
> would stay online afterwards.
> 
> Anyway – as the proverb goes, the proof is in the pudding – so I 
> can only suggest you test this out. Very interested in the result 
> though – so please let us know how you get on (if it works 
> it would be a good talk for the next user group ☺ ).
> 
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> From: Jeroen Keerl <[email protected]>
> Reply-To: "[email protected]" <[email protected]>, 
> "[email protected]" <[email protected]>
> Date: Wednesday, 23 November 2016 at 22:43
> To: "[email protected]" <[email protected]>
> Subject: AW: 2node HA ACS Cluster with DRBD
> 
> Hi Dag, Erik,
> 
> thanks for your input so far.
> What I am aiming for is a "HyperConverged" infrastructure, if possible 
> with just two servers.
> 
> The reason why I didn't look into Ceph any further is that they 
> explicitly state that it needs 3 hosts. Apart from that, there 
> seems to be quite a lot of resources needed to get Ceph up & running.
> 
> DRBD and GlusterFS look like they're not that heavy on load.
> GlusterFS has moved away from 2-host-only setups as well, and it seems less 
> flexible when it comes to expansion, if I recall correctly.
> 
> Hence: DRBD, which runs in master-slave or dual-master mode.
> Together with Pacemaker and NFS or iSCSI software, this could work, 
> albeit - after thinking it all over - probably in master-slave mode, 
> since the shared / clustered IP address can only be available on one 
> of the two nodes.
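> 
> Roughly what I have in mind on the Pacemaker side (pcs syntax, untested, all 
> resource / device / IP names are placeholders):
> 
>     pcs resource create p_drbd ocf:linbit:drbd drbd_resource=r0 op monitor interval=30s
>     pcs resource master ms_drbd p_drbd master-max=1 master-node-max=1 \
>         clone-max=2 clone-node-max=1 notify=true
>     pcs resource create p_fs ocf:heartbeat:Filesystem device=/dev/drbd0 \
>         directory=/export fstype=ext4
>     pcs resource create p_ip ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=24
>     pcs resource create p_nfs ocf:heartbeat:nfsserver nfs_shared_infodir=/export/nfsinfo
>     pcs resource group add g_nfs p_fs p_nfs p_ip
>     # file system, NFS server and VIP follow the DRBD master
>     pcs constraint colocation add g_nfs with master ms_drbd INFINITY
>     pcs constraint order promote ms_drbd then start g_nfs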
> 
> As written before: HA-Lizard does all this out of the box, including 
> HA - if needed, and fairly well too.
> 
> Since I'll be hopping over to visit the CS User Group tomorrow, I'll 
> have no time to look into this any further until Tuesday.
> (Dag, will I have the chance to see you there as well?)
> 
> Cheers
> JK
> 
> [email protected]
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> 
> -----Original Message-----
> From: Dag Sonstebo [mailto:[email protected]]
> Sent: Wednesday, 23 November 2016 10:35
> To: [email protected]
> Subject: Re: 2node HA ACS Cluster with DRBD
> 
> Hi Jeroen,
> 
> My twopence worth:
> 
> First of all – I guess your plan is to run two nodes – each with 
> CloudStack management, MySQL (master-slave), KVM and storage?
> 
> This all depends on your use case. As a bit of an experiment or as a 
> small scale lab I think this may work – but I would be very 
> reluctant to rely on this for a production workload. I think you will 
> potentially have stability and integrity issues at the storage level 
> in a HA failover scenario, and on top of this I don't think this 
> will scale well. You may also end up with considerable storage 
> overhead depending on number of nodes + technology used. With two 
> nodes you immediately only have 50% max space utilization.
> 
> Putting all of that aside I think it could work, I've played 
> with similar ideas in the past (without actually spending enough time 
> to get it working). I think you could get around the heartbeating / 
> split brain situations relatively easily. The CloudStack and MySQL 
> installs as well as KVM should work OK, but your challenge will be 
> storage, which both has to work in the synchronized setup you want + 
> very importantly fail over gracefully. I guess you would probably look 
> at Ceph - if Wido or any of the other Ceph users read this they are 
> probably better placed to advise.
> 
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> From: Jeroen Keerl <[email protected]>
> Reply-To: "[email protected]" <[email protected]>, 
> "[email protected]" <[email protected]>
> Date: Wednesday, 23 November 2016 at 00:05
> To: "[email protected]" <[email protected]>
> Subject: 2node HA ACS Cluster with DRBD
> 
> All,
> 
> I've been pondering a bit about a possible small-scale HA setup:
> 
> Hardware
> 2 nodes, each with 2x 1GbE and 2x 10GbE NICs. Both have hardware RAID 
> controllers, redundant PSUs, an array controller with BBWC and hot-plug SAS 
> drives.
> 
> If we'd "combine" the quick setup for CentOS 6
> ( http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.9/qig.html )
> with the "additional management server" part
> ( http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.9/management-server/index.html#additional-management-servers )
> then we'd have 2 management servers.
> To make the local storage of both nodes "HA", one could use DRBD 
> in a "primary-primary" setup, so VM migration is possible as well. 
> (I am not yet clear on how to present this storage to the nodes.)
> 
> To avoid split-brain situations, I see two possible strategies:
> 
> 1)       Direct cabling of the 10GbE controllers => loss of 
> connection can only mean either one host is dead, or at least its NIC 
> has died.
> 
> 2)       Usage of IPMI / fencing (rough sketch below)
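> 
> For the fencing option, something along these lines (pcs + fence_ipmilan; 
> addresses and credentials are obviously placeholders):
> 
>     pcs stonith create fence_node1 fence_ipmilan pcmk_host_list=node1 \
>         ipaddr=10.0.1.1 login=admin passwd=secret lanplus=1
>     pcs stonith create fence_node2 fence_ipmilan pcmk_host_list=node2 \
>         ipaddr=10.0.1.2 login=admin passwd=secret lanplus=1
>     # a node must never be responsible for fencing itself
>     pcs constraint location fence_node1 avoids node1
>     pcs constraint location fence_node2 avoids node2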
> 
> Did anybody gather experience with a setup like that?
> Does anybody have any thoughts on this – improvements, comments, 
> doubts: Hit me!
> 
> Cheers,
> JK
> 
> ·         I came to this "setup" after a few issues with XenServer 
> 6.5 combined with "HA-Lizard", which actually uses 
> DRBD in combination with TGTD iSCSI. On the one hand, I've 
> had quite a few stability issues with XenServer 6.5, and on the other: 
> the CS MS needs to be outside of this cluster … but still 
> needs to be installed on a redundant / HA piece of kit.
> 
------- End of Original Message -------





Jeroen Keerl


Keerl IT Services GmbH
Birkenstraße 1b . 21521 Aumühle

+49 177 6320 317

www.keerl-it.com
[email protected]

Managing Director: Jacobus J. Keerl
Register Court Lübeck, HRB No. 14511

Our general terms and conditions can be found here: http://www.keerl-it.com/AGB.pdf

