Re: [ceph-users] general ceph cluster design

2016-11-28 Thread nick
Hi Ben, thanks for the information as well. It looks like we will first do some latency tests between our data centers (thanks for the netem hint) before deciding which topology is best for us. For simple DR scenarios, rbd mirroring sounds like the better solution so far. We are still fans of th…
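[A rough sketch of the kind of netem-based latency test mentioned above; the interface name, delay values, pool name, and benchmark parameters are only assumptions and would need adjusting to the actual test setup:]

  # inject ~5ms of artificial delay (with 1ms jitter) on the test host's NIC
  tc qdisc add dev eth0 root netem delay 5ms 1ms

  # run a simple write benchmark against a test pool while the delay is active
  rados bench -p rbd 60 write -t 16

  # remove the artificial delay again
  tc qdisc del dev eth0 root netem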

Re: [ceph-users] general ceph cluster design

2016-11-28 Thread Benjeman Meekhof
Hi Nick, We have a Ceph cluster spread across 3 datacenters at 3 institutions in Michigan (UM, MSU, WSU). It certainly is possible. As noted, you will see increased latency for write operations, and overall throughput drops as the inter-site latency increases. Latency between our sites is 3-5 ms. We did some…
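[For reference, a minimal sketch of how replicas can be placed in different datacenter buckets via a CRUSH rule in such a stretched cluster; the bucket/rule names, rule id, and pool name are hypothetical, and the exact commands should be checked against the Ceph release in use:]

  # dump and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # add a rule like the following to crushmap.txt, assuming datacenter
  # buckets already exist in the CRUSH hierarchy, so that each replica
  # lands in a different datacenter:
  #
  #   rule replicated_3dc {
  #       ruleset 1
  #       type replicated
  #       min_size 3
  #       max_size 3
  #       step take default
  #       step chooseleaf firstn 0 type datacenter
  #       step emit
  #   }

  # recompile, inject, and assign the rule to the pool used for RBD
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new
  ceph osd pool set rbd crush_ruleset 1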

Re: [ceph-users] general ceph cluster design

2016-11-27 Thread nick
Hi Maxime, thank you for the information. We will have a look and check. Cheers, Nick. On Friday, November 25, 2016 09:48:35 PM Maxime Guyot wrote: > Hi Nick, see inline comments. Cheers, Maxime. > On 25/11/16 16:01, "ceph-users on behalf of nick" wrote: >> Hi, …

Re: [ceph-users] general ceph cluster design

2016-11-25 Thread Maxime Guyot
Hi Nick, See inline comments. Cheers, Maxime. On 25/11/16 16:01, "ceph-users on behalf of nick" wrote: > Hi, we are currently planning a new ceph cluster which will be used for virtualization (providing RBD storage for KVM machines) and we have some general questions. …

[ceph-users] general ceph cluster design

2016-11-25 Thread nick
Hi, we are currently planning a new ceph cluster which will be used for virtualization (providing RBD storage for KVM machines) and we have some general questions. * Is it advisable to have one ceph cluster spread over multiple datacenters (latency is low, as they are not so far from each other …
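[As an illustration of the RBD-for-KVM part of the question, a minimal sketch of creating a replicated pool and an image for a guest disk; the pool name, PG count, and image name are made up, and sizing would need to be done properly for a real cluster:]

  # create a pool for VM disks (PG count is only an example; size 3 = three replicas)
  ceph osd pool create vms 1024 1024
  ceph osd pool set vms size 3

  # create a 100 GB image for a guest (size given in MB here)
  rbd create vms/vm-disk-001 --size 102400 --image-feature layering

  # qemu/libvirt can then use the image directly, e.g.:
  qemu-img info rbd:vms/vm-disk-001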