[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread Brian Topping
… failure rate (MTBF, or similar) to determine if your service guarantees are impacted. > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > dhils...@performair.com > www.PerformAir.com …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread DHilsbos
Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. dhils...@performair.com www.PerformAir.com …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread Max Krasilnikov
Hello! Fri, May 29, 2020 at 09:58:58AM +0200, pr wrote: > Hans van den Bogert (hansbogert) writes: > > I would second that, there's no winning in this case for your requirements and single PSU nodes. If there were 3 feeds, then yes; you could make an extra layer in your crushmap much …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread Phil Regnauld
Burkhard Linke (Burkhard.Linke) writes: > Buy some power transfer switches. You can connect those to the two PDUs, and in case of a power failure on one PDU they will still be able to use the second PDU. ATS = power switches (in my original mail). > We only use them for "small" …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread Phil Regnauld
Hans van den Bogert (hansbogert) writes: > I would second that, there's no winning in this case for your requirements and single PSU nodes. If there were 3 feeds, then yes; you could make an extra layer in your crushmap much like you would incorporate a rack topology in the crushmap. …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-29 Thread Phil Regnauld
Chris Palmer (chris.palmer) writes: > Immediate thought: Forget about crush maps, OSDs, etc. If you lose half the nodes (when one power rail fails) your MONs will lose quorum. I don't see how you can win with that configuration... That's a good point, I'll have to think that one through …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-28 Thread EDH - Manuel Rios
… original message from Burkhard Linke (Thursday, 28 May 2020 15:25, to ceph-users@ceph.io): Hi, On 5/28/20 2:18 PM, Phil Regnauld wrote: > Hi, in our production cluster, we have the following setup *snipsnap* …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-28 Thread Burkhard Linke
Hi, On 5/28/20 2:18 PM, Phil Regnauld wrote: Hi, in our production cluster, we have the following setup *snipsnap* Buy some power transfer switches. You can connect those to the two PDUs, and in case of a power failure on one PDU they will still be able to use the second PDU. We only …

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-28 Thread Hans van den Bogert
I would second that, there's no winning in this case for your requirements and single PSU nodes. If there were 3 feeds, then yes; you could make an extra layer in your crushmap much like you would incorporate a rack topology in the crushmap. On 5/28/20 2:42 PM, Chris Palmer wrote: Immediate …
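
For illustration only: stock Ceph CRUSH maps already define a bucket type named "pdu", so the extra layer described here could be sketched with the standard CLI along these lines. The names feed-a, feed-b, node01, node02, mypool and rep-per-feed are placeholders, not taken from this thread:

    # one bucket per power feed, hung under the default root
    ceph osd crush add-bucket feed-a pdu
    ceph osd crush add-bucket feed-b pdu
    ceph osd crush move feed-a root=default
    ceph osd crush move feed-b root=default

    # place each host bucket under the feed that powers it
    ceph osd crush move node01 pdu=feed-a
    ceph osd crush move node02 pdu=feed-b

    # replicated rule using the feed (pdu) level as the failure domain,
    # then apply it to a pool
    ceph osd crush rule create-replicated rep-per-feed default pdu
    ceph osd pool set mypool crush_rule rep-per-feed

Note that with only two feeds this still cannot put three replicas in three distinct feeds, which is exactly the limitation pointed out above; the extra layer only starts to pay off once a third independent feed exists.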

[ceph-users] Re: CEPH failure domain - power considerations

2020-05-28 Thread Chris Palmer
Immediate thought: Forget about crush maps, OSDs, etc. If you lose half the nodes (when one power rail fails) your MONs will lose quorum. I don't see how you can win with that configuration... On 28/05/2020 13:18, Phil Regnauld wrote: Hi, in our production cluster, we have the following setup …
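
To put numbers on the quorum point: the monitors need a strict majority of the monmap to form quorum, i.e. at least floor(N/2)+1 of N MONs. With the MONs spread over only two power feeds, one feed necessarily carries at least half of them, so losing that feed leaves at most floor(N/2) MONs, below the required majority, and the cluster becomes unavailable regardless of how the OSDs are mapped. A quick way to see how many MONs exist and which of them currently form quorum (plain ceph CLI, nothing specific to this thread):

    # monitors known to the monmap, with their addresses
    ceph mon dump

    # current quorum membership is listed under "quorum_names"
    ceph quorum_status -f json-pretty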