Hi Nathan

Is that true?

The time it takes to reassign the primary PG introduces “downtime” by design,
right? Seen from a writing client's perspective.
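
For what it's worth, a minimal way to observe that window during a failure test (the PG id 1.0 below is just a placeholder) is to stop one OSD and watch peering:

    ceph -w                        # live cluster events
    ceph pg dump_stuck inactive    # PGs currently unable to serve I/O
    ceph pg 1.0 query              # peering/recovery state of a single PG

Writes to a PG should only stall while that PG is peering and electing a new acting primary.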

Jesper



Sent from myMail for iOS


Friday, 29 November 2019, 06.24 +0100 from pen...@portsip.com  
<pen...@portsip.com>:
>Hi Nathan, 
>
>Thanks for the help.
>My colleague will provide more details.
>
>BR
>On Fri, Nov 29, 2019 at 12:57 PM Nathan Fish < lordci...@gmail.com > wrote:
>>If correctly configured, your cluster should have zero downtime from a
>>single OSD or node failure. What is your crush map? Are you using
>>replica or EC? If your 'min_size' is not smaller than 'size', then you
>>will lose availability.
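>>
>>For example (the pool name 'mypool' below is a placeholder), the relevant
>>settings can be checked with:
>>
>>    ceph osd pool ls detail            # per-pool size, min_size, replicated/EC
>>    ceph osd pool get mypool size      # replica count
>>    ceph osd pool get mypool min_size  # replicas required to keep serving I/O
>>    ceph osd crush rule dump           # failure-domain layout
>>
>>With size=3 and min_size=2 across 3 nodes, a single node failure should
>>not block I/O.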
>>
>>On Thu, Nov 28, 2019 at 10:50 PM Peng Bo < pen...@portsip.com > wrote:
>>>
>>> Hi all,
>>>
>>> We are working on using Ceph to build our HA system; the goal is that the 
>>> system keeps providing service even when a Ceph node is down or an OSD is 
>>> lost.
>>>
>>> Currently, in our tests, once a node/OSD goes down the Ceph cluster takes 
>>> about 40 seconds to resync data, and our system cannot provide service 
>>> during that time.
>>>
>>> My questions:
>>>
>>> Is there any way to reduce the data sync time? (See the option sketch below.)
>>> How can we keep Ceph available once a node/OSD is down?
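>>>
>>> For reference, a sketch of the ceph.conf options that appear to govern this 
>>> detection window (the values shown are the upstream defaults, not a 
>>> recommendation):
>>>
>>>     [osd]
>>>     osd_heartbeat_interval = 6       # seconds between OSD-to-OSD heartbeats
>>>     osd_heartbeat_grace = 20         # missed-heartbeat window before peers report an OSD down
>>>
>>>     [mon]
>>>     mon_osd_min_down_reporters = 2   # reporters needed before the monitor marks an OSD down
>>>     mon_osd_down_out_interval = 600  # seconds before a down OSD is marked out and data rebalances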
>>>
>>>
>>> BR
>>>
>>> --
>>> The modern Unified Communications provider
>>>
>>>  https://www.portsip.com
>
>
>-- 
>The modern Unified Communications provider
>
>https://www.portsip.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
