The min_size was set to 3; changing it to 1 solved the problem.
Thanks
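
For reference, a rough sketch of the commands involved (the pool name "rbd"
is an assumption; substitute your own pool). Note that min_size 1 lets the
cluster accept I/O with only one copy online, so it is worth raising it back
once the third server returns:

  ceph osd pool get rbd min_size    # was 3
  ceph osd pool set rbd min_size 1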

On Dec 10, 2016 02:06, "Christian Wuerdig" <christian.wuer...@gmail.com>
wrote:

> Hi,
>
> it's generally useful to provide some detail about the setup, such as:
> What are your pool settings - size and min_size?
> What is your failure domain - osd or host?
> What version of ceph are you running on which OS?
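>
> A quick way to gather all of that (a sketch; exact output varies by
> release):
>
>   ceph --version              # ceph release
>   ceph osd dump | grep pool   # per-pool size and min_size
>   ceph osd crush rule dump    # crush rules, including the failure domain
>   ceph osd tree               # host/OSD layout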
>
> You can check which specific PGs are problematic by running "ceph health
> detail", and then inspect one with "ceph pg x.y query" (where x.y is a
> problematic PG id reported by ceph health detail).
> http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-pg/
> might provide you some pointers.
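>
> For example (x.y stands for whichever PG id "ceph health detail" reports):
>
>   ceph health detail
>   ceph pg x.y query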
>
> One obvious fix would be to get your third OSD server up and running again -
> but I guess you're already working on this.
>
> Cheers
> Christian
>
> On Sat, Dec 10, 2016 at 7:25 AM, fridifree <fridif...@gmail.com> wrote:
>
>> Hi,
>> One of my three OSD servers is down and I'm getting this error,
>> and I don't have any access to RBDs on the cluster.
>>
>> Any suggestions?
>>
>> Thank you
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
