The remapped PG state no longer triggers a health warning in Nautilus.

Your data is still there; it is just on the "wrong" OSD, as long as that
OSD is still up and running.
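
If you want to double-check, something along these lines should work
(using PG 1.4d from your example):

    # list all PGs currently in the remapped state
    ceph pg ls remapped

    # show the up and acting sets for a single PG
    ceph pg map 1.4d

"ceph health detail" should still report HEALTH_OK on Nautilus with only
remapped PGs, assuming nothing else is off in the cluster.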


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Jun 6, 2019 at 10:48 PM Tarek Zegar <tze...@us.ibm.com> wrote:

> For testing purposes I set a bunch of OSDs to 0 weight, which correctly
> forces Ceph not to use those OSDs. I took enough out that the UP set only
> had the pool's min_size number of OSDs (i.e., 2 OSDs).
>
> Two Questions:
> 1. Why doesn't the acting set eventually match the UP set and simply point
> to [6,5] only?
> 2. Why are none of the PGs marked as undersized and degraded? The data is
> only hosted on 2 OSDs rather than the pool size (3), so I would expect an
> undersized warning, and degraded for PGs holding data.
>
> Example PG:
> PG 1.4d active+clean+remapped UP = [6,5] Acting = [6,5,4]
>
> OSD Tree:
> ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
> -1       0.08817 root default
> -3       0.02939     host hostosd1
>  0   hdd 0.00980         osd.0            up  1.00000 1.00000
>  3   hdd 0.00980         osd.3            up  1.00000 1.00000
>  6   hdd 0.00980         osd.6            up  1.00000 1.00000
> -5       0.02939     host hostosd2
>  1   hdd 0.00980         osd.1            up        0 1.00000
>  4   hdd 0.00980         osd.4            up        0 1.00000
>  7   hdd 0.00980         osd.7            up        0 1.00000
> -7       0.02939     host hostosd3
>  2   hdd 0.00980         osd.2            up  1.00000 1.00000
>  5   hdd 0.00980         osd.5            up  1.00000 1.00000
>  8   hdd 0.00980         osd.8            up        0 1.00000
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
