There are 16 hosts in the root associated with that ec rule.
[ceph-admin@admin libr-cluster]$ ceph osd lspools
1 cephfs_data,2 cephfs_metadata,35 vmware_rep,36 rbd,38 one,44 nvme,
48 iscsi-primary,49 iscsi-secondary,50 it_share,55 vmware_ssd,
56 vmware_ssd_metadata,57 vmware_ssd_2_1,
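In case it's useful, which root a rule takes and how many host buckets sit under it can be double-checked with something like this (the rule name is a placeholder):
$ ceph osd crush rule dump <ec-rule>   # the "take" step shows which root the rule starts from
$ ceph osd tree | grep -c ' host '     # rough count of host buckets in the CRUSH map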
I think you don't have enough hosts for your EC pool's CRUSH rule.
If your failure domain is host, then you need at least ten hosts.
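The profile behind the pool shows k, m and the failure domain; something along these lines should confirm it (pool and profile names are placeholders):
$ ceph osd pool get <pool> erasure_code_profile   # which EC profile the pool was created with
$ ceph osd erasure-code-profile get <profile>     # k + m chunks is the minimum host count with a host failure domain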
On Wed, Oct 24, 2018 at 9:39 PM Brady Deetz wrote:
>
> My cluster (v12.2.8) is currently recovering and I noticed this odd OSD ID in
> ceph health detail:
> "214748364
My cluster (v12.2.8) is currently recovering and I noticed this odd OSD ID
in ceph health detail:
"2147483647"
[ceph-admin@admin libr-cluster]$ ceph health detail | grep 2147483647
pg 50.c3 is stuck undersized for 148638.689866, current state
active+recovery_wait+undersized+degraded+remapped,
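For reference, 2147483647 is simply how an unfilled slot is printed when CRUSH cannot pick an OSD for that position. The mapping for the PG above can be inspected with, for example:
$ ceph pg map 50.c3     # up/acting sets; 2147483647 marks the slot CRUSH left empty
$ ceph pg 50.c3 query   # full peering and recovery detail for that PG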
On Wed, Oct 24, 2018 at 1:43 AM Florent B wrote:
> Hi,
>
> On a Luminous cluster having some misplaced and degraded objects after
> outage :
>
> health: HEALTH_WARN
> 22100/2496241 objects misplaced (0.885%)
> Degraded data redundancy: 964/2496241 objects degraded
> (0.039
Thanks Wido. That seems to have worked. I just had to pass the keyring
and monmap when calling mkfs. I saved the keyring from the monitor's
data directory and used that, then I obtained the monmap using ceph
mon getmap -o /var/tmp/monmap.
After starting the monitor it synchronized and recreated the
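For anyone hitting the same thing later, the sequence was roughly the following (mon id and temp paths are illustrative; the keyring can equally be the copy saved from the old mon data directory):
$ ceph auth get mon. -o /var/tmp/keyring   # or reuse the keyring saved from the mon data directory
$ ceph mon getmap -o /var/tmp/monmap       # current monmap from the quorum
$ sudo ceph-mon --mkfs -i <mon-id> --monmap /var/tmp/monmap --keyring /var/tmp/keyring
$ sudo systemctl start ceph-mon@<mon-id>   # the rebuilt mon then syncs its store from the other monitors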
Matt, Thank you very much!
On Wed, Oct 24, 2018 at 8:16 PM, Matt Benjamin wrote:
> We recommend disabling dynamic resharding until fixes for two known
> issues, plus a radosgw-admin command to remove traces of old dynamic
> reshard runs, land in master and Luminous (shortly for at least the
> first of these fi
We recommend disabling dynamic resharding until fixes for two known
issues, plus a radosgw-admin command to remove traces of old dynamic
reshard runs, land in master and Luminous (shortly for at least the
first of these fixes).
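Until then, dynamic resharding can be switched off with a single option on every radosgw instance, followed by a restart of the gateways (the section name below is a placeholder for your rgw client sections):
[client.rgw.<instance>]
rgw_dynamic_resharding = false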
Matt
On Wed, Oct 24, 2018 at 6:46 AM, Ch Wan wrote:
> Hi. I encounte
On Wed, Oct 24, 2018 at 1:09 PM, Florent B wrote:
> On a Luminous cluster having some misplaced and degraded objects after
> outage :
>
> health: HEALTH_WARN
> 22100/2496241 objects misplaced (0.885%)
> Degraded data redundancy: 964/2496241 objects degraded
> (0.039%), 3 pg
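To see which PGs are behind those counters, something like this is usually enough:
$ ceph health detail            # lists the degraded/undersized PGs by id
$ ceph pg dump_stuck degraded   # just the stuck degraded PGs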
Hi. I have been encountering this problem these days.
The clients get timeout exceptions while RGW is resharding.
Could we decrease the impact of dynamic resharding by adjusting some
configuration, or just increase the timeout threshold on the client side?
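For what it's worth, the reshard activity can at least be watched, and buckets can be resharded manually during a quiet window instead of waiting for the dynamic thread (bucket name and shard count below are placeholders):
$ radosgw-admin reshard list                                        # pending/ongoing reshard operations
$ radosgw-admin reshard status --bucket=<bucket>                    # per-bucket reshard state
$ radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<n>   # manual reshard, done off-peak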
On Mon, Jul 16, 2018 at 11:25 PM, Jakub Jaszewski wrote:
> Hi,
> We r