On Fri, 27 May 2022 at 18:26, Sarunas Burdulis wrote:
> Thanks. I don't recall creating any of the default.* pools, so they
> might have been created by ceph-deploy, years ago (Kraken?). They all have
> min_size 1, replica 2.
Those are automatically created by radosgw when it starts.
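If you want to move those default.* pools off size 2 / min_size 1, the usual pool commands apply; a rough sketch (the pool name below is only an example, substitute your own):

  ceph osd pool ls detail                         # shows size and min_size per pool
  ceph osd pool set default.rgw.log size 3        # raise the replica count
  ceph osd pool set default.rgw.log min_size 2    # raise min_size to match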
Try this:
ceph osd crush reweight osd.XX 0
--Mike
On 5/28/22 15:02, Nico Schottelius wrote:
Good evening dear fellow Ceph'ers,
when removing OSDs from a cluster, we sometimes use
ceph osd reweight osd.XX 0
and wait until the OSD's content has been redistributed. However, when
then fin
Hi,
draining is initiated by
ceph osd crush reweight osd.XX 0
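For completeness, a rough outline of the drain-and-remove sequence (a sketch only, assuming a systemd-managed, non-cephadm deployment; osd.XX is a placeholder):

  ceph osd crush reweight osd.XX 0   # drop the CRUSH weight so data migrates off the OSD
  ceph -s                            # wait until backfill/recovery has finished
  ceph osd out osd.XX                # mark the OSD out
  systemctl stop ceph-osd@XX         # stop the daemon on its host
  ceph osd crush remove osd.XX       # remove it from the CRUSH map
  ceph auth del osd.XX               # delete its cephx key
  ceph osd rm osd.XX                 # remove it from the OSD map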
On 28. 5. 2022 at 22:09:05, Nico Schottelius wrote:
>
> Good evening dear fellow Ceph'ers,
>
> when removing OSDs from a cluster, we sometimes use
>
> ceph osd reweight osd.XX 0
>
> and wait until the OSD's content has been redistributed
Hi - I started to update my 3-host cluster to RHEL 9, but came across a bit of a
stumbling block.
The upgrade uses the RHEL leapp process, which ran through a few simple
cleanup items and told me everything was hunky-dory, but when I kicked
off the first server, the server wouldn't
Hi,
I have an error when deleting a service from the dashboard.
Ceph version is 16.2.6.
HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed:
iSCSI gateway 'opcpmfpsbpp0101' does not exist retval: -2
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard
iscsi-gateway-rm failed: iSC
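In case it helps as a first step (assuming the dashboard is still carrying a stale gateway entry), you could check what it has configured and remove the stale name:

  ceph dashboard iscsi-gateway-list               # list gateways the dashboard knows about
  ceph dashboard iscsi-gateway-rm <gateway_name>  # remove a stale entry by name

After that, failing over the mgr (ceph mgr fail) may clear the module error once the underlying cause is gone.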