Hi Samuel,
Not sure if you know this, but if you don't use the default CRUSH map, you can
also use a custom location hook. It can be used to bring your OSDs into
the correct place in the CRUSH map the first time they start.
https://docs.ceph.com/en/quincy/rados/operations/crush-map/#custom-location-hooks
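For what it's worth, the hook is just an executable that prints the OSD's
intended CRUSH location on stdout. A minimal sketch, assuming a script at
/usr/local/bin/ceph-crush-location and a rack named rack1 (both made up for
the example):

    #!/bin/sh
    # Ceph invokes the hook as:
    #   <hook> --cluster <name> --id <osd-id> --type osd
    # and takes the single line printed on stdout as the OSD's
    # CRUSH location.
    echo "host=$(hostname -s) rack=rack1 root=default"

and in ceph.conf on the OSD hosts:

    [osd]
    crush location hook = /usr/local/bin/ceph-crush-location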
Hey,
A question slightly related to this:
> I would suggest that you add all new hosts and make the OSDs start
> with a super-low initial weight (0.0001 or so), which means they will
> be in and up, but not receive any PGs.
Is it possible to have the correct weight set and use ceph osd set
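(The question is cut off above; I'm assuming it refers to the cluster-wide
flags, e.g. "ceph osd set norebalance".) For Janne's low-initial-weight
scheme, a rough sketch on Luminous, where this still goes in ceph.conf
(the OSD id and weights below are example values):

    # ceph.conf on the new hosts, before their OSDs are created:
    [osd]
    osd crush initial weight = 0.0001

    # New OSDs then come up and in with a negligible weight and get no
    # PGs. Ramp each one up in steps, waiting for backfill to settle:
    ceph osd crush reweight osd.80 1.0
    # ...
    ceph osd crush reweight osd.80 10.9   # ~12TB HDD in TiB-based units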
Hi Samuel,
Both pgremapper and the CERN scripts were developed against Luminous,
and in my experience 12.2.13 has all of the upmap patches needed for
the scheme that Janne outlined to work. However, if you have a complex
CRUSH map sometimes the upmap balancer can struggle, and I think
that's true
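For reference, the scheme itself is roughly the following (the pgremapper
invocation is from memory of its README, so treat it as a sketch rather
than gospel):

    # Stop data movement while the new hosts/OSDs are added at full
    # CRUSH weight:
    ceph osd set norebalance
    # ... add the new hosts and their OSDs ...
    # Create upmap entries pinning every remapped PG back to the OSDs
    # currently holding its data, so nothing is misplaced:
    pgremapper cancel-backfill --yes
    ceph osd unset norebalance
    # Let the balancer remove those upmaps gradually, draining data
    # onto the new OSDs at a controlled pace:
    ceph balancer mode upmap
    ceph balancer on

On Luminous the balancer module is much less mature, which is where the
CERN scripts or removing upmap entries manually in batches come in.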
Janne,
thanks a lot for the detailed scheme. I totally agree that the upmap
approach would be one of the best methods; however, my current cluster is
running Luminous 12.2.13, and upmap does not seem to work reliably on
Luminous.
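One prerequisite worth double-checking before giving up on upmap there:
the mons will only accept upmap entries once the cluster has been told
that all connected clients are Luminous or newer:

    # Refuse pre-Luminous clients so upmap entries are allowed:
    ceph osd set-require-min-compat-client luminous
    # Shows which feature releases the connected clients report:
    ceph features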
samuel
huxia...@horebdata.cn
From: Janne Johansson
Date: 202
On Thu, 4 May 2023 at 10:39, huxia...@horebdata.cn wrote:
> Dear Ceph folks,
>
> I am writing to ask for advice on best practices for expanding a Ceph
> cluster. We are running an 8-node Ceph cluster with RGW, and would like to
> add another 10 nodes, each of which has 10x 12TB HDDs. The current 8-nod