On Wed, Aug 7, 2019 at 7:05 AM Paul Emmerich wrote:
> ~ is the internal implementation of device classes. Internally it's
> still using separate roots, that's how it stays compatible with older
> clients that don't know about device classes.
>
That makes sense.
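As an illustrative check of what Paul describes (Luminous or newer; the
class name will be whatever your OSDs actually report), the shadow roots
can be listed directly:

# ceph osd crush tree --show-shadow

On a cluster with an nvme class this shows entries such as "default~nvme"
next to the plain "default" root, which is that internal per-class
hierarchy.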
On Wed, Aug 7, 2019 at 9:30 AM Robert LeBlanc wrote:
>> # ceph osd crush rule dump replicated_racks_nvme
>> {
>>     "rule_id": 0,
>>     "rule_name": "replicated_racks_nvme",
>>     "ruleset": 0,
>>     "type": 1,
>>     "min_size": 1,
>>     "max_size": 10,
>>     "steps": [
>>
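As a hedged side note: if the goal is to express this with device classes
instead of a dedicated root, a class-aware rule can be created and its dump
compared. The rule name below is made up, and "default" and "rack" should
match whatever root and failure domain the cluster really uses:

# ceph osd crush rule create-replicated replicated_racks_nvme2 default rack nvme
# ceph osd crush rule dump replicated_racks_nvme2

A rule built this way takes from the "default~nvme" shadow root rather than
from a separate top-level root.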
On 8/7/19 2:30 PM, Robert LeBlanc wrote:
... plus 11 more hosts just like this
Interesting. Please paste your full `ceph osd df tree`. Which NVMe models
are you actually using?
Yes, our HDD cluster is much like this, but it isn't Luminous, so we
created a separate root with SSD OSDs for the metadata and set the
metadata pool to use a rule on that root.
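For context, a rough sketch of that pre-Luminous separate-root approach,
with made-up bucket, host, and rule names:

# ceph osd crush add-bucket ssd-root root
# ceph osd crush move ssd-node1 root=ssd-root
# ceph osd crush rule create-simple ssd-meta ssd-root host

The metadata pool is then pointed at the ssd-meta rule. Device classes make
that extra hierarchy unnecessary on Luminous.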
On Wed, Aug 7, 2019 at 12:08 AM Konstantin Shalygin wrote:
> On 8/7/19 1:40 PM, Robert LeBlanc wrote:
>
> > Maybe it's the lateness of the day, but I'm not sure how to do that.
> > Do you have an example where all the OSDs are of class ssd?
> I can't parse what you mean. You should always paste your `ceph osd tree`
> first.
Yes, we can set quotas to limit space usage.
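On the quota point, a hedged example: a pool quota can cap the data pool so
the metadata pool keeps headroom. The pool name and size here are
arbitrary:

# ceph osd pool set-quota cephfs_data max_bytes 10995116277760
# ceph osd pool get-quota cephfs_data

That caps cephfs_data at 10 TiB; get-quota just confirms what is set.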
On Tue, Aug 6, 2019 at 7:56 PM Konstantin Shalygin wrote:
> Is it possible to add a new device class like 'metadata'?
>
>
> Yes, but you don't need this. Just use your existing class with another
> crush ruleset.
>
Maybe it's the lateness of the day, but I'm not sure how to do that. Do you
have an example where all the OSDs are of class ssd?
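A minimal sketch of what that could look like when every OSD already
carries a single class (say nvme, as on this cluster); the rule and pool
names are only examples:

# ceph osd crush rule create-replicated meta-nvme default host nvme
# ceph osd pool set cephfs_metadata crush_rule meta-nvme

The first command builds a replicated rule restricted to the nvme class
under the default root with host as the failure domain; the second points
the metadata pool at it.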
Is it possible to add a new device class like 'metadata'?
Yes, but you don't need this. Just use your existing class with another
crush ruleset.
If I set the device class manually, will it be overwritten when the OSD
boots up?
Nope. Classes are assigned automatically when the OSD is created, not
reassigned when it boots.
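For completeness, a hedged example of setting a class by hand; the OSD id
and class name are placeholders, and an existing class has to be removed
before a new one can be set:

# ceph osd crush rm-device-class osd.12
# ceph osd crush set-device-class metadata osd.12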
On Tue, Aug 6, 2019 at 7:45 PM Robert LeBlanc wrote:
We have a 12.2.8 Luminous cluster with all NVMe and we want to take some of
the NVMe OSDs and allocate them strictly to metadata pools (we have a
problem with filling up this cluster and causing lingering metadata
problems, and this will guarantee space for metadata operations). In the
past, we have done this by creating a separate root with SSD OSDs for the
metadata pools.
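Before moving OSDs around, it can help to confirm which classes exist and
which OSDs carry them; a hedged check, assuming the class is nvme:

# ceph osd crush class ls
# ceph osd crush class ls-osd nvme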