2018-06-13 7:13 GMT+02:00 Marc Roos <m.r...@f1-outsourcing.eu>:

> I just added here 'class hdd'
>
> rule fs_data.ec21 {
>         id 4
>         type erasure
>         min_size 3
>         max_size 3
>         step set_chooseleaf_tries 5
>         step set_choose_tries 100
>         step take default class hdd
>         step choose indep 0 type osd
>         step emit
> }
>

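For context on the backfill you saw: adding `class hdd` to the `take` step points the rule at the per-class shadow hierarchy (`default~hdd`), whose buckets have different internal IDs than the plain `default` tree, so CRUSH recomputes placements and moves data even when every OSD is an hdd. You can compare the two hierarchies yourself (rule name here is taken from your paste; output will of course differ per cluster):

```shell
# Show the regular CRUSH tree plus the per-class shadow trees
# (shadow buckets appear as e.g. "default~hdd" with their own IDs)
ceph osd crush tree --show-shadow

# Dump the compiled rule to see which bucket id the take step now uses
ceph osd crush rule dump fs_data.ec21
```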
Somewhat off-topic, but: k=2/m=1 erasure coding is usually a bad idea for the
same reasons that size = 2 replicated pools are a bad idea: after a single
OSD failure you are left with no redundancy at all while recovery runs.
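If you can spare the capacity, a profile like k=4/m=2 keeps the pool readable through two simultaneous shard losses. A rough sketch of how you might set that up (the profile and pool names and the PG count are just placeholders, pick your own):

```shell
# Create an EC profile that tolerates two lost shards,
# restricted to hdd-class OSDs (names are examples, not a recommendation)
ceph osd erasure-code-profile set ec42_hdd k=4 m=2 crush-device-class=hdd

# Create a pool that uses this profile
ceph osd pool create fs_data.ec42 64 64 erasure ec42_hdd
```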


Paul


>
>
> -----Original Message-----
> From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
> Sent: woensdag 13 juni 2018 12:30
> To: Marc Roos; ceph-users
> Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
> update necessary?
>
> On 06/13/2018 12:06 PM, Marc Roos wrote:
> > Shit, I added this class and now everything started backfilling (10%).
> > How is this possible? I only have hdd's.
>
> This is normal when you change your crush and placement rules.
> Post the output of the following commands and I will take a look:
>
> ceph osd crush tree
> ceph osd crush dump
> ceph osd pool ls detail
>
>
> k
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
