You could also set osd_crush_initial_weight = 0. New OSDs will
automatically come up with a 0 weight and you won't have to race the clock.
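
For example, a minimal sketch (the exact mechanism depends on your
release; the "ceph config set" variant assumes a release with the
centralized config database, Mimic or later):

  # ceph.conf on the OSD hosts, set before the new OSDs are created
  [osd]
  osd_crush_initial_weight = 0

  # or centrally via the config database
  ceph config set global osd_crush_initial_weight 0

After that you can raise the crush weight at your own pace, for
instance with the kind of script Janne describes below.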

-Brett

On Thu, Oct 4, 2018 at 3:50 AM Janne Johansson <icepic...@gmail.com> wrote:

>
>
> On Thu, 4 Oct 2018 at 00:09, Bruno Carvalho <bruno...@gmail.com> wrote:
>
>> Hi Cephers, I would like to know how you are growing your clusters:
>> using dissimilar hardware in the same pool, or creating a pool for
>> each different hardware group. What problems would I run into using
>> different hardware (CPU, memory, disk) in the same pool?
>
>
> I don't think CPU and RAM (and other hardware-related things like HBA
> controller card brand) matter a lot; more is always nicer, but as long
> as you don't add worse machines, like Jonathan wrote, you should not
> see any degradation.
>
> What you might want to look out for is whether the new disks are very
> uneven compared to the old setup: if you used to have servers with
> 10x2TB drives and suddenly add one with 2x10TB, things might become
> very unbalanced, since those differences will not be handled
> seamlessly by the crush map.
>
> Apart from that, the only issue for us is "add drives, quickly set
> crush reweight to 0.0 before all existing OSD hosts shoot massive
> amounts of I/O at them, then script a slower raise of crush weight up
> to what they should end up at", to lessen the impact on our 24/7
> operations.
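>
> A minimal sketch of such a ramp-up (hypothetical OSD id and target
> weight, and the "wait for recovery to settle" check is only a rough
> heuristic -- adjust to your own cluster):
>
>   #!/bin/sh
>   OSD=osd.42        # newly added OSD (hypothetical id)
>   TARGET=3.64       # final crush weight, roughly the drive size in TiB
>   for W in $(seq 0.2 0.2 $TARGET); do
>       ceph osd crush reweight $OSD $W
>       # let backfill/recovery settle before the next bump
>       while ceph -s | grep -Eq 'backfill|recover'; do sleep 60; done
>   done
>   ceph osd crush reweight $OSD $TARGET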
>
> If you have weekends where no one accesses the cluster, or night-time
> low-IO usage patterns, just upping the weight at the right hour might
> suffice.
>
> Lastly, for SSD/NVMe setups with good networking, this is almost moot;
> they converge so fast it's almost unfair. Expanding flash-only
> pools/clusters is a real joy.
>
> --
> May the most significant bit of your life be positive.