[ceph-users] Re: How big an OSD disk could be?

2021-03-14 Thread Anthony D'Atri
> After you have filled that up, if such a host crashes or needs
> maintenance, another 80-100TB will need recreating from the other huge
> drives.

A judicious setting of mon_osd_down_out_subtree_limit can help mitigate the
thundering herd, FWIW.

> I don't think there are specific limitations on
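For reference, a minimal sketch of tuning that option, assuming a cluster where the centralized `ceph config` store is in use (Nautilus or later); on older releases the same option can go in ceph.conf instead:

```shell
# Show the current value. The default is "rack": if an entire subtree of
# that CRUSH type goes down, the mons will NOT automatically mark its
# OSDs "out" and trigger recovery.
ceph config get mon mon_osd_down_out_subtree_limit

# Raise the limit to "host", so that losing one whole host full of large
# OSDs does not kick off a mass re-replication (the "thundering herd").
ceph config set mon mon_osd_down_out_subtree_limit host
```

Note that with this set, OSDs on a downed host must be marked out manually (or the host repaired) before recovery proceeds.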

[ceph-users] Re: Safe to remove osd or not? Which statement is correct?

2021-03-14 Thread Boris Behrens
Hi,

do you know why the OSDs are not starting? When I had the problem that a
start did not work, I tried 'ceph-volume lvm activate --all' on the host,
which brought the OSDs back up. But I can't tell you whether it is safe to
remove the OSD.

Cheers
Boris

On Sun., 14 Mar 2021 at 02:38,
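A sketch of that recovery step, assuming systemd-managed OSDs provisioned with ceph-volume lvm (the OSD id `N` below is a placeholder for your own):

```shell
# Re-discover all ceph-volume LVM volumes on this host and activate them,
# which mounts the OSD data and starts the corresponding OSD services.
ceph-volume lvm activate --all

# Verify the OSDs came back up in the cluster map:
ceph osd tree

# Check an individual OSD's service if one is still down (N = OSD id):
systemctl status ceph-osd@N
```

If activation fails, the output of `ceph-volume lvm list` and the OSD's journal (`journalctl -u ceph-osd@N`) usually show why.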