Hi again,
turns out the long bootstrap time was my own fault. I had some down+out
OSDs for quite a long time, which prevented the monitors from pruning
the OSD maps. Makes sense when I think about it, but I didn't before.
Rich's hint to get the cluster to health OK first pointed me in the
right direction.
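In case anyone else runs into this, a sketch of one way to check for it
(assuming jq is available; the osdmap_first_committed /
osdmap_last_committed fields come from "ceph report"):

  # Show the range of osdmap epochs the mons still store; a large gap
  # between first and last committed means trimming is blocked.
  ceph report 2>/dev/null | jq '.osdmap_first_committed, .osdmap_last_committed'

  # List only the OSDs that are currently down; down+out OSDs left in
  # the CRUSH map can keep the mons from trimming old osdmaps.
  ceph osd tree down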
This is the new min_alloc_size for BlueStore. With a 4K min_alloc_size,
mkfs takes more time, and the process is single-threaded, I think.
It's normal.
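If you want to verify, something like this should show the defaults
(a sketch; these are the bluestore_min_alloc_size_hdd/_ssd options,
which only take effect at OSD mkfs time):

  # Defaults applied when a new OSD is created; existing OSDs keep
  # whatever value they were built with.
  ceph config get osd bluestore_min_alloc_size_hdd
  ceph config get osd bluestore_min_alloc_size_ssd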
k
> On 9 Jun 2021, at 14:21, Jan-Philipp Litza wrote:
>
> I mean freshly deployed OSDs. Restarted OSDs don't exhibit that behavior.
Hi Konstantin,
I mean freshly deployed OSDs. Restarted OSDs don't exhibit that behavior.
Best regards,
Jan-Philipp
Hi Rich,
> I've noticed this a couple of times on Nautilus after doing some large
> backfill operations. It seems the osd map doesn't clear properly after
> the cluster returns to Health OK and builds up on the mons. I do a
> "du" on the mon folder e.g. du -shx /var/lib/ceph/mon/ and this shows
>
Hi,
Do you mean freshly deployed OSDs, or existing OSDs that were just restarted?
Thanks,
k
> On 8 Jun 2021, at 23:30, Jan-Philipp Litza wrote:
>
> recently I'm noticing that starting OSDs for the first time takes ages
> (like, more than an hour) before they are even picked up by the mo
Hi Jan-Philipp,
I've noticed this a couple of times on Nautilus after doing some large
backfill operations. It seems the osd map doesn't clear properly after
the cluster returns to Health OK and builds up on the mons. I do a
"du" on the mon folder e.g. du -shx /var/lib/ceph/mon/ and this shows
several GB in use.
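Once the cluster can trim the old maps again, compacting the mon store
usually gets the space back. A sketch ("a" below is just a placeholder
for your mon's actual id):

  # Check the mon store size, then ask that mon to compact its store.
  du -shx /var/lib/ceph/mon/
  ceph tell mon.a compact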