Hi Ilya,

> On 3 May 2021, at 14:15, Ilya Dryomov <idryo...@gmail.com> wrote:
> 
> I don't think empty directories matter at this point.  You may not have
> had 12 OSDs at any point in time, but the max_osd value appears to have
> gotten bumped when you were replacing those disks.
> 
> Note that max_osd being greater than the number of OSDs is not a big
> problem by itself.  The osdmap is going to be larger and require more
> memory but that's it.  You can test by setting it back to 12 and trying
> to mount -- it should work.  The issue is specific to how those OSDs
> were replaced -- something went wrong and the osdmap somehow ended up
> with rather bogus addrvec entries.  Not sure if it's ceph-deploy's
> fault, something weird in ceph.conf (back then) or an actual ceph
> bug.

What exactly is the bug here -- is it triggered whenever max_osd is greater
than the number of OSDs that are in?
Which kernels are affected?

For example, if max_osd is 132, 126 OSDs are in and the highest OSD number is
131 -- is that cluster affected?
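
For context, here is roughly how I'm checking this on our side -- just a
sketch that shells out to "ceph osd dump --format json"; the JSON field
names it assumes (max_osd, osds[].osd, osds[].in, public_addrs/addrvec) are
what recent releases print, so adjust for your version:

#!/usr/bin/env python3
# Rough sketch: compare max_osd with what is actually in the osdmap.
# Assumes the ceph CLI is available with an admin keyring, and that the
# JSON output carries the fields named above -- double-check on your release.
import json
import subprocess

dump = json.loads(
    subprocess.check_output(["ceph", "osd", "dump", "--format", "json"])
)

max_osd = dump["max_osd"]
osds = dump.get("osds", [])
highest_id = max((o["osd"] for o in osds), default=-1)
num_in = sum(1 for o in osds if o.get("in") == 1)

print("max_osd:      %d" % max_osd)
print("osd entries:  %d (in: %d, highest id: %d)" % (len(osds), num_in, highest_id))
print("gap:          %d" % (max_osd - (highest_id + 1)))

# The addrvec entries are what looked bogus in the original report,
# so print them for a quick eyeball check as well.
for o in osds:
    addrvec = o.get("public_addrs", {}).get("addrvec", [])
    print("osd.%d" % o["osd"], [a.get("addr") for a in addrvec])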



Thanks,
k
