Yes, everything has finished converging already.
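For anyone who hits the same thing later: a quick way to confirm that the
data movement is done and that MAX AVAIL has caught up is something along
these lines (the usual ceph CLI checks, not taken verbatim from this thread):

    ceph -s            # the "pgs:" line should show all PGs active+clean
    ceph pg stat       # short PG summary; no backfill/recovery remaining
    ceph osd df tree   # per-OSD utilization; the new OSDs should be filling up
    ceph df            # MAX AVAIL for the pool should now reflect the new raw capacity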

On Thu, Feb 13, 2025 at 12:33 PM Janne Johansson <icepic...@gmail.com>
wrote:

> On Thu, 13 Feb 2025 at 12:54, Work Ceph
> <work.ceph.user.mail...@gmail.com> wrote:
> > Thanks for the feedback!
> > Yes, HEALTH_OK is there.
> > The OSD status shows all of them as "exists,up".
> >
> > The interesting part is that "ceph df" shows the correct values in the
> > "RAW STORAGE" section. However, for the SSD pool I have, it still shows
> > the previous value as the maximum usable space.
> > I had 384 TiB of RAW space before. The SSD pool is a replicated pool
> > with replica size 3, so I had about 128 TiB of possible usable space
> > for the pool. Now that I have added a new node, I would expect 480 TiB
> > of RAW space, which is what I see in the RAW STORAGE section, but the
> > usable space for the pool has not changed. I would expect it to grow to
> > about 160 TiB. I know these limits will never actually be reached, as
> > we have limits set at 85%-90% for each OSD.
>
> Have all PGs moved yet? If not, then you have to wait until the old
> OSDs have moved PGs over to the newly added ones.
>
> --
> May the most significant bit of your life be positive.
>
