Yes, the bucket that represents the new host is under the ROOT bucket, like
the others, and its OSDs are in the right/expected bucket.

I am guessing that the problem is the number of PGs. I have 120 OSDs across
all hosts, and I suspect that the 512 PGs the pool is currently using are
not enough. I have not changed it yet, because I first wanted to understand
the effect of the PG count on a Ceph pool's usable volume.
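
For reference, the rule of thumb I have seen (the old pgcalc approach, before
the autoscaler) is roughly 100 PGs per OSD, divided by the pool's replica
size, rounded to a power of two. A minimal sketch of that arithmetic in
Python, assuming a replicated pool with size 3:

    import math

    osds = 120            # OSDs across all hosts
    target_per_osd = 100  # rule-of-thumb target PGs per OSD
    replica_size = 3      # assumption: replicated pool, size=3

    raw = osds * target_per_osd / replica_size   # 4000.0
    pg_num = 2 ** round(math.log2(raw))          # nearest power of two -> 4096
    print(pg_num)

If that rule of thumb applies here, my understanding is that 512 PGs spread
over 120 OSDs leaves the data distribution quite uneven, and since the pool's
reported available space follows the fullest OSDs, that could explain why
adding a host did not show up as extra capacity.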

On Thu, Feb 13, 2025 at 12:03 PM Anthony D'Atri <anthony.da...@gmail.com>
wrote:

> Does the new host show up under the proper CRUSH bucket?  Do its OSDs?
> Send `ceph osd tree` please.
>
>
> >>
> >>
> >>      > Hello guys,
> >>      > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
> >> single
> >>      > pool that consumes all OSDs of all nodes. After adding another
> >> host, I
> >>      > noticed that no extra space was added. Can this be a result of
> >> the
> >>      > number
> >>      > of PGs I am using?
> >>      >
> >>      > I mean, when adding more hosts/OSDs, should I always consider
> >> increasing
> >>      > the number of PGs of a pool?
> >>      >
> >>
> >>      ceph osd tree
> >>
> >>      shows all up and with correct weight?
> >>
> >
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
