Thanks for the prompt reply.

Yes, it does. All OSDs are up, with the correct device class used by the
CRUSH algorithm.
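
For reference, these are the checks I used (output trimmed; the device
class values are whatever the OSDs were created with):

# every OSD should be up, in, and carry the expected CRUSH weight and class
ceph osd tree

# per-OSD utilization and CRUSH weights, laid out along the CRUSH tree
ceph osd df tree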

On Thu, Feb 13, 2025 at 7:47 AM Marc <m...@f1-outsourcing.eu> wrote:

> > Hello guys,
> > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
> > pool that spans all OSDs on all nodes. After adding another host, I
> > noticed that no extra space was added. Could this be a result of the
> > number of PGs I am using?
> >
> > I mean, when adding more hosts/OSDs, should I always consider increasing
> > the number of PGs in the pool?
> >
>
> ceph osd tree
>
> shows all OSDs up and with the correct weight?
>
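On the PG question itself: as far as I understand, raw capacity in "ceph df"
should grow as soon as the new OSDs are in, but a pool's MAX AVAIL is limited
by the fullest OSD, so too few PGs can hide the new space through poor
balance. These are the commands I am using to check (pool name "mypool" is
just an example):

# raw vs. per-pool available capacity
ceph df

# the pool's current PG count
ceph osd pool get mypool pg_num

# what the pg_autoscaler recommends, if the module is enabled
ceph osd pool autoscale-status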
