Can you share the output of `ceph daemon osd.8 config show` and `ceph config dump`?
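For reference, a quick way to pull just the memory-related settings out of those dumps (osd.8 is only the example daemon; the `ceph daemon` form has to be run on the node hosting that OSD, since it talks to the admin socket):

    ceph daemon osd.8 config show | grep memory    # values the daemon is actually running with
    ceph config dump | grep osd_memory             # values stored in the cluster configuration database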
On Sun, Oct 2, 2022 at 5:10 AM Nicola Mori wrote:
> Dear Ceph users,
>
> I put together a cluster by reusing some (very) old machines with low
> amounts of RAM, as low as 4 GB for the worst case. I'd need to set
> osd_memory_target
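The default osd_memory_target is 4 GiB, which clearly does not fit a 4 GB node once the OS and other daemons are accounted for. A minimal sketch of lowering it (the values are illustrative, not a recommendation from this thread):

    # cluster-wide, for all OSDs
    ceph config set osd osd_memory_target 2147483648     # 2 GiB
    # or only for a single daemon on the smallest host
    ceph config set osd.8 osd_memory_target 1610612736   # 1.5 GiB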
Hi Tim,
You might want to check your pool utilization and see if there are
enough PGs in that pool. Higher GB per PG can result in this scenario.
I am also assuming that you have the balancer module turned on; `ceph balancer
status` should tell you that as well.
If you have enough PGs in the bigger
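A few read-only commands cover those checks (nothing below changes the cluster):

    ceph balancer status        # balancer mode and whether it is active
    ceph osd df                 # per-OSD utilization and PG counts
    ceph df                     # per-pool stored data, for a GB-per-PG estimate
    ceph osd pool ls detail     # pg_num for each pool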
> st, and I've adjusted as per its
> recommendation.
>
> Tim.
>
> On Mon, Oct 24, 2022 at 09:24:58AM -0400, Joseph Mundackal wrote:
> > Hi Tim,
> > You might want to check your pool utilization and see if there are
> > enough PGs in that pool. Higher GB per
If the GB per PG is high, the balancer module won't be able to help.
Your PG count per OSD also looks low (in the 30s), so increasing PGs per pool
would help with both problems.
You can use the PG calculator to determine which pools need what.
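Once the calculator gives a target, raising a pool's PG count is a single command (the pool name and value below are placeholders; on recent releases pgp_num follows pg_num automatically, on older ones it has to be raised as well):

    ceph osd pool set <pool-name> pg_num 4096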
On Tue, Nov 1, 2022, 08:46 Denis Polom wrote:
> Hi
>
> If I'm looking correctly at the 'ceph osd df' output
> I posted, I see there are about 195 PGs per OSD.
>
> There are 608 OSDs in the pool, which is the only data pool. From what I
> have calculated, the PG calc says that the PG number is fine.
>
>
> On 11/1/22 14:03, Joseph Mundackal wrote:
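As a rough sanity check on the GB-per-PG point discussed above, the average is just a pool's stored data divided by its pg_num; the figures below are illustrative, not taken from this thread:

    ceph df                  # the STORED column gives per-pool data
    ceph osd pool ls detail  # pg_num for the same pool
    # e.g. a pool with 500 TiB stored across 16384 PGs:
    #   500 * 1024 / 16384  =  ~31 GiB per PG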