[ceph-users] Re: osd_memory_target for low-memory machines

2022-10-02 Thread Joseph Mundackal
Can you share `ceph daemon osd.8 config show` and `ceph config dump`? On Sun, Oct 2, 2022 at 5:10 AM Nicola Mori wrote: > Dear Ceph users, > > I put together a cluster by reusing some (very) old machines with low > amounts of RAM, as low as 4 GB for the worst case. I'd need to set > osd_memory_
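
For context on the tuning under discussion, here is a minimal sketch of capping a single OSD's memory budget, assuming you want roughly 2 GiB on the 4 GB host (osd.8 and the value are placeholders, not taken from the thread):

    # The default osd_memory_target is 4 GiB, which leaves no headroom on a 4 GB machine.
    # Cap osd.8 at ~2 GiB (value in bytes); note osd_memory_target is a best-effort
    # target for the daemon's cache sizing, not a hard memory limit.
    ceph config set osd.8 osd_memory_target 2147483648

    # Confirm what the daemon actually sees:
    ceph config get osd.8 osd_memory_target
    ceph daemon osd.8 config get osd_memory_target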

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Joseph Mundackal
Hi Tim, You might want to check your pool utilization and see if there are enough PGs in that pool. Higher GB per PG can result in this scenario. I am also assuming that you have the balancer module turned on (`ceph balancer status` should tell you that as well). If you have enough PGs in the bigger
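
As a quick sketch of the checks suggested above (plain status commands; pool names and figures will be whatever your cluster reports):

    # Is the balancer module on, and in which mode (upmap / crush-compat)?
    ceph balancer status

    # Per-pool data stored and per-pool pg_num, to spot pools with a small
    # number of very large PGs (i.e. high GB per PG):
    ceph df
    ceph osd pool ls detail

    # Per-OSD fill level and PG count, to see how uneven things actually are:
    ceph osd df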

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Joseph Mundackal
st, and I've adjusted as per its > recommendation. > > Tim. > > On Mon, Oct 24, 2022 at 09:24:58AM -0400, Joseph Mundackal wrote: > > Hi Tim, > > You might want to check your pool utilization and see if there are > > enough PGs in that pool. Higher GB per

[ceph-users] Re: OSDs are not utilized evenly

2022-11-01 Thread Joseph Mundackal
If the GB per PG is high, the balancer module won't be able to help. Your PG count per OSD also looks low (30s), so increasing PGs per pool would help with both problems. You can use the PG calculator to determine which pools need what. On Tue, Nov 1, 2022, 08:46 Denis Polom wrote: > Hi > > I
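
To make the "GB per PG" point concrete, here is a rough sketch with illustrative numbers (pool name and PG targets are placeholders; pick real targets with the PG calculator):

    # Rule of thumb: GB per PG ~= data stored in the pool / pg_num.
    # e.g. 500 TiB spread over 2048 PGs averages ~250 GiB per PG, which is
    # too coarse-grained for the balancer to even out OSD utilization.

    # Raise the PG count on the big pool ("mypool" is a placeholder):
    ceph osd pool set mypool pg_num 4096
    # Pre-Nautilus releases also need pgp_num raised to match:
    ceph osd pool set mypool pgp_num 4096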

[ceph-users] Re: OSDs are not utilized evenly

2022-11-04 Thread Joseph Mundackal
looking correctly at the 'ceph osd df' output > I posted I see there are about 195 PGs per OSD. > > There are 608 OSDs in the pool, which is the only data pool. What I have > calculated - PG calc says that PG number is fine. > > > On 11/1/22 14:03, Joseph Mundackal wrote:
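
A back-of-the-envelope check of the numbers quoted above, assuming 3x replication (the thread does not state the pool's replica count):

    # 608 OSDs * ~195 PG replicas per OSD, divided by 3 copies per PG:
    echo $(( 608 * 195 / 3 ))   # => 39520, roughly the PG count of the pool
    # ~195 PG replicas per OSD sits inside the commonly cited 100-200 range,
    # so the PG count itself looks reasonable here.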