[ceph-users] Re: osd_memory_target for low-memory machines

2022-10-03 Thread Nicola Mori
There is definitely something wrong in how my cluster manages osd_memory_target. For example, this is the situation for OSD 16:
# ceph config get osd.16 osd_memory_target
280584038
# ceph orch ps ogion --daemon_type osd --daemon_id 16
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE
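A minimal cross-check sketch, assuming the daemon in question is osd.16 on host ogion (names taken from the thread): ceph config get reports the value stored in the monitors, while ceph config show reports what the running daemon is actually using.
# ceph config get osd.16 osd_memory_target
# ceph config show osd.16 osd_memory_target
# ceph orch ps ogion --daemon_type osd --daemon_id 16 --refresh
If the two config values disagree, the daemon has not picked up the new target yet and needs a restart.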

[ceph-users] Re: osd_memory_target for low-memory machines

2022-10-03 Thread Dan van der Ster
Hi, 384 MB is far too low for a Ceph OSD. The warning is telling you that it's below the minimum. Cheers, Dan

On Sun, Oct 2, 2022 at 11:10 AM Nicola Mori wrote:
>
> Dear Ceph users,
>
> I put together a cluster by reusing some (very) old machines with low
> amounts of RAM, as low as 4 GB for the wo
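As a purely illustrative sketch (the 2 GiB value and the osd.16 name are examples, not recommendations from this thread), the per-daemon target could be raised and then verified:
# ceph config set osd.16 osd_memory_target 2147483648
# ceph config get osd.16 osd_memory_target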

[ceph-users] Re: osd_memory_target for low-memory machines

2022-10-03 Thread Janne Johansson
> There is definitely something wrong in how my cluster manages
> osd_memory_target. For example, this is the situation for OSD 16:
> The memory limit seems to be correctly set (I disabled the memory
> autotune on the host, set the limit manually with --force and rebooted
> the host) but neverthele
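For reference, a hedged sketch of the steps being described here, assuming cephadm manages the host (the _no_autotune_memory host label and the --force flag on ceph config set are the usual mechanisms; treat the host and daemon names as placeholders):
# ceph orch host label add ogion _no_autotune_memory
# ceph config set osd.16 osd_memory_target 1073741824 --force
# ceph orch daemon restart osd.16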

[ceph-users] Re: iscsi deprecation

2022-10-03 Thread Stefan Kooman
On 9/30/22 19:36, Filipe Mendes wrote:
Hello! I'm considering switching my current storage solution to Ceph. Today we use iSCSI as a communication protocol, and we use several different hypervisors: VMware, Hyper-V, XCP-ng, etc. I was reading that the current version of Ceph has discontinued i

[ceph-users] Re: osd_memory_target for low-memory machines

2022-10-03 Thread Nicola Mori
Thank you for the detailed answer. I'll buy more RAM for the machine. Nicola

On 03/10/22 10:02, Janne Johansson wrote:
There is definitely something wrong in how my cluster manages osd_memory_target. For example, this is the situation for OSD 16: The memory limit seems to be correctly set (I di

[ceph-users] Stuck in upgrade

2022-10-03 Thread Jan Marek
Hello, I have a problem with our Ceph cluster: I'm stuck in the upgrade process between versions 16.2.7 and 17.2.3. My problem is that I have upgraded the MON, MGR, and MDS processes, and when I started upgrading the OSDs, Ceph told me that I cannot add an OSD with that version to the cluster, because I have a problem wi
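A hedged starting point for diagnosing this kind of stall (standard cephadm/Ceph commands; whether require_osd_release is actually what is blocking the OSDs here is an assumption):
# ceph orch upgrade status
# ceph versions
# ceph osd dump | grep require_osd_release
If require_osd_release lags well behind the target release, Quincy OSDs may refuse to join until it is bumped (ceph osd require-osd-release pacific), but check the 17.2 upgrade notes before changing it.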

[ceph-users] Convert mon kv backend to rocksdb

2022-10-03 Thread Reed Dier
I was recently reading this thread: https://www.mail-archive.com/ceph-users@ceph.io/msg16705.html Out of curiosity I decided to take a look, and it turns out that 2 of my 3 mons are using rocksdb, while I still somehow have a leveldb mon
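A hedged way to confirm which backend each mon is on (the data-directory path assumes a traditional package layout; containerized deployments keep the mon store elsewhere):
# cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend
# ceph config get mon mon_keyvaluedb
As far as I know there is no in-place conversion; the usual approach is to remove the leveldb mon and re-add it so it is recreated with the current default rocksdb backend.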

[ceph-users] Re: osd_memory_target for low-memory machines

2022-10-03 Thread Mark Nelson
Hi Nicola, I wrote the autotuning code in the OSD.  Janne's response is absolutely correct.  Right now we just control the size of the caches in the OSD and rocksdb to try to keep the OSD close to a certain memory limit.  By default this works down to around 2GB, but the smaller the limit, th
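For context, a hedged sketch of the knobs Mark is describing (option names are the standard OSD/BlueStore ones; the values are only illustrative):
# ceph config set osd osd_memory_target 4294967296
# ceph config get osd osd_memory_cache_min
# ceph config get osd bluestore_cache_autotune
These only bound the OSD and rocksdb caches, so the process can still drift above the target under load, which is why very small targets are hard to honor.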

[ceph-users] Re: How to remove remaining bucket index shard objects

2022-10-03 Thread 伊藤 祐司
Hi,
> Try to deep-scrub all PGs of your index pool
After removing the index objects, I ran deep-scrub for all PGs of the index pool. However, the problem wasn't resolved. Thanks, Yuji
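For reference, a hedged one-liner for deep-scrubbing every PG of an index pool (the pool name and the jq filter over the JSON output of ceph pg ls-by-pool are assumptions; adjust to your cluster):
# for pg in $(ceph pg ls-by-pool default.rgw.buckets.index -f json | jq -r '.pg_stats[].pgid'); do ceph pg deep-scrub "$pg"; done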