[ceph-users] Re: Performance issues RGW (S3)

2024-06-13 Thread Sinan Polat
500K object size.

> On 13 Jun 2024 at 21:11, Anthony D'Atri wrote:
>
> How large are the objects you tested with?
>
>> On Jun 13, 2024, at 14:46, si...@turka.nl wrote:
>>
>> I have been doing some further testing.
>>
>> My RGW pool is placed on spinning disks.
>> I crea…
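For reference, one way to measure what the spinning disks themselves deliver at that object size, with RGW taken out of the picture; the pool name rgw.bench, PG count and thread count below are illustrative, not from the thread:

    # create a scratch pool, write ~500 KB objects for 60 seconds, then clean up
    ceph osd pool create rgw.bench 32
    rados bench -p rgw.bench 60 write -b 512000 -t 16
    # deleting the pool requires mon_allow_pool_delete=true
    ceph osd pool delete rgw.bench rgw.bench --yes-i-really-really-mean-it

If rados bench is fast while S3 PUTs are slow, the bottleneck is in the RGW layer rather than the OSDs.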

[ceph-users] Re: delete s3 bucket too slow?

2024-06-18 Thread Sinan Polat
I don’t have much experience with the dashboard. Can you try radosgw-admin bucket rm and pass --bypass-gc --purge-objects?

> On 18 Jun 2024 at 17:44, Simon Oosthoek wrote:
>
> Hi
>
> when deleting an S3 bucket, the operation took longer than the time-out for
> the das…
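Spelled out, that would be roughly the following; the bucket name is a placeholder:

    # delete the bucket and all its objects, skipping the garbage collector
    radosgw-admin bucket rm --bucket=<bucket-name> --bypass-gc --purge-objects

--bypass-gc removes object data inline instead of queueing it for the garbage collector, which is usually much faster for large buckets at the cost of one long-running foreground command.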

[ceph-users] Re: How to change default osd reweight from 1.0 to 0.5

2024-06-19 Thread Sinan Polat
Are the weights correctly set? So 1.6 for a 1.6 TB disk, 1.0 for a 1 TB disk, and so on.

> On 19 Jun 2024 at 08:32, Jan Marquardt wrote:
>
>> Our Ceph cluster uses 260 OSDs.
>> The highest OSD usage is 87%, but the lowest is under 40%.
>> We consider low…
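To check, and if needed correct, the CRUSH weights; the OSD id and weight below are illustrative:

    ceph osd df tree                    # compare CRUSH weight, size and %USE per OSD
    ceph osd crush reweight osd.42 1.6  # permanent CRUSH weight, conventionally the size in TiB

Note that ceph osd crush reweight (the permanent CRUSH weight) is not the same knob as ceph osd reweight (the temporary 0.0-1.0 override the subject line refers to).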

[ceph-users] Adding OSD nodes

2025-03-17 Thread Sinan Polat
Hello,

I am currently managing a Ceph cluster that consists of 3 racks, each with 4 OSD nodes. Each node contains 24 OSDs.

I plan to add three new nodes, one to each rack, to help alleviate the high OSD utilization. The current highest OSD utilization is 85%. I am concerned about the possibility…
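One way to keep the resulting data movement gentle, sketched here assuming Octopus or newer for the ceph config syntax:

    ceph osd set norebalance                   # pause rebalancing while the new OSDs come up
    # ... deploy the three new OSD nodes ...
    ceph config set osd osd_max_backfills 1    # throttle concurrent backfills per OSD
    ceph osd unset norebalance                 # let backfill proceed at the reduced rate

While backfill runs, individual OSDs can temporarily fill up further before data drains off them, so at 85% it is worth watching the nearfull ratio closely.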

[ceph-users] Re: Help in upgrading CEPH

2025-05-15 Thread Sinan Polat
What is the output of:

    ceph osd dump | grep require_osd_release

Have you upgraded OpenStack (librados) as well?

> On Thu 15 May 2025 at 10:50, Sergio Rabellino wrote:
>
> Dear list,
>
> we are upgrading our Ceph infrastructure from Mimic to Octopus (please
> be kind, we know that we are working…
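What to look for, and the follow-up step; the release names assume the Mimic-to-Octopus path from the thread:

    ceph osd dump | grep require_osd_release
    # mid-upgrade this will typically still print: require_osd_release mimic
    ceph osd require-osd-release octopus   # run only once ALL OSDs are running Octopus

Raising require_osd_release tells the cluster it may rely on Octopus-only OSD features, so it must not be run while pre-Octopus OSDs remain.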

[ceph-users] Re: SWAP usage 100% on OSD hosts after migration to Rocky Linux 9 (Ceph 16.2.15)

2025-05-23 Thread Sinan Polat
Swap usage is not per se wrong - as long as you have enough available memory. However, it can still lead to performance issues. If you don't want to get rid of swap, check your swappiness settings (cat /proc/sys/vm/swappiness).

> On Fri 23 May 2025 at 14:53, Anthony D'Atri wrote:
>
> Swap can p…
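The concrete knobs; the value 10 is illustrative and the sysctl.d file name is arbitrary:

    cat /proc/sys/vm/swappiness                  # current value; the kernel default is usually 60
    sysctl vm.swappiness=10                      # lower the kernel's eagerness to swap, effective immediately
    echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf   # persist across reboots

On OSD hosts also check osd_memory_target against physical RAM: if the sum across all OSDs leaves no headroom, the kernel will swap regardless of the swappiness setting.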

[ceph-users] Re-uploading existing s3 objects slower than initial upload

2025-06-27 Thread Sinan Polat
Hi all,

I am investigating a performance issue related to uploading S3 objects and would appreciate any insights.

I have a bucket with ~200 million objects. I noticed that re-uploading files that already exist in the bucket appears to be slower than uploading them for the first time. To verify t…
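A minimal reproduction of the comparison, assuming s3cmd is configured and using a placeholder bucket name:

    dd if=/dev/urandom of=/tmp/obj bs=1M count=1              # 1 MiB of test data
    time s3cmd put /tmp/obj s3://bigbucket/perf-test-obj      # first upload, new key
    time s3cmd put /tmp/obj s3://bigbucket/perf-test-obj      # second upload, overwrites the same key

An overwrite makes RGW update the existing bucket-index entry and queue the old object's rados data for garbage collection, which is extra work a first-time upload does not do.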

[ceph-users] Managing RGW container logs filling up disk space

2025-07-07 Thread Sinan Polat
Hi all,

I am running a Ceph Octopus cluster deployed with cephadm. I have three RGW nodes; the RGW containers are writing their logs to /var/lib/docker/containers/<container-id>/<container-id>-json.log. Over time this causes the mount point to become 100% utilized.

What is the recommended way to manage or limit these logs?
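The standard answer for Docker's json-file log driver is rotation in the daemon config; the size and file-count values below are illustrative:

    # /etc/docker/daemon.json on each RGW node
    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m",
        "max-file": "5"
      }
    }

    systemctl restart docker

The settings only apply to containers created after the change, so the existing RGW containers keep their unbounded logs until they are redeployed (e.g. ceph orch redeploy <service-name>).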

[ceph-users] Re: Ceph OSD down (unable to mount object store)

2025-07-23 Thread Sinan Polat
Hi Vivien,

Your fio test has very likely destroyed the Ceph OSD block device. The problem is not just the symlink; it is data corruption on the underlying device. Zap the drive, recreate the OSD and let your cluster rebalance.

Sinan

> On Wed 23 Jul 2025 at 14:10, GLE, Vivien wrote:
>
> Hi,
> …
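The zap-and-recreate step, sketched for a cephadm-managed cluster; the OSD id and device name are placeholders:

    ceph osd out 17                          # if the OSD is not already marked out
    ceph orch osd rm 17 --zap                # remove the OSD and wipe its device (cephadm)
    # or, on the host itself, for non-orchestrated deployments:
    ceph-volume lvm zap /dev/sdX --destroy   # destroys all data and LVM metadata on the device

With the device wiped and a matching OSD service spec in place, cephadm normally recreates the OSD automatically, after which the cluster backfills the lost PG copies.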

[ceph-users] Re: Getting SSL Certificate Verify failed Error while installation

2025-07-30 Thread Sinan Polat
Not recommended, better to fix the problem, but what you can try is running your commands like:

    PYTHONHTTPSVERIFY=0 <command>

Also go to /etc/yum.repos.d/, find the Ceph repo files and add:

    sslverify = 0

But again, this is not recommended. It is just an unsafe workaround that might work for you.

Op w…
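Concretely, the two workarounds might look like this; the cephadm invocation, release name and repo file name are illustrative:

    # disable Python TLS certificate verification for a single command (unsafe)
    PYTHONHTTPSVERIFY=0 ./cephadm add-repo --release quincy

    # disable TLS verification for the dnf/yum repo (unsafe); adjust the file name to yours
    sed -i '/^\[ceph/a sslverify = 0' /etc/yum.repos.d/ceph.repo

The proper fix is usually updating the CA bundle (the ca-certificates package) or correcting a TLS-intercepting proxy, rather than disabling verification.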