[ceph-users] Re: CephFS Snapshot Mirroring

2025-03-20 Thread Alexander Patrakov
Hello Vladimir, Please contact croit via https://www.croit.io/contact for unofficial (not yet fully reviewed) patches and mention me. They are currently working on a similar problem and have solved at least the following: * Crashes when two directories are being mirrored in parallel * Data corrup

[ceph-users] Re: March Ceph Science Virtual User Group

2025-03-20 Thread Enrico Bocchi
Dear All, An update on the Ceph Science User Group meeting next Tuesday: Mattia from the Federal Institute of Technology Zurich (ETH Zurich) has kindly agreed to present on CephFS performance. Also, the correct meeting time is 9am Central US, 2pm UTC, 3pm Central European -- My apologies for

[ceph-users] Re: One host down osd status error

2025-03-20 Thread Tim Holloway
Based on my experience, that error comes from 1 of 3 possible causes: 1. The machine in question doesn't have proper security keys 2. The machine in question is short on resources - especially RAM 3. The machine in question has its brains scrambled. Cosmic rays flipping critical RAM bits, bugs
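A few quick checks along those lines (host and OSD names below are hypothetical; all commands are from the standard ceph/cephadm CLI):

  # 1. Keys / connectivity: can the host's daemons authenticate against the monitors?
  ceph -s
  ceph auth get osd.12

  # 2. Resources: look for memory pressure or OOM kills on the affected host
  free -h
  dmesg | grep -i -e oom -e 'out of memory'

  # 3. General host sanity when deployed with cephadm
  cephadm check-host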

[ceph-users] CephFS Snapshot Mirroring

2025-03-20 Thread Vladimir Cvetkovic
Hi everyone, We have two remote clusters and are trying to set up snapshot mirroring. Our directories are in a syncing state but never seem to actually sync the snapshots. Snapshot mirroring seems way too slow for large file systems - does anyone have a working setup? Thanks!
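For reference, a minimal setup sketch of the usual cephfs-mirror workflow (the filesystem name "cephfs", site label "site-b", user "client.mirror_remote" and the directory path are placeholders -- adjust them to your clusters; a cephfs-mirror daemon must also be running on the source side):

  # On the target (backup) cluster: enable the mirroring module and create a bootstrap token
  ceph mgr module enable mirroring
  ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b

  # On the source (primary) cluster: enable mirroring, import the token, add the directory
  ceph mgr module enable mirroring
  ceph fs snapshot mirror enable cephfs
  ceph fs snapshot mirror peer_bootstrap import cephfs <bootstrap-token>
  ceph fs snapshot mirror add cephfs /path/to/directory

  # Check daemon and peer state
  ceph fs snapshot mirror daemon status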

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-03-20 Thread Eneko Lacunza
Hi Chris, I tried KRBD, even with a newly created disk and after shutting down and starting the VM again, but no measurable difference. Our Ceph is 18.2.4; that may be a factor to consider, but 9k -> 273k?! Maybe Giovanna can test the KRBD option and report back... :) Cheers On 20/3/25 at 16:19,

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-03-20 Thread Chris Palmer
Hi Eneko, No containers. In the Proxmox console go to Datacenter\Storage, click on the storage you are using, then Edit. There is a tick box KRBD. With that set, any virtual disks created in that storage will use KRBD rather than librbd, so it applies to all VMs that use that storage. Chris O
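For reference, the same toggle ends up in /etc/pve/storage.cfg; a sketch assuming a Proxmox-managed RBD storage named "cephvm" (match the entry to your actual storage):

  rbd: cephvm
          pool cephvm
          content images,rootdir
          krbd 1

The setting only changes how disks on that storage are attached, so a running VM typically needs a full stop/start (not just a guest reboot) to switch from librbd to KRBD.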

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-03-20 Thread Eneko Lacunza
Hi Giovanna, I just tested one of my VMs: # fio --name=registry-read --ioengine=libaio --rw=randread --bs=4k --numjobs=4 --size=1G --runtime=60 --group_reporting registry-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 registry-read: (g

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-03-20 Thread Giovanna Ratini
Hello Eneko, this is my configuration. The performance is similar across all VMs. I am now checking GitLab, as that is where people are complaining the most. agent: 1 balloon: 65000 bios: ovmf boot: order=scsi0;net0 cores: 10 cpu: host efidisk0: cephvm:vm-6506-disk-0,efitype=4m,size=528K memor

[ceph-users] One host down osd status error

2025-03-20 Thread Marcus
Hi all, We are running a ceph cluster with a filesystem that spans 5 servers. Ceph version: 19.2.0 squid. If I run 'ceph osd status' when all hosts are online and in, the output is as it should be and it prints the status for all osds. If just a couple of osds are down, status is printed and specific
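For comparison while a host is down, a few related views (these are standard ceph commands; what they show on this cluster is of course only a guess):

  ceph osd status      # the command reported here to error out when a whole host is down
  ceph osd tree down   # still lists the down OSDs and the host they sit on
  ceph health detail   # shows which OSDs/hosts the cluster considers down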

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-03-20 Thread Eneko Lacunza
Hi Giovanna, Can you post the VM's full config? Also, can you test with IO thread enabled and SCSI virtio single, and with multiple disks? Cheers On 19/3/25 at 17:27, Giovanna Ratini wrote: Hello Eneko, Yes I did. No significant changes. :-( Cheers, Gio On Wednesday, March 19, 2025 13:09
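For reference, the two settings being asked about would look roughly like this in the VM config (disk name, storage name and size are placeholders based on the config posted above):

  scsihw: virtio-scsi-single
  scsi0: cephvm:vm-6506-disk-1,iothread=1,size=32G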

[ceph-users] Re: Kafka notification, bad certificate

2025-03-20 Thread Frédéric Nass
Hi Malte, Yeah, I just wanted to make you aware of this separate Kafka bug in Quincy and Reef v18.2.4. Regarding your issue, if your Kafka installation requires mTLS and mTLS hasn't been coded in RGW yet (as you pointed out [1]), I think you figured it out. Also, the logs showing a first successful
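For context, a topic using one-way TLS (server verification only, no client certificate) is typically created through the SNS-compatible API roughly like this; the endpoint, broker and CA path are placeholders, and the attribute names follow the RGW bucket-notification docs:

  aws --endpoint-url http://rgw.example.com:8000 sns create-topic --name kafka-topic \
      --attributes '{"push-endpoint": "kafka://broker.example.com:9093", "use-ssl": "true", "ca-location": "/etc/pki/tls/certs/ca.pem", "kafka-ack-level": "broker"}'

mTLS would additionally require presenting a client certificate/key to the broker, which is the part described as not yet implemented in RGW.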

[ceph-users] Re: Subject: Assistance Required: Vault Integration with RADOS Gateway for SSE-S3 Encryption

2025-03-20 Thread Dhivya G
Hello there, Could you please provide support on the query mentioned below and suggest how to handle the error? Your suggestions would be beneficial. Looking forward to your response! Dhivya G Associate Software Engineer Email: dhivyamailto:pavithraa...@zyb