Hello Vladimir,
Please contact croit via https://www.croit.io/contact for unofficial
(not yet fully reviewed) patches and mention me. They are currently
working on a similar problem and have solved at least the following:
* Crashes when two directories are being mirrored in parallel
* Data corrup
Dear All,
An update on the Ceph Science User Group meeting next Tuesday:
Mattia from the Federal Institute of Technology Zurich (ETH Zurich) has
kindly agreed to present on CephFS performance.
Also, the correct meeting time is 9am Central US, 2pm UTC, 3pm Central
European -- My apologies for
Based on my experience, that error comes from one of three possible causes:
1. The machine in question doesn't have proper security keys
2. The machine in question is short on resources - especially RAM
3. The machine in question has had its brains scrambled: cosmic rays
flipping critical RAM bits, bugs
Hi everyone,
We have two remote clusters and are trying to set up snapshot mirroring.
Our directories are in a syncing state but never seem to actually sync the
snapshots.
Snapshot mirroring seems way too slow for large file systems - does anyone
have a working setup?
Thanks!
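(For anyone comparing notes: the upstream cephfs-mirror procedure is roughly
the following. This is only a sketch from the documentation; the filesystem
name, directory path and site name are placeholders, and the peer_bootstrap
direction - token created on the secondary, imported on the primary - is worth
double-checking against the docs for your release.

# ceph mgr module enable mirroring          (run on both clusters)
# ceph fs snapshot mirror enable cephfs     (on the primary)
# ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-remote   (on the secondary)
# ceph fs snapshot mirror peer_bootstrap import cephfs <bootstrap-token>      (on the primary)
# ceph fs snapshot mirror add cephfs /some/mirrored/dir
# ceph fs snapshot mirror daemon status     (exact form varies by release)

The daemon status output is usually the first place that shows whether the
peer connection is actually up.)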
Hi Chris,
I tried KRBD, even with a newly created disk and after shutting down and
starting the VM again, but no measurable difference.
Our Ceph is 18.2.4; that may be a factor to consider, but 9k -> 273k?!
Maybe Giovanna can test KRBD option and report back... :)
Cheers
On 20/3/25 at 16:19,
Hi Eneko,
No containers. In the Proxmox console go to Datacenter -> Storage, click on
the storage you are using, then Edit. There is a tick box labelled KRBD. With
that set, any virtual disks created in that storage will use KRBD rather
than librbd, so it applies to all VMs that use that storage.
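For reference, the same tick box ends up as the krbd flag on the RBD storage
entry in /etc/pve/storage.cfg; a rough sketch (the pool name and monitor list
below are placeholders, only the storage name is taken from this thread):

rbd: cephvm
        content images
        krbd 1
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool cephvm
        username admin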
Chris
Hi Giovanna,
I just tested one of my VMs:
# fio --name=registry-read --ioengine=libaio --rw=randread --bs=4k \
      --numjobs=4 --size=1G --runtime=60 --group_reporting
registry-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
registry-read: (g
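As an aside, the run above reports iodepth=1 and no direct I/O (the fio
defaults), so it largely measures single-request latency. A variant along
these lines is only a sketch (queue depth and runtime are suggestions), but it
usually gives a better picture of what the backend can sustain:

# fio --name=registry-read --ioengine=libaio --rw=randread --bs=4k \
      --numjobs=4 --iodepth=32 --direct=1 --size=1G \
      --runtime=60 --time_based --group_reporting

With --direct=1 the guest page cache is bypassed, so the numbers reflect the
virtual disk rather than cached reads.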
Hello Eneko,
this is my configuration. The performance is similar across all VMs. I
am now checking GitLab, as that is where people are complaining the most.
agent: 1
balloon: 65000
bios: ovmf
boot: order=scsi0;net0
cores: 10
cpu: host
efidisk0: cephvm:vm-6506-disk-0,efitype=4m,size=528K
memor
Hi all,
We are running a Ceph cluster with a filesystem; the cluster contains 5 servers.
Ceph version: 19.2.0 Squid
If I run: ceph osd status when all hosts are online, the output is the way it
should be and it prints the status of all OSDs. If just a couple of OSDs are
down, status is printed and specific
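(When chasing this kind of thing, ceph osd tree and ceph osd dump are useful
cross-checks; if I remember right, ceph osd status is served by the mgr, while
osd tree and osd dump come straight from the monitors' OSDMap. For example:

# ceph osd status
# ceph osd tree
# ceph osd dump | grep down

This is only a generic checklist, not specific to the behaviour described
above.)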
Hi Giovanna,
Can you post the VM's full config?
Also, can you test with IO thread enabled and VirtIO SCSI single, and with
multiple disks?
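For reference, that combination corresponds roughly to these lines in the VM
config (a sketch; the disk number, storage name and size are placeholders):

scsihw: virtio-scsi-single
scsi0: cephvm:vm-6506-disk-1,iothread=1,size=32G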
Cheers
On 19/3/25 at 17:27, Giovanna Ratini wrote:
Hello Eneko,
Yes I did. No significant changes. :-(
Cheers,
Gio
On Wednesday, March 19, 2025 13:09
Hi Malte,
Yeah, I just wanted to make you aware of this separate Kafka bug in Quincy and
Reef v18.2.4.
Regarding your issue, if your Kafka installation requires mTLS and mTLS hasn't
been coded in RGW yet (as you pointed out [1]), I think you figured it out.
Also, the logs showing a first successful
Hello there,
Could you please provide support on the query mentioned below and suggest how
to handle and fix the error?
Your suggestions would be much appreciated. Looking forward to your response!
Dhivya G
Associate Software Engineer
Email: pavithraa...@zyb