Anyone tried the posix backend for Ceph radosgw? Appreciate any pointers
here on configuring and testing.
Varada
On Fri, Feb 14, 2025 at 12:40 PM Varada Kari wrote:
> Hi,
>
> I am trying to test the POSIX backend for radosgw on my test machine. It
> is running Ubuntu 22.04 with the latest Squid release.
Hi,
This SUSE article [0] covers that; it helped us with a customer a few
years ago. The recommendation was to double
mds_bal_fragment_size_max (default 100k) to 200k, which worked nicely
for them. Also note the correlation mentioned there between
mds_bal_fragment_size_max and mds_cache_memory_limit.
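For reference, a quick way to apply and check that (a sketch, assuming you
use the centralized config store; 200000 is just the doubled value mentioned
above):

    # double the per-fragment dirent limit from the 100k default
    ceph config set mds mds_bal_fragment_size_max 200000
    # verify, and check the MDS cache limit it correlates with
    ceph config get mds mds_bal_fragment_size_max
    ceph config get mds mds_cache_memory_limit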
Best Regards,
Vignesh Varma G
Cloud Engineer
www.stackbill.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi Team,
I have set up two Ceph clusters, three nodes each, with a two-way RBD mirror.
In this setup, Ceph 1 is mirrored two-way to Ceph 2, and vice versa.
The RBD pools are integrated with CloudStack.
The clusters use NVMe drives, but I am experiencing very low IOPS
performance.
You mention RBD, but you give fio a filename. Are you writing to a file on a
filesystem on an RBD volume? Are you testing from a VM? From one of the
cluster nodes? Via a krbd mount?
Do you get better results with the volume unattached and using the librbd
ioengine?
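For example, a librbd-ioengine fio run might look like this (the pool, image
and client names here are placeholders, not taken from your setup):

    fio --name=rbd-4k-randwrite --ioengine=rbd \
        --clientname=admin --pool=cloudstack-rbd --rbdname=test-image \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --runtime=60 --time_based --group_reporting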
What does the rbd mirror status show?
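For example (pool/image names are placeholders):

    rbd mirror pool status cloudstack-rbd --verbose
    rbd mirror image status cloudstack-rbd/test-image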
This might be helpful -
https://github.com/mmgaggle/zgw/tree/posix
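For what it's worth, on Squid the driver is selected via ceph.conf roughly
like this (a sketch from memory; the option names and the path are
assumptions, so verify them against that repo and the RGW docs):

    [client.rgw.posix-test]
        # assumption: selects the experimental POSIX store instead of the RADOS store
        rgw backend store = posix
        # assumption: directory (e.g. a CephFS mount) that backs buckets/objects
        rgw posix base path = /mnt/rgw-posix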
On Sat, Feb 15, 2025 at 05:38 Varada Kari wrote:
> Anyone tried the posix backend for Ceph radosgw? Appreciate any pointers
> here on configuring and testing.
>
> Varada
>
> On Fri, Feb 14, 2025 at 12:40 PM Varada Kari
> wrote