Hi All,
I just wanted to quickly follow up on my previous mail, "Slow RGW multisite
sync due to '304 Not Modified' responses on primary zone". I wanted to
highlight that I'm still facing the issue and urgently need your guidance to
resolve it.
I appreciate your attention to this matter.
Thanks,
Saif
Hi,
We have two clusters (v18.2.1), used primarily for RGW, holding over 2 billion
RGW objects. They are in a multisite configuration with two zones, and we have
around 2 Gbps of dedicated (P2P) bandwidth for the multisite traffic.
We see that using "radosgw-admin sync status" on the zon
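For anyone trying to reproduce this, a minimal sketch of the commands typically
used to inspect multisite sync progress; "<peer-zone>" is a placeholder, not a
name from the original mail:

```shell
# High-level multisite sync state as seen from the local zone.
radosgw-admin sync status

# Per-shard data sync detail against a specific peer zone.
# Substitute the actual source zone name for <peer-zone>.
radosgw-admin data sync status --source-zone=<peer-zone>
```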
Hi Eugen,
We are planning to build a cluster with an erasure-coded (EC) pool to save some
disk space. To that end, we have experimented with compression settings on the
RBD pool, setting the following parameters:
Compression mode: aggressive
Compression type: lz4
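For reference, a sketch of how such per-pool BlueStore compression settings are
applied and verified; the pool name "rbd-ec" is assumed for illustration:

```shell
# Set per-pool BlueStore compression (standard pool properties).
ceph osd pool set rbd-ec compression_mode aggressive
ceph osd pool set rbd-ec compression_algorithm lz4

# Verify the settings took effect.
ceph osd pool get rbd-ec compression_mode
ceph osd pool get rbd-ec compression_algorithm
```

Note that chunks are stored compressed only if they shrink enough to satisfy
the pool's compression_required_ratio, so not every write ends up compressed.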
Hi,
We are testing BlueStore compression in our cluster. For this we have created
an RBD image on our EC pool.
When we execute "ceph daemon osd.X perf dump | grep -E
'(compress_.*_count|bluestore_compressed_)'", we cannot locate the parameters
below, even though we also tried with ceph tell comm
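One thing worth checking: the compression perf counters only show meaningful
values after the OSD has actually attempted compression, so write some data to
the pool first. A sketch of both query forms, assuming OSD id 0 for
illustration:

```shell
# Works from any node that has an admin keyring.
ceph tell osd.0 perf dump | \
  grep -E '(compress_.*_count|bluestore_compressed)'

# Equivalent local form; must run on the host that holds the
# OSD's admin socket.
ceph daemon osd.0 perf dump | \
  grep -E '(compress_.*_count|bluestore_compressed)'
```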
Hi All,
I just wanted to quickly follow up on my previous mail, "Unable to execute
radosgw command using cephx users on client side". I wanted to highlight that
I'm still facing the issue and urgently need your guidance to resolve it.
I appreciate your attention to this matter.
Thanks,
Saif
Hello,
In our Ceph cluster we encountered issues while attempting to execute the
"radosgw-admin" command on the client side with a cephx user that has read-only
permissions. Whenever we execute the "radosgw-admin user list" command, it
throws an error:
"ceph version 18.2.1 (7fe91d5d5842e04be3b4f514
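A likely cause, offered as a guess: radosgw-admin talks to the cluster's RADOS
pools directly and generally needs more than read-only caps, because it reads
and writes RGW metadata and log pools. A sketch of creating a suitable cephx
user; the client name "client.rgw-admin" and the exact caps are assumptions,
not values from the original mail:

```shell
# Create a cephx user with caps broad enough for radosgw-admin.
# Tighten the osd caps to the RGW pools if your policy requires it.
ceph auth get-or-create client.rgw-admin \
  mon 'allow r' \
  osd 'allow rwx' \
  mgr 'allow r'

# Run radosgw-admin as that user on the client.
radosgw-admin --name client.rgw-admin user list
```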
Hello,
We've been using Ceph for managing our storage infrastructure, and we recently
upgraded to the latest version (Ceph v18.2.1 "reef"). However, we've noticed
that the "refresh interval" option seems to be missing from the dashboard, and
we are facing challenges with monitoring our cluster in