[ceph-users] Re: Help needed: s3cmd set ACL command produces S3 error: 400 (InvalidArgument) in Squid Ceph version.

2025-01-20 Thread Saif Mohammad
Thanks, Stephan.

[ceph-users] Help needed: s3cmd set ACL command produces S3 error: 400 (InvalidArgument) in Squid Ceph version.

2025-01-20 Thread Saif Mohammad
Hello Community, We are trying to set an ACL on one of the objects in a bucket to make it public, using the s3cmd tool with the command below, but we are unable to set it on the Squid Ceph version. The same command worked on Reef, where we successfully set the object public. Please let
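The preview cuts off before the actual command, but the standard s3cmd invocation for this task looks like the following sketch (bucket and object names are placeholders, not from the post):

```shell
# Make a single object publicly readable (placeholder bucket/object names)
s3cmd setacl s3://mybucket/myobject --acl-public

# Inspect the resulting ACL to confirm the change
s3cmd info s3://mybucket/myobject
```

If Squid's RGW returns 400 (InvalidArgument) for this, comparing the request RGW logs between Reef and Squid (debug_rgw raised) is usually the quickest way to see which ACL grant it is rejecting.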

[ceph-users] Unable to mount NFS share NFSv3 on windows client.

2024-10-20 Thread Saif Mohammad
Hello Everyone, We are encountering issues while trying to mount NFSv3-based exports on a Windows 11 client. We made the necessary changes to the configuration file to enable NFSv3 exports, but are unable to mount. Below are the details: We created a custom config file named export.conf and appli
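The post's export.conf is truncated, but an NFS-Ganesha export block allowing NFSv3 for a CephFS path generally takes this shape (export id, paths, and cluster name below are assumptions for illustration, not taken from the post):

```shell
# export.conf -- NFS-Ganesha export fragment (assumed values)
# EXPORT {
#     Export_Id = 100;
#     Path = "/volumes/share1";
#     Pseudo = "/share1";
#     Protocols = 3;              # allow NFSv3 (v4 is the cephadm default)
#     Access_Type = RW;
#     Squash = No_Root_Squash;
#     Transports = TCP, UDP;
#     FSAL { Name = CEPH; }
# }

# Apply the file to a cephadm-managed NFS cluster (cluster name assumed)
ceph nfs export apply mynfs -i export.conf
```

On the Windows side, NFSv3 mounts typically need the Client for NFS feature enabled and a mount such as `mount -o nolock \\<server-ip>\share1 Z:`; NFSv3 also requires reachable portmapper/mountd ports, which is a common failure point.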

[ceph-users] Re: Node Exporter keep failing while upgrading cluster in Air-gapped ( isolated environment ).

2024-07-16 Thread Saif Mohammad
Hello Adam, Thanks for the prompt response. We have the image below in our private registry for node-exporter: 192.168.1.10:5000/prometheus/node-exporter v1.5.0 0da6a335fe13 19 months ago 22.5MB But upon ceph upgrade, we are getting the mentioned image ( quay.io/prometheus/node-expo
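For an air-gapped cluster, cephadm keeps pulling the default quay.io monitoring images unless the per-daemon image config keys are overridden. A sketch using the registry and tag from the post:

```shell
# Point cephadm at the private-registry copy of node-exporter
# (registry host and tag taken from the post above)
ceph config set mgr mgr/cephadm/container_image_node_exporter \
    192.168.1.10:5000/prometheus/node-exporter:v1.5.0

# Redeploy so running daemons are recreated from the new image
ceph orch redeploy node-exporter
```

The same pattern applies to the other monitoring images (container_image_prometheus, container_image_grafana, container_image_alertmanager) if those also fail to pull.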

[ceph-users] Node Exporter keep failing while upgrading cluster in Air-gapped ( isolated environment ).

2024-07-15 Thread Saif Mohammad
Hello, We are facing an issue with node-exporter entering an error state while upgrading our cluster in an air-gapped environment. Specifically, we are upgrading from Quincy v17.2.0 to Reef v18.2.2. To facilitate this upgrade, we have set up a custom repository on a separate machine within the s
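In an air-gapped setup the upgrade itself is normally pointed at the local registry explicitly; a sketch, assuming the same private registry host mentioned in the reply above and an assumed image path:

```shell
# Drive the upgrade from the local registry instead of quay.io
# (image path is an assumption for illustration)
ceph orch upgrade start --image 192.168.1.10:5000/ceph/ceph:v18.2.2

# Watch progress; failed daemons will show up in `ceph orch ps` as well
ceph orch upgrade status
```

Note that `ceph orch upgrade start` only covers the Ceph image; the monitoring-stack images (node-exporter included) are resolved from their own config keys and need to be overridden separately.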

[ceph-users] Safe method to perform failback for RBD on one way mirroring.

2024-05-27 Thread Saif Mohammad
Hello Everyone, We have clusters in production with the following configuration: Cluster-A : Quincy v17.2.5 Cluster-B : Quincy v17.2.5 All images in a pool have the snapshot feature enabled and are mirrored. Each site has 3 daemons. We're testing disaster recovery with one-way mirroring in our b
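For snapshot-based one-way mirroring, the usual failback sequence once the original primary is healthy again is demote, resync, then promote; a sketch with placeholder pool/image names (order matters, and resync discards any divergent local data on the old primary):

```shell
# On Cluster-A (old primary, now recovered): demote the stale copy
rbd mirror image demote mypool/myimage

# Still on Cluster-A: resync from Cluster-B's current primary copy
rbd mirror image resync mypool/myimage

# After resync completes: demote on Cluster-B, then promote on Cluster-A
rbd mirror image demote mypool/myimage    # run on Cluster-B
rbd mirror image promote mypool/myimage   # run on Cluster-A
```

Checking `rbd mirror image status mypool/myimage` between steps confirms the image has reached the `up+replaying` / `up+stopped` states before handing the primary role back.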

[ceph-users] Re: RGW multisite slowness issue due to the "304 Not Modified" responses on primary zone

2024-05-01 Thread Saif Mohammad
Hi Alexander, We have configured the parameters in our infrastructure to fix the issue, but despite tuning them, even to higher levels, the issue still persists. We have shared the latency between the DC and DR site for your reference. Please advise on alternative solutions to res
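When multisite sync stalls behind streams of "304 Not Modified" metadata checks, the first step is usually to quantify where the lag sits; a sketch of the standard inspection commands (bucket name is a placeholder):

```shell
# Overall metadata and data sync lag, run from the primary zone
radosgw-admin sync status

# Per-bucket detail for a bucket suspected of dominating the sync traffic
radosgw-admin bucket sync status --bucket=mybucket
```

If most shards show "caught up" but latency remains high, the bottleneck is more likely the inter-site round-trip time itself than any RGW tunable, which would match the DC/DR latency figures shared in this thread.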