[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread Vladimir Sigunov
Hello, I used to use rclone for data synchronization between 2 Ceph clusters and for a one-directional sync from AWS to Ceph. In general, rclone is a really good and reliable piece of software, but it can be slow with a large number of objects to sync. Large - 10^6+ objects. As a disclaimer - my experi
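
For reference, a minimal rclone setup for such a migration might look like the sketch below. This is not part of the original message; the remote names, bucket, endpoint, and credentials are illustrative placeholders:

    # Define the two remotes non-interactively (credentials are placeholders)
    rclone config create aws s3 provider=AWS access_key_id=AKIA... secret_access_key=SECRET... region=us-east-1
    rclone config create ceph s3 provider=Ceph access_key_id=KEY... secret_access_key=SECRET... endpoint=https://rgw.example.com

    # One-way copy from AWS to the Ceph RGW; raising --transfers/--checkers
    # helps with the 10^6+ object counts mentioned above
    rclone copy aws:mybucket ceph:mybucket --transfers 32 --checkers 64 --progress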

[ceph-users] Re: MDS Behind on Trimming...

2024-04-11 Thread Nigel Williams
On Wed, 10 Apr 2024 at 14:01, Xiubo Li wrote: > > I assume if this fix is approved and backported it will then appear in > > like 18.2.3 or something? > > > Yeah, it will be backported after being well tested. > We believe we are being bitten by this bug too, looking forward to the fix. thanks.

[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread James McClune
Thanks cbodley for the clarification. I'll definitely look more into rclone, Gilles. I was actually putting together a POC with that too, in case my understanding of the cloud sync module was wrong. Thanks for the heads-up on the other stuff too :) Best Regards, Jimmy On April 11, 2024 5:29:30

[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread Gilles Mocellin
On Thursday, 11 April 2024 at 23:44:05 CEST, Gilles Mocellin wrote: > On Thursday, 11 April 2024 at 23:29:30 CEST, Casey Bodley wrote: > > > Unfortunately, this cloud sync module only exports data from ceph to a > > remote s3 endpoint, not the other way around: > > > > "This module syncs zone data to a r

[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread Gilles Mocellin
On Thursday, 11 April 2024 at 23:29:30 CEST, Casey Bodley wrote: > Unfortunately, this cloud sync module only exports data from ceph to a > remote s3 endpoint, not the other way around: > > "This module syncs zone data to a remote cloud service. The sync is > unidirectional; data is not synced back fr

[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread Casey Bodley
Unfortunately, this cloud sync module only exports data from ceph to a remote s3 endpoint, not the other way around: "This module syncs zone data to a remote cloud service. The sync is unidirectional; data is not synced back from the remote zone." I believe that rclone supports copying from one s

[ceph-users] Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread James McClune
Hello Ceph User Community, I currently have a large Amazon S3 environment with terabytes of data spread over dozens of buckets. I'm looking to migrate from Amazon S3 to an on-site Ceph cluster using the RGW. I'm trying to figure out the most efficient way to achieve this. Looking through the docum

[ceph-users] Re: MDS Behind on Trimming...

2024-04-11 Thread Erich Weiler
Or... Maybe the fix will first appear in the "centos-ceph-reef-test" repo that I see? Is that how RedHat usually does it? On 4/11/24 10:30, Erich Weiler wrote: I guess we are specifically using the "centos-ceph-reef" repository, and it looks like the latest version in that repo is 18.2.2-1.el

[ceph-users] Re: Client kernel crashes on cephfs access

2024-04-11 Thread Ilya Dryomov
On Mon, Apr 8, 2024 at 10:22 AM Marc wrote: > I have a guaranteed crash + reboot with el7 - nautilus accessing a snapshot. > > rbd snap ls vps-xxx -p rbd > rbd map vps-xxx@vps-xxx.bak1 -p rbd > > some lvm stuff like this (pvscan --cache; pvs; lvchange -a y VGxxx/LVyyy) > > mount -o ro /dev/mapper/
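
Laid out as a script, the reproduction sequence quoted above is roughly as follows (pool, image, snapshot, and VG/LV names are the placeholders from the original message; the mount target was truncated):

    rbd snap ls vps-xxx -p rbd            # list snapshots of the image
    rbd map vps-xxx@vps-xxx.bak1 -p rbd   # map the snapshot (read-only) via krbd
    pvscan --cache; pvs                   # rescan LVM metadata on the mapped device
    lvchange -a y VGxxx/LVyyy             # activate the logical volume
    mount -o ro /dev/mapper/...           # the read-only mount is where the crash hits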

[ceph-users] Re: MDS Behind on Trimming...

2024-04-11 Thread Erich Weiler
I guess we are specifically using the "centos-ceph-reef" repository, and it looks like the latest version in that repo is 18.2.2-1.el9s. Will this fix appear in 18.2.2-2.el9s or something like that? I don't know how often the release cycle updates the repos...? On 4/11/24 09:40, Erich Weiler

[ceph-users] Re: MDS Behind on Trimming...

2024-04-11 Thread Erich Weiler
I have raised a PR to fix the lock order issue; if possible, please give it a try to see whether it resolves this issue. That's great! When do you think that will be available? Thank you! Yeah, this issue is happening every couple of days now. It just happened again today and I got more MDS dumps.

[ceph-users] Strange placement groups warnings

2024-04-11 Thread Dmitriy Maximov
Dear Ceph experts, we recently upgraded our Ceph cluster from Octopus (15.2.17) to Pacific (16.2.14 and then to 16.2.15). Right after the upgrade, warnings appeared that all of our pools (except the device_health_metrics pool) have too many placement groups. This warning looks like it is generated by the autosc
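
A hedged first step for digging into such warnings (standard commands, not from the original message):

    # Show current vs. autoscaler-suggested pg_num for every pool
    ceph osd pool autoscale-status

    # Check how the autoscaler is configured for a given pool
    ceph osd pool get <pool> pg_autoscale_mode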

[ceph-users] Re: Issue about "ERROR: S3 Temporary Error: Request failed for: /. Please try again later"

2024-04-11 Thread king .
-- DEBUG: signature-v4 headers: {'x-amz-content-sha256': u'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': u'AWS4-HMAC-SHA256 Credential=E9BJAC6QKLTOKVJR4TZC/202404

[ceph-users] Re: Issue about "ERROR: S3 Temporary Error: Request failed for: /. Please try again later"

2024-04-11 Thread elite_stu
Thanks for your reply! But xx stands in for the actual IP of my nodes; I mapped it to the internet, so I used xx to replace the public IP.

[ceph-users] Re: Issue about "ERROR: S3 Temporary Error: Request failed for: /. Please try again later"

2024-04-11 Thread Janne Johansson
On Thu, 11 Apr 2024 at 15:55, wrote: > > I have mapped port 32505 to 23860; however, when connecting via s3cmd it fails > with "ERROR: S3 Temporary Error: Request failed for: /. Please try again > later." . > Has anyone encountered the same issue? > > [root@vm-04 ~]# s3cmd ls > WARNING: Retrying failed r
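
If the failure is an endpoint/port mismatch, one quick check is to point s3cmd at the mapped port explicitly. A sketch; the hostname is a placeholder (the thread only shows xx) and the flags are standard s3cmd options:

    # Point s3cmd at the externally mapped port explicitly
    s3cmd ls \
      --host=myhost.example.com:23860 \
      --host-bucket="%(bucket)s.myhost.example.com:23860" \
      --no-ssl --debug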

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-04-11 Thread Ralph Boehme
Hi John, On 4/11/24 15:55, John Mulligan wrote: I haven't done much perf testing myself, as I usually get by with a "few VMs on a laptop" approach. There may be some opportunities to run things in the Ceph sepia lab, but I would have to ask around first as I typically only use it to run teuthology

[ceph-users] Re: Issue about "ERROR: S3 Temporary Error: Request failed for: /. Please try again later"

2024-04-11 Thread elite_stu
1b7852b855', 'x-amz-date': '20240411T140729Z', 'Authorization': u'AWS4-HMAC-SHA256 Credential=E9BJAC6QKLTOKVJR4TZC/20240411/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=f830a69ed774e99312bab9137bcf0eeb75119d82

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-04-11 Thread John Mulligan
On Thursday, April 11, 2024 9:35:28 AM EDT Ralph Boehme wrote: > Hi John, > > I've finally come around to finishing the database client driver for Samba > to talk to Ceph via Python librados and to implementing some changes that > improve performance compared to the one from Samuel I used last year: >

[ceph-users] Issue about "ERROR: S3 Temporary Error: Request failed for: /. Please try again later"

2024-04-11 Thread elite_stu
I have mapped port 32505 to 23860; however, when connecting via s3cmd it fails with "ERROR: S3 Temporary Error: Request failed for: /. Please try again later.". Has anyone encountered the same issue? [root@vm-04 ~]# s3cmd ls WARNING: Retrying failed request: / ('') WARNING: Waiting 3 sec... WARNING: Retr

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-04-11 Thread Ralph Boehme
Hi John, I've finally come around to finishing the database client driver for Samba to talk to Ceph via Python librados and to implementing some changes that improve performance compared to the one from Samuel I used last year:

[ceph-users] Re: Cephadm host keeps trying to set osd_memory_target to less than minimum

2024-04-11 Thread Mads Aasted
Hi Adam. I just tried extending the host's memory to 48 GB, and it stopped throwing the error and set it to 3.something GB instead. Thank you so much for your time and explanations. On Tue, Apr 9, 2024 at 9:30 PM Adam King wrote: > The same experiment with the mds daemons pulling 4GB instead of the
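
For anyone hitting the same osd_memory_target warning, a hedged sketch of the knobs involved (standard ceph config commands; the values shown are examples, not from the thread):

    # Inspect the value the cephadm autotuner computed for a given daemon
    ceph config get osd.0 osd_memory_target

    # If the computed value is unsuitable, autotuning can be disabled and a
    # fixed target set instead (4 GiB here is just an example)
    ceph config set osd osd_memory_target_autotune false
    ceph config set osd osd_memory_target 4294967296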

[ceph-users] Have a problem with haproxy/keepalived/ganesha/docker

2024-04-11 Thread Ruslan Nurabayev
Hello! I've installed my 5-node Ceph cluster and then set up an NFS server with the command: ceph nfs cluster create nfshacluster 5 --ingress --virtual_ip 192.168.171.48/26 --ingress-mode haproxy-protocol. I don't fully understand how this is supposed to work, but when I stop the NFS daemon on even one of these nodes I'
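
For reference, a sketch of how the resulting setup can be inspected (not from the original message; the ingress.nfs.nfshacluster name follows cephadm's usual naming for the --ingress service):

    # Show the NFS cluster and the virtual IP / port its ingress exposes
    ceph nfs cluster info nfshacluster

    # With --ingress, cephadm deploys haproxy + keepalived daemons that
    # front the nfs.nfshacluster daemons; list them with the orchestrator
    ceph orch ls
    ceph orch ps --daemon_type haproxy
    ceph orch ps --daemon_type keepalived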

[ceph-users] Re: RGW/Lua script does not show logs

2024-04-11 Thread Thomas Bennett
Hi Lee, RGWDebugLog logs at the debug level. Do you have the correct logging levels set on your rados gateways? It should be 20. Cheers, Tom On Mon, 8 Apr 2024 at 23:31, wrote: > Hello, I wrote a Lua script in order to retrieve RGW logs such as bucket > name, bucket owner, etc. > However, when I appl
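
A minimal sketch of raising that level, assuming a cephadm-managed gateway (the journalctl unit name below is a placeholder pattern):

    # Raise RGW debug logging so RGWDebugLog() output becomes visible
    ceph config set client.rgw debug_rgw 20

    # Then follow the gateway log for the Lua output, e.g.:
    # journalctl -u ceph-<fsid>@rgw.<name>.service -f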

[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-11 Thread xu chenhui
Igor Fedotov wrote: > Hi chenhui, > > there is still a work in progress to support multiple labels to avoid > the issue (https://github.com/ceph/ceph/pull/55374). But this is of > little help for your current case. > > If your disk is fine (meaning it's able to read/write block at offset 0) >
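
A hedged way to check whether the label at offset 0 is readable (standard bluestore tooling; /dev/sdX is a placeholder for the OSD's device):

    # Try to read the bluestore label at the start of the device
    ceph-bluestore-tool show-label --dev /dev/sdX

    # A raw read of the first 4 KiB also shows whether offset 0 is readable at all
    dd if=/dev/sdX of=/dev/null bs=4096 count=1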