[ceph-users] Re: [RGW][cephadm] How to configure RGW as code and independently of daemon names?

2024-09-11 Thread Robert Sander
Hi, On 9/11/24 22:00, Gilles Mocellin wrote: Is there some documentation I didn't find, or is this the kind of detail only a developer can find? It should be in these sections: https://docs.ceph.com/en/reef/rados/configuration/ceph-conf/#configuration-sections https://docs.ceph.com/en/ree
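For context, the configuration-sections mechanism those documents describe is what makes daemon-name-independent configuration possible: an option stored under the generic client.rgw section is inherited by every RGW daemon, whatever name cephadm generated for it. A minimal sketch (rgw_enable_usage_log is a real option, used here purely as an illustration):

    # Applies to all RGW daemons, regardless of their generated names:
    $ ceph config set client.rgw rgw_enable_usage_log true
    # Read the value back from the same section:
    $ ceph config get client.rgw rgw_enable_usage_log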

[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-11 Thread Szabo, Istvan (Agoda)
Maybe we are running into this bug, Igor? https://github.com/ceph/ceph/pull/48854 From: Szabo, Istvan (Agoda) Sent: Thursday, September 12, 2024 6:50 AM To: Ceph Users Subject: [ceph-users] Re: bluefs _allocate unable to allocate on bdev 2 This is the end of a man

[ceph-users] Re: RGW sync gets stuck every day

2024-09-11 Thread Matthew Darwin
I'm on Quincy. I had lots of problems with RGW getting stuck. Once I dedicated a single RGW on each side to do replication, my problems went away. Having a cluster of RGWs behind a load balancer seemed to be confusing things. I still have multiple RGWs for user-facing load, but a single RGW
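One common way to implement this split is to disable the sync thread on the client-facing gateways so that only the dedicated gateway replicates. A hedged sketch: rgw_run_sync_thread is a real option, but the section names below (rgw.clientfacing, rgw.sync) are placeholders for whatever your services are actually called:

    # Client-facing RGW service: don't participate in multisite sync.
    $ ceph config set client.rgw.clientfacing rgw_run_sync_thread false
    # The dedicated sync service keeps the default (true):
    $ ceph config get client.rgw.sync rgw_run_sync_thread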

[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-11 Thread Szabo, Istvan (Agoda)
This is the end of a manual compaction, and it actually can't start even after the compaction: Meta: https://gist.github.com/Badb0yBadb0y/f918b1e4f2d5966cefaf96d879c52a6e Log: https://gist.github.com/Badb0yBadb0y/054a0cefd4a56f0236b26479cc1a5290 From: Szabo, Istvan (Agoda)

[ceph-users] bluefs _allocate unable to allocate on bdev 2

2024-09-11 Thread Szabo, Istvan (Agoda)
Hi, Since yesterday, on Ceph Octopus, multiple OSDs in the cluster have started to crash, and I can see this error in most of the logs: 2024-09-12T06:13:35.805+0700 7f98b8b27700 1 bluefs _allocate failed to allocate 0xf0732 on bdev 1, free 0x4; fallback to bdev 2 2024-09-12T06:13:35.805+0700 7f
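The message means BlueFS could not allocate space on bdev 1 (the DB device, nearly full at free 0x4) and fell back to bdev 2 (the main device). Two commonly used inspection steps, as a hedged sketch (osd.12 and the data path are placeholders):

    # Trigger an online RocksDB compaction on one OSD:
    $ ceph tell osd.12 compact
    # With the OSD stopped, report BlueFS device sizes offline:
    $ ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-12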

[ceph-users] Re: [RGW][CEPHADM] Multisite configuration and Ingress

2024-09-11 Thread Gilles Mocellin
On Wednesday, September 11, 2024 at 22:51:42 CEST, Daniel Parkes wrote: > Hi Gilles, Hi Daniel, and thank you for your responses. > On Wed, Sep 11, 2024 at 10:33 PM Gilles Mocellin < > > gilles.mocel...@nuagelibre.org> wrote: > > Yes, I've read some stories about that in this mailing list, but,

[ceph-users] Re: [RGW][CEPHADM] Multisite configuration and Ingress

2024-09-11 Thread Daniel Parkes
Hi Gilles, On Wed, Sep 11, 2024 at 10:33 PM Gilles Mocellin < gilles.mocel...@nuagelibre.org> wrote: > Yes, I've read some stories about that in this mailing list, but, my > question was not clear enough, I want to know what's possible to do with cephadm: > - Is it possible to create several I
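Regarding several Ingress services per cluster: cephadm lets you apply one ingress spec per RGW service, each with its own service_id and virtual IP. A hedged sketch (service names, VIP, and ports are placeholders):

    $ cat > ingress-rgw.yaml << 'EOF'
    service_type: ingress
    service_id: rgw.myzone
    placement:
      count: 2
    spec:
      backend_service: rgw.myzone
      virtual_ip: 192.0.2.10/24
      frontend_port: 8080
      monitor_port: 1967
    EOF
    $ ceph orch apply -i ingress-rgw.yaml

A second Ingress in front of another RGW service would simply be a second spec with a different service_id and virtual_ip.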

[ceph-users] Re: [RGW][CEPHADM] Multisite configuration and Ingress

2024-09-11 Thread Gilles Mocellin
Yes, I've read some stories about that in this mailing list, but, my question was not clear enough, I want to know what's possible to do with cephadm: - Is it possible to create several Ingress with cephadm? (I can't reach my test env to test until tomorrow!) - If we do, how can we configure d

[ceph-users] [RGW][CEPHADM] Multisite configuration and Ingress

2024-09-11 Thread Gilles Mocellin
Hi again, I wonder if there is a better way of configuring multisite, especially zone/zonegroup endpoints: - many endpoints, pointing directly at RGW daemons - one endpoint, pointing at the load balancer VIP of Ingress If we want to separate client access from replication, is it
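Whichever option is chosen, the endpoints end up in the zone/zonegroup metadata. A hedged sketch of pointing them at a single load-balancer VIP (realm, zonegroup, and zone names, and the URL, are placeholders):

    $ radosgw-admin zonegroup modify --rgw-zonegroup=main --endpoints=http://192.0.2.10:8080
    $ radosgw-admin zone modify --rgw-zone=primary --endpoints=http://192.0.2.10:8080
    $ radosgw-admin period update --commit

For the many-endpoints variant, --endpoints instead takes a comma-separated list of the individual RGW URLs.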

[ceph-users] Re: [RGW][cephadm] How to configure RGW as code and independently of daemon names?

2024-09-11 Thread Gilles Mocellin
Oh! Thank you! I'll try tomorrow. Is there some documentation I didn't find, or is this the kind of detail only a developer can find? PS: I still wish we could configure services via spec files, for a DevOps / good Infra as Code approach. Very respectfully, -- Gilles On Wednesday, September 11

[ceph-users] Re: [RGW][cephadm] How to configure RGW as code and independantely of daemon names ?

2024-09-11 Thread Wesley Dillingham
"ceph config set client.rgw some_config_key some_config_value" should apply for all rgws using default naming scheme. Respectfully, *Wes Dillingham* LinkedIn w...@wesdillingham.com On Wed, Sep 11, 2024 at 3:45 PM Gilles Mocellin < gilles.mocel...

[ceph-users] [RGW][cephadm] How to configure RGW as code and independently of daemon names?

2024-09-11 Thread Gilles Mocellin
Hello Cephers, In our journey to migrate to cephadm, I face a problem with RGW: We use Ceph with OpenStack, so we have to configure Keystone authentication, with many rgw_keystone_* variables in /etc/ceph/ceph.conf. It's easy with ceph-ansible, and the section in ceph.conf always has the sa
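With cephadm, those same rgw_keystone_* options can live in the cluster's central config database instead of each host's ceph.conf, under the generic client.rgw section so they apply to every RGW daemon. A hedged sketch (the option names are real rgw_keystone_* settings; URL and values are placeholders):

    $ ceph config set client.rgw rgw_keystone_url https://keystone.example.com:5000
    $ ceph config set client.rgw rgw_keystone_api_version 3
    $ ceph config set client.rgw rgw_s3_auth_use_keystone true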

[ceph-users] Re: bilog trim fails with "No such file or directory"

2024-09-11 Thread Florian Schwab
Hi, sorry, forgot to add this. Ceph release: 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable) The cluster consists of 360 OSDs, the index pool has 128 PGs, and auto-resharding is enabled. Cheers, Florian > On 11. Sep 2024, at 18:18, Anthony D'Atri wrote: > > Which Ceph r

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-11 Thread Laura Flores
Thanks @Venky Shankar for looking into https://tracker.ceph.com/issues/68002. As for https://tracker.ceph.com/issues/67999, you are right - it is not CephFS related. Mistake on my part. After looking into this one, it seems to be a test issue, where we check for a "Filestore has been deprecated" w

[ceph-users] bilog trim fails with "No such file or directory"

2024-09-11 Thread Florian Schwab
Hi everyone, I hope maybe someone here has an idea what is happening here or can give some pointers on how to debug it further. We currently have a bucket which has large omap objects. Following this guide (https://access.redhat.com/solutions/6450561) we are able to identify the bucket etc. $ ce
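For readers without access to that guide, the sequence is roughly the following hedged sketch (the bucket name is a placeholder); this thread is about the trim step returning "No such file or directory":

    # Confirm which bucket is behind the large-omap warning:
    $ radosgw-admin bucket stats --bucket=mybucket
    # Trim the bucket index log that feeds multisite replication:
    $ radosgw-admin bilog trim --bucket=mybucket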

[ceph-users] Re: RGW sync gets stuck every day

2024-09-11 Thread Olaf Seibert
So we still have this RGW synchronization that gets stuck every day at about the same time. We have alerting on it, so our on-call people are getting annoyed. Summarizing: we see on the receiving end of an RGW sync something like this: # radosgw-admin sync status --rgw-realm backup
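When sync status reports shards as stuck or behind, a first diagnostic step is to look for recorded sync errors; a hedged sketch reusing the realm name from the command above:

    # List errors recorded by the sync machinery for this realm:
    $ radosgw-admin sync error list --rgw-realm backup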

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-11 Thread Venky Shankar
Hey Laura, On Wed, Sep 11, 2024 at 12:13 AM Laura Flores wrote: > I have finished reviewing the upgrade and smoke suites. Most failures are > already known/tracked: > https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779 > > *Upgrade: Pending check from