I think that information is stored in the Ceph configuration database, which is edited via the "ceph config set" command, should be readable via "ceph config get", and is probably also visible in the configuration browser in the Ceph dashboard.

As I mentioned earlier, /etc/ceph doesn't carry much data anymore; the config database is the preferred location. The only things in my own /etc/ceph/config are the ceph.conf file identifying the cluster and a couple of client key files.

The config database is shared by all Ceph nodes, so it avoids the need to replicate shared information manually.
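For example, the LDAP settings from the quoted mail can be checked directly against the database (a sketch; the section name client.rgw is taken from the commands below, and "config show" requires the name of a running daemon):

```shell
# Read a single option back from the configuration database
ceph config get client.rgw rgw_ldap_binddn

# Dump every option stored in the database, with the section it applies to
ceph config dump

# Show the effective value a running daemon is actually using
# (<who> must be a running daemon, e.g. client.rgw.<host>.<id>)
ceph config show client.rgw.myhost.abcdef rgw_ldap_binddn
```

Note that "config get" reports what is stored, while "config show" reports what a live daemon resolved, which is useful when a daemon hasn't picked up a change yet.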

On 6/10/25 17:18, Albert Shih wrote:
On 10/06/2025 at 16:46:28+0200, Albert Shih wrote:
Hi,

I'm currently running Ceph 18.2.7 and I'm trying to connect my RGW to my LDAP server.

After many hours battling with this, I ended up turning on every debug flag I could find.

It seems the RGW tries to bind anonymously to my LDAP server; here is the log on my LDAP server (OpenLDAP):

   Jun 10 16:32:02 ldaps2-m2 slapd[453]: conn=836633 op=1 SRCH base="dc=obspm,dc=fr" scope=2 deref=0 filter="(&(&(objectClass=inetOrgPerson)(memberOf=cn=s3storage,ou=groups,ou=services_centraux,dc=obspm,dc=fr))(uid=jas))"
   Jun 10 16:32:02 ldaps2-m2 slapd[453]: conn=836633 op=1 SRCH attr=uid
   Jun 10 16:32:02 ldaps2-m2 slapd[453]: ==> limits_get: conn=836633 op=1 self="[anonymous]" this="dc=obspm,dc=fr"


We don't want to allow the anonymous bind here.

I set:

   ceph config set client.rgw rgw_ldap_binddn "cn=s3storage,ou=dsa,ou=services_centraux,dc=obspm,dc=fr"
   ceph config set client.rgw rgw_ldap_secret "/etc/ceph/ldappw.txt"
   ceph config set client.rgw rgw_ldap_searchdn "dc=obspm,dc=fr"
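One way to rule out a credential problem is to repeat the bind by hand with ldapsearch, using the same DN and secret file RGW is configured with (a sketch, not from the original mail; the ldaps:// host name is an assumption, substitute your rgw_ldap_uri):

```shell
# Simple bind (-x) as the service DN, reading the password from the
# secret file (-y; the file must not contain a trailing newline),
# then run the same kind of search RGW performs.
ldapsearch -x \
  -H ldaps://ldaps2-m2 \
  -D "cn=s3storage,ou=dsa,ou=services_centraux,dc=obspm,dc=fr" \
  -y /etc/ceph/ldappw.txt \
  -b "dc=obspm,dc=fr" \
  "(uid=jas)" uid
```

If this bind fails, RGW would fail too, and the slapd log would show the same anonymous session as above.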

I think I've found a clue.

Inside the container (I'm using podman) I don't see (with podman inspect) any bind mount of the file /etc/ceph/ldappw.txt. So I'm guessing RGW doesn't see any password and falls back to an anonymous bind.

Any clue why? Or better, how can I fix it?
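If the cluster is deployed with cephadm, one documented way to get the secret file into the daemon's container is a custom config file in the RGW service spec (a sketch; the service_id "default", the host name "myhost", and the secret value are placeholders):

```shell
# Write an RGW service spec that asks cephadm to mount the LDAP
# secret at the configured path inside the RGW container.
cat > rgw-spec.yaml <<'EOF'
service_type: rgw
service_id: default
placement:
  hosts:
    - myhost
custom_configs:
  - mount_path: /etc/ceph/ldappw.txt
    content: |
      mysecret
EOF

# Apply the spec, then redeploy so the file actually gets mounted
ceph orch apply -i rgw-spec.yaml
ceph orch redeploy rgw.default
```

After the redeploy, podman inspect should show the bind mount for /etc/ceph/ldappw.txt.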

Regards


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
