I've created an upstream ticket https://tracker.ceph.com/issues/73709

On Mon, Nov 3, 2025 at 17:13, Boris <[email protected]> wrote:

> Yes, via ceph orch.
>
> ---
> service_type: rgw
> service_id: eu-central-lz
> service_name: rgw.eu-central-lz
> placement:
>   count_per_host: 1
>   label: rgw
> spec:
>   config:
>     debug_rgw: 0
>     rgw_dns_name: s3.eu-central-lz.tld
>     rgw_dns_s3website_name: s3-website.eu-central-lz.tld
>     rgw_keystone_token_cache_size: 100000
>     rgw_thread_pool_size: 512
>   rgw_frontend_port: 7480
>   rgw_frontend_type: beast
>   rgw_realm: ovh
>   rgw_zone: eu-central-lz
>   rgw_zonegroup: eu-central-lz
>
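> As an aside, one way to verify that the values under spec.config reached a
> running daemon (a minimal sketch; the daemon name is an example, substitute
> your own):
>
> ceph config show rgw.<daemon-name> | grep rgw_keystone_token_cache_size
>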
> On Mon, Nov 3, 2025 at 17:09, Anthony D'Atri <[email protected]> wrote:
>
>> How is your RGW service deployed?  ceph orch?  Something else?
>>
>> On Nov 3, 2025, at 10:56 AM, Boris <[email protected]> wrote:
>>
>> Hi Anthony,
>> here are the config values we've set, along with their defaults. There is
>> no rgw_keystone_token_cache_ttl (it appears neither in the documentation, nor
>> can I set it via ceph config set client.rgw rgw_keystone_token_cache_ttl 3600):
>>
>> ~# ceph config show-with-defaults rgw.rgw1 | grep rgw_keystone | column -t
>> rgw_keystone_accepted_admin_roles            default
>> rgw_keystone_accepted_roles                  objectstore_operator  mon
>> rgw_keystone_admin_domain                    default               mon
>> rgw_keystone_admin_password                  yyyyyyyy              mon
>> rgw_keystone_admin_password_path             default
>> rgw_keystone_admin_project                   services              mon
>> rgw_keystone_admin_tenant                    default
>> rgw_keystone_admin_token                     default
>> rgw_keystone_admin_token_path                default
>> rgw_keystone_admin_user                      xxxxxxx               mon
>> rgw_keystone_api_version                     3                     mon
>> rgw_keystone_barbican_domain                 default
>> rgw_keystone_barbican_password               default
>> rgw_keystone_barbican_project                default
>> rgw_keystone_barbican_tenant                 default
>> rgw_keystone_barbican_user                   default
>> rgw_keystone_expired_token_cache_expiration  3600                  default
>> rgw_keystone_implicit_tenants                false                 default
>> rgw_keystone_service_token_accepted_roles    admin                 default
>> rgw_keystone_service_token_enabled           false                 default
>> rgw_keystone_token_cache_size                100000                mon      <-- I've set this to test whether it solves the problem, but it is the default value
>> rgw_keystone_url                             https://auth.tld      mon
>> rgw_keystone_verify_ssl                      true                  default
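>>
>> A quick way to check whether a given option exists at all in a build (a
>> minimal sketch; ceph config help prints an option's metadata, or an error
>> for an unknown option):
>>
>> ~# ceph config help rgw_keystone_token_cache_size
>> ~# ceph config help rgw_keystone_token_cache_ttl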
>>
>>
>>
>> On Mon, Nov 3, 2025 at 16:40, Anthony D'Atri <[email protected]> wrote:
>>
>>> Check the values of rgw_keystone_token_cache_size and
>>> rgw_keystone_token_cache_ttl and other rgw_keystone options.
>>>
>>> I've seen at least one deployment tool that disabled Keystone caching
>>> for dev purposes but leaked that into the release code; it deployed RGW
>>> via Rook with a ConfigMap override.
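>>>
>>> For illustration, a hedged sketch of that kind of override (Rook merges a
>>> ConfigMap named rook-config-override into ceph.conf; the section name and
>>> the value shown here are hypothetical):
>>>
>>> apiVersion: v1
>>> kind: ConfigMap
>>> metadata:
>>>   name: rook-config-override
>>>   namespace: rook-ceph
>>> data:
>>>   config: |
>>>     [client.rgw]
>>>     rgw_keystone_token_cache_size = 0   # a cache size of 0 effectively disables token caching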
>>>
>>>
>>> > On Nov 3, 2025, at 9:52 AM, Boris <[email protected]> wrote:
>>> >
>>> > Hi,
>>> > I am currently debugging a problem where the radosgw Keystone token cache
>>> > seems not to work properly, or at all. I tried to debug it and attached the
>>> > log with debug_rgw set to 10, truncated to show only the part from "No
>>> > stored secret string, cache miss" until the request is done.
>>> >
>>> > The failed request hits a rate limit on Keystone, which currently handles
>>> > around 2k answered requests per minute.
>>> > Any ideas what I did wrong?
>>> >
>>> > * All requests were made within 10 seconds and were only an ls to list
>>> > buckets (see the sketch after this list).
>>> > * This particular RGW served only my requests during testing.
>>> > * We didn't set any timeouts or special cache configs in Ceph.
>>> > * The system time is correct.
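>>> >
>>> > A hedged sketch of that repro (s3cmd is an assumption; any S3 client
>>> > authenticating through Keystone behaves the same):
>>> >
>>> > for i in $(seq 1 10); do s3cmd ls; done   # ~10 ListBuckets calls in quick succession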
>>> >
>>> >
>>> > The first request worked instantly:
>>> >
>>> > req 8122732607072897744 0.106001295s s3:list_buckets No stored secret string, cache miss
>>> > [4.0K blob data]
>>> > req 8122732607072897744 0.315003842s s3:list_buckets s3 keystone: validated token: 8144848695793469:user-9XGYcbFNUVTQ expires: 1762266594
>>> > req 8122732607072897744 0.315003842s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.uid+a13f0472be744104ad1f64bb2855cdee$a13f0472be744104ad1f64bb2855cdee : hit (negative entry)
>>> > req 8122732607072897744 0.315003842s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.uid+a13f0472be744104ad1f64bb2855cdee : hit (requested=0x13, cached=0x13)
>>> > req 8122732607072897744 0.315003842s s3:list_buckets normalizing buckets and tenants
>>> > req 8122732607072897744 0.315003842s s->object=<NULL> s->bucket=
>>> > req 8122732607072897744 0.315003842s s3:list_buckets init permissions
>>> > req 8122732607072897744 0.315003842s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.uid+a13f0472be744104ad1f64bb2855cdee : hit (requested=0x13, cached=0x13)
>>> > req 8122732607072897744 0.315003842s s3:list_buckets recalculating target
>>> > req 8122732607072897744 0.315003842s s3:list_buckets reading permissions
>>> > req 8122732607072897744 0.315003842s s3:list_buckets init op
>>> > req 8122732607072897744 0.315003842s s3:list_buckets verifying op mask
>>> > req 8122732607072897744 0.315003842s s3:list_buckets verifying op permissions
>>> > req 8122732607072897744 0.315003842s s3:list_buckets verifying op params
>>> > req 8122732607072897744 0.315003842s s3:list_buckets pre-executing
>>> > req 8122732607072897744 0.315003842s s3:list_buckets check rate limiting
>>> > req 8122732607072897744 0.315003842s s3:list_buckets executing
>>> > req 8122732607072897744 0.315003842s s3:list_buckets completing
>>> > req 8122732607072897744 0.315003842s cache get: name=eu-central-lz.rgw.log++script.postrequest. : hit (negative entry)
>>> > req 8122732607072897744 0.315003842s s3:list_buckets op status=0
>>> > req 8122732607072897744 0.315003842s s3:list_buckets http status=200
>>> > ====== req done req=0x74659e51b6f0 op status=0 http_status=200 latency=0.315003842s ======
>>> >
>>> > The 2nd request failed:
>>> >
>>> > req 10422983006485317789 0.061000749s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.keys+05917cf2ee9d4fdea8baf6a3348ca33a : hit (negative entry)
>>> > req 10422983006485317789 0.061000749s s3:list_buckets error reading user info, uid=05917cf2ee9d4fdea8baf6a3348ca33a can't authenticate
>>> > req 10422983006485317789 0.061000749s s3:list_buckets Failed the auth strategy, reason=-5
>>> > failed to authorize request
>>> > WARNING: set_req_state_err err_no=5 resorting to 500
>>> > req 10422983006485317789 0.061000749s cache get: name=eu-central-lz.rgw.log++script.postrequest. : hit (negative entry)
>>> > req 10422983006485317789 0.061000749s s3:list_buckets op status=0
>>> > req 10422983006485317789 0.061000749s s3:list_buckets http status=500
>>> > ====== req done req=0x74659e51b6f0 op status=0 http_status=500 latency=0.061000749s ======
>>> >
>>> > The 3rd request went through again:
>>> >
>>> > req 13123970335019889535 0.000000000s s3:list_buckets No stored secret string, cache miss
>>> > [250B blob data]
>>> > req 13123970335019889535 0.204002500s s3:list_buckets s3 keystone: validated token: 8144848695793469:user-9XGYcbFNUVTQ expires: 1762266602
>>> > req 13123970335019889535 0.204002500s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.uid+a13f0472be744104ad1f64bb2855cdee$a13f0472be744104ad1f64bb2855cdee : hit (negative entry)
>>> > req 13123970335019889535 0.204002500s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.uid+a13f0472be744104ad1f64bb2855cdee : hit (requested=0x13, cached=0x13)
>>> > req 13123970335019889535 0.204002500s s3:list_buckets normalizing buckets and tenants
>>> > req 13123970335019889535 0.204002500s s->object=<NULL> s->bucket=
>>> > req 13123970335019889535 0.204002500s s3:list_buckets init permissions
>>> > req 13123970335019889535 0.204002500s s3:list_buckets cache get: name=eu-central-lz.rgw.meta+users.uid+a13f0472be744104ad1f64bb2855cdee : hit (requested=0x13, cached=0x13)
>>> > req 13123970335019889535 0.204002500s s3:list_buckets recalculating target
>>> > req 13123970335019889535 0.204002500s s3:list_buckets reading permissions
>>> > req 13123970335019889535 0.204002500s s3:list_buckets init op
>>> > req 13123970335019889535 0.204002500s s3:list_buckets verifying op mask
>>> > req 13123970335019889535 0.204002500s s3:list_buckets verifying op permissions
>>> > req 13123970335019889535 0.204002500s s3:list_buckets verifying op params
>>> > req 13123970335019889535 0.204002500s s3:list_buckets pre-executing
>>> > req 13123970335019889535 0.204002500s s3:list_buckets check rate limiting
>>> > req 13123970335019889535 0.204002500s s3:list_buckets executing
>>> > req 13123970335019889535 0.204002500s s3:list_buckets completing
>>> > req 13123970335019889535 0.204002500s cache get: name=eu-central-lz.rgw.log++script.postrequest. : hit (negative entry)
>>> > req 13123970335019889535 0.204002500s s3:list_buckets op status=0
>>> > req 13123970335019889535 0.204002500s s3:list_buckets http status=200
>>> > ====== req done req=0x74659e51b6f0 op status=0 http_status=200 latency=0.204002500s ======
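>>> >
>>> > For what it's worth, the negative entry on users.keys can be inspected and
>>> > dropped via the RGW admin socket (a hedged sketch; the daemon name varies
>>> > by deployment, and cache zap drops the whole in-memory metadata cache):
>>> >
>>> > ceph daemon client.rgw.<name> cache list
>>> > ceph daemon client.rgw.<name> cache zap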
>>> >
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>
>>
>>
>>
>
>


-- 
This time, the "UTF-8 problems" self-help group will meet, contrary to custom,
in the large hall.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
