> 3 requests to list all my buckets in <10 seconds.
> The 1st request showed me my buckets, then the 2nd request resulted in a
> 500 error and the 3rd showed me my buckets again.
>
> To me this currently looks like I got a "429 Too Many Requests" from the
> keystone on all three requests that I made, and I would have expected to
> see this error only on the 2nd request.
> Also weird are lines 104-109. I have no idea how the content of the
> /etc/hosts file made it into the log.
>
> The keystone user that we have in "rgw_keystone_admin_user" is not a
> keystone admin. The people who maintain the keystone just told me: "The
> user doesn't have admin and we would not grant it."
> "rgw_s3_auth_order" is at its default, "sts, external, local"; we didn't
> touch it.

I don't want to steal your thread, but while someone is in the rgw
keystone cache code anyway, my wish would be a negative cache as well.
We use keystone for some accounts and non-keystone for others, and when
running just like you do, if someone with a local account hammers the
rgws, then EACH attempt will ask the keystone, get told the user doesn't
exist, and only then will rgw check local auth and find the user. We had
to bump the specs on our keystones because of this, since local-account
users could make many connections per second against our rgw cluster.
Even something like 5, 15 or 30 seconds of negative cache, so rgw
doesn't have to ask keystone every time, would lessen the load on them
significantly. For some reason that I can't remember now, changing the
ordering didn't work for us, so we just made separate rgws for keystone
and local accounts, with different endpoint URLs.
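
For reference, the reordering we tried was presumably something like
this (the option name and its default come from the quote above; the
exact section name in ceph.conf is just an example):

    # ceph.conf -- move "local" ahead of "external" so local accounts
    # are checked before keystone (default is "sts, external, local"):
    [client.rgw.myhost]
    rgw_s3_auth_order = sts, local, external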

So I am all for the positive keystone cache getting fixed, but also for
adding the ability to have a short-term negative cache for when you have
already gotten an answer from the keystone that a certain account
doesn't exist.
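
To make the idea concrete, here is a minimal sketch of the kind of
short-TTL negative cache I mean (my own illustration, not the actual
rgw code; the class and all its names are made up, keyed by the S3
access key):

    #include <chrono>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    // Hypothetical negative cache: remembers, for a short TTL, that
    // keystone answered "no such user" for a given S3 access key.
    class NegativeCache {
      using Clock = std::chrono::steady_clock;
      std::unordered_map<std::string, Clock::time_point> expiry_;
      std::mutex lock_;
      std::chrono::seconds ttl_;

    public:
      explicit NegativeCache(std::chrono::seconds ttl) : ttl_(ttl) {}

      // Record a "user does not exist" answer from keystone.
      void insert(const std::string& access_key) {
        std::lock_guard<std::mutex> g(lock_);
        expiry_[access_key] = Clock::now() + ttl_;
      }

      // True if keystone recently said this key doesn't exist and the
      // entry hasn't expired; callers can skip the keystone round trip.
      bool check(const std::string& access_key) {
        std::lock_guard<std::mutex> g(lock_);
        auto it = expiry_.find(access_key);
        if (it == expiry_.end())
          return false;
        if (Clock::now() >= it->second) {
          expiry_.erase(it);  // expired; ask keystone again next time
          return false;
        }
        return true;
      }
    };

    int main() {
      NegativeCache cache{std::chrono::seconds(15)};
      // Keystone just answered "no such user" for this key:
      cache.insert("AKIAEXAMPLEKEY");
      // Retries within the TTL skip keystone and fall straight
      // through to local auth:
      return cache.check("AKIAEXAMPLEKEY") ? 0 : 1;
    }

The external auth path would call check() before going out to keystone
and insert() whenever keystone says the user doesn't exist; local auth
is unaffected either way.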

-- 
May the most significant bit of your life be positive.
