I'm sure at some scale on some hardware it is possible to run into
bottlenecks, but no reported issues with scaling come to mind.

CephX keys are durably stored in the monitor's RocksDB instance, which
it uses to store all of its data. This scales well, though not
infinitely; I don't think we've run into monitor scaling issues since
some early teething problems and the switch from LevelDB to RocksDB.
We do sometimes notice issues on the OSDs, but those chiefly involve
spillover onto slower devices.
Ephemerally, these keys are used to establish monitor sessions and to
obtain the service keys that let monitor clients connect to the other
Ceph servers. At that point, though, they're attached to a client
network session, which is itself technically a scaling limit.
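To make the two-step flow concrete, here's a toy model of it in Python. This is purely illustrative, not the real Ceph API or wire protocol; every class, name, and secret below is made up:

```python
# Toy model of the cephx flow described above: the client's durable key
# authenticates it to the monitor, which hands back ephemeral service
# tickets used to talk to the other daemons. All names are fabricated.

SERVICE_TICKET = "ticket-for-osds-and-mds"  # stand-in for real service tickets

class ToyMonitor:
    """Stands in for the Ceph monitor cluster."""
    def __init__(self, keyring):
        # Durable key store: in real Ceph this lives in the monitor's RocksDB.
        self.keyring = dict(keyring)
        self.sessions = set()

    def authenticate(self, entity, secret):
        # Step 1: the client's cephx key establishes a monitor session.
        if self.keyring.get(entity) != secret:
            raise PermissionError(f"bad key for {entity}")
        self.sessions.add(entity)
        # Step 2: the monitor returns service tickets; from here on the
        # client uses those, not its keyring entry, to reach OSDs/MDSs.
        return SERVICE_TICKET

mon = ToyMonitor({"client.alice": "s3cret"})
ticket = mon.authenticate("client.alice", "s3cret")
```

The point of the sketch is that the durable keyring is only consulted once per session; afterwards the scaling question shifts to how many concurrent client sessions the cluster holds.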

I thought I was going to have more than two categories, but that's all
I've got right now. I feel like there was one other place this could
come up when I was theorizing about it for a particular design
architecture... oh, I guess maybe if you have very large keyring files
to search through on the client side? But that would be unusual, so
I'm not sure there's much to worry about.
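For what that client-side case looks like: keyring files are INI-style text, one section per entity with a "key" option. A minimal sketch of the scan a client effectively does, using Python's configparser on a fabricated keyring (entity names and secrets below are made up):

```python
import configparser

# A fabricated keyring in the INI-style layout Ceph keyring files use.
keyring_text = """
[client.alpha]
    key = QVZCRkFLRUtFWTAx
[client.beta]
    key = QVZCRkFLRUtFWTAy
"""

def lookup_key(text, entity):
    """Parse a keyring and return one entity's secret, or None.

    A client does the equivalent of this against its keyring file; with
    a very large file the parse/scan cost grows, which is the (minor)
    client-side limit mentioned above.
    """
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return parser.get(entity, "key", fallback=None)

print(lookup_key(keyring_text, "client.beta"))
```

Even then the cost is a one-time parse at client startup, which is why it only matters for unusually large keyrings.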
If there is an issue, it's definitely closer to 1 million keys than to
1 thousand.
-Greg

On Thu, Jun 5, 2025 at 8:29 PM James Tocknell <james.tockn...@mq.edu.au> wrote:
>
> Hi All
>
> As far as I can see, there is no guidance on the number of cephx keys that 
> can be in use at one time.
> Is there a number at which ceph becomes much slower e.g. 100, 10000, 1000000?
> I'm wondering how best to manage keys across many clients (let's say 1000s 
> for now), most of which won't actually be connected at the same time.
>
> Regards
> James
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>