On Mon, Jun 26, 2017 at 2:00 AM Mayank Kumar wrote:
> Hi Ceph Users
> I am relatively new to Ceph and trying to provision Ceph RBD volumes
> using Kubernetes.
>
> I would like to know the best practices for hosting a multi-tenant
> Ceph cluster. Specifically, I have the following questions:
On Mon, Jun 26, 2017 at 2:55 PM, Mayank Kumar wrote:
> Thanks David, a few more questions:
> - Is there a way to limit the capabilities of the keyring used to
> map/unmap/lock so that it allows only those operations and nothing
> else?
Since RBD is basically just a collection of objects in a pool, cephx
capabilities are granted at the pool level; there is no practical way to
scope a single keyring to individual images inside a shared pool.
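To make that concrete, a keyring's cephx caps can be narrowed so it
carries only what an RBD client needs, scoped to one pool. The pool name
`kube` and client id `client.kube-rbd` below are made-up examples:

```shell
# Hypothetical names: pool 'kube', client 'client.kube-rbd'.
# On Luminous and later, the 'profile rbd' shorthand grants exactly the
# caps an RBD client needs, restricted to the named pool:
ceph auth get-or-create client.kube-rbd \
    mon 'profile rbd' \
    osd 'profile rbd pool=kube'

# Pre-Luminous equivalent, with the caps spelled out explicitly:
ceph auth get-or-create client.kube-rbd \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube'
```

Note that the caps still apply to the whole pool, not to individual
images, which is why a shared pool cannot isolate tenants from each
other's images by keyring alone.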
Thanks David, a few more questions:
- Is there a way to limit the capabilities of the keyring used to
map/unmap/lock so that it allows only those operations and nothing
else?
- For a single pool, is there a way to generate multiple keyrings so
that an RBD cannot be mapped by tenants other than its owner?
I don't know specifics on Kubernetes or creating multiple keyrings for
servers, so I'll leave those for someone else. I will say that if you are
kernel mapping your RBDs, then the first tenant to do so will lock the RBD
and no other tenant can map it. This is built into Ceph. The original
tenant would need to unmap the RBD before another tenant could map it.
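For reference, the mapping and locking behavior described above can be
exercised with the `rbd` CLI; the pool/image spec (`kube/vol1`) and
client id here are invented for the example:

```shell
# Hypothetical pool/image: kube/vol1.
# Map the image on one host via the kernel RBD client:
rbd map kube/vol1 --id kube-rbd

# Advisory locks can also be taken and inspected explicitly:
rbd lock add kube/vol1 tenant-a      # take a named lock on the image
rbd lock ls kube/vol1                # show current lock holders

# Before another client can take over, the original holder must unmap
# the device (and release any lock it holds):
rbd unmap /dev/rbd0
```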
Hi Ceph Users
I am relatively new to Ceph and trying to provision Ceph RBD volumes
using Kubernetes.
I would like to know the best practices for hosting a multi-tenant
Ceph cluster. Specifically, I have the following questions:
- Is it OK to share a single Ceph pool amongst multiple tenants?