Thanks David, a few more questions:
- Is there a way to limit the capabilities of the keyring used to
map/unmap/lock so that it allows only those operations and nothing else?
(A rough sketch of what I have in mind is below.)
- For a single pool, is there a way to generate multiple keyrings such that
an RBD mapped using a keyring created only for tenant B cannot then be
mapped by tenant A using keyring A? I understand that the image would first
have to be unlocked and unmapped, which in our deployment would happen
during the garbage-collection phase. (The second sketch below shows the
flow I am picturing.)
- For us, these are internal customers. Are 12-13 pools too many? I was
thinking that if this scales up to 100, we are good.
- I heard something about Ceph namespaces, which would scale for different
customers. Isn't that implemented yet? I couldn't find any documentation
for it.
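
To make the first question concrete, here is roughly what I have in mind
(the pool and client names below are just placeholders, not anything we
have deployed): a keyring whose caps are restricted to a single pool,
something like

    # sketch only: restrict client.tenant-a to the made-up pool kube-tenant-a
    ceph auth get-or-create client.tenant-a \
        mon 'allow r' \
        osd 'allow rwx pool=kube-tenant-a' \
        -o /etc/ceph/ceph.client.tenant-a.keyring

Would caps along these lines be enough to keep that keyring from doing
anything outside its own pool, or can it be narrowed further to just
map/unmap/lock?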
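
For the second question, the per-tenant flow I am picturing (again with
made-up names) would look like this:

    # tenant A maps its own image using only its restricted keyring
    rbd map kube-tenant-a/vol-0001 --id tenant-a \
        --keyring /etc/ceph/ceph.client.tenant-a.keyring
    # see who currently holds a lock on the image
    rbd lock list kube-tenant-a/vol-0001 --id tenant-a \
        --keyring /etc/ceph/ceph.client.tenant-a.keyring
    # during our garbage-collection phase the image is unmapped again
    rbd unmap /dev/rbd0

What I would like to confirm is that tenant B's keyring could never be used
for the map step on tenant A's image (or vice versa) while it is in use.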

On Mon, Jun 26, 2017 at 7:12 AM, David Turner <drakonst...@gmail.com> wrote:

> I don't know specifics on Kubernetes or creating multiple keyrings for
> servers, so I'll leave those for someone else.  I will say that if you are
> kernel mapping your RBDs, then the first tenant to do so will lock the RBD
> and no other tenant can map it.  This is built into Ceph.  The original
> tenant would need to unmap it for the second to be able to access it.  This
> is different if you are not mapping RBDs and just using librbd to deal with
> them.
>
> Multiple pools in Ceph are not free.  Pools are a fairly costly resource
> in Ceph: a pool's data is stored in PGs, those PGs are distributed across
> the OSDs in your cluster, and the more PGs an OSD holds, the more memory
> that OSD requires.  It does not scale infinitely.  If you are talking
> about one pool per customer for a dozen or fewer customers, it might work
> for your use case, but again it doesn't scale to a growing customer base.
>
> RBD map could be run remotely via SSH, but that isn't what you were asking
> about.  I don't know of any functionality that allows you to use a keyring
> on server A to map an RBD on server B.
>
> "Ceph Statistics" is VERY broad.  Are you talking IOPS, disk usage,
> throughput, etc.?  Disk usage is incredibly simple to calculate, especially
> if the RBD has object-map enabled.  A simple rbd du rbd_name would give you
> the disk usage per RBD and return in seconds.
>
> On Mon, Jun 26, 2017 at 2:00 AM Mayank Kumar <krmaya...@gmail.com> wrote:
>
>> Hi Ceph Users
>> I am relatively new to Ceph and trying to provision Ceph RBD volumes
>> using Kubernetes.
>>
>> I would like to know the best practices for hosting a multi-tenant
>> Ceph cluster. Specifically, I have the following questions:
>>
>> - Is it OK to share a single Ceph pool amongst multiple tenants?  If
>> yes, how do you guarantee that volumes of one tenant are not
>> accessible (mountable/mappable/unmappable/deletable/mutable) to other
>> tenants?
>> - Can a single Ceph pool have multiple admin and user keyrings generated
>> for rbd create and rbd map commands?  This way I could assign a different
>> keyring to each tenant.
>>
>> - Can an rbd map command be run remotely for any node on which we want to
>> mount RBD volumes, or must it be run from the same node on which we want
>> to mount?  Is this going to be possible in the future?
>>
>> - In terms of Ceph fault tolerance and resiliency, is one pool per
>> customer a better design, or should a single pool be shared by multiple
>> customers?
>> - With a single pool for all customers, how can we get Ceph statistics
>> per customer?  Is it possible to somehow derive this from the RBD
>> volumes?
>>
>> Thanks for your responses
>> Mayank
>