On Fri, Aug 10, 2018 at 8:25 AM Burkhard Linke
<burkhard.li...@computational.bio.uni-giessen.de> wrote:

> Hi,
>
>
> On 08/10/2018 03:10 PM, Matthew Pounsett wrote:
>
> *snipsnap*
> >> advisable to put these databases on SSDs. You can share one SSD for
> >> several OSDs (e.g. by creating partitions), but keep in mind that the
> >> failure of one of these SSDs also renders the OSD content useless. Do
> >> not use consumer grade SSDs. There are many discussions on the mailing
> >> list about SSDs, just search the archive.
> >>
> > You're referring to the journal, here?  Yes, I'd read the Hardware
> > Recommendations document that suggests that.  It doesn't seem to suggest
> > that partitioning of the SSD is necessary, though.. but possible if
> > desired.  I haven't (yet) found any recommendations on sizing an SSD,
> > and I wonder if I can take that to mean that the journal is so small
> > that size is rarely a concern.
>
> For the old filestore OSDs it is the journal. The newer bluestore OSDs
> (default since the luminous release) use a separate partition for their
> key-value database. Requirements for both are similar, but I haven't
> been able to find good advice on the optimal database partition size
> yet.
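
If it helps, a minimal sketch of what that looks like with ceph-volume; the
device names below are placeholders for the data disk and the SSD partition
set aside for the DB:

    # Bluestore OSD with its RocksDB (block.db) on a separate SSD partition:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

    # Check where the DB ended up afterwards:
    ceph-volume lvm list
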
>
> *snipsnap*
>
> >> CephFS also does not perform any mds side authorization based on unix
> >> permissions (AFAIK). The access to the mds (and thus the filesystem) is
> >> controlled by a shared ceph user secret. You do not have the ability
> >> to use kerberos or server side unix group permissions checks. And you
> >> need to trust the clients since you need to store the ceph user secret
> >> on the client.
> >>
> > I think we're probably okay here.  Reads and writes are split up into
> > separate machines, and the ones that read have NFS mounts set read-only.
> > We don't currently have a requirement to allow some users on a read host
> > to be prevented from reading some data.  But, let me speculate for a
> > moment on some hypothetical future where we've got different access
> > requirements for different data sets.  If we can't restrict access to
> > files by unix user/group permissions, would it make sense to run
> > multiple clusters on the same hardware, having some OSDs on a host
> > participate in one cluster, while other OSDs participate in a second,
> > and share them out as separate CephFS mounts?  Access could be
> > controlled above the mount point in the filesystem, that way.
> I was referring to the instance that is checking the permissions. In the
> case of CephFS, this check is only done on the client (either within the
> kernel or with the fuse implementation). As a consequence you have to
> trust your clients to perform the check correctly. A rogue client may
> access data without any further check based on standard unix
> permissions. With NFS, the NFS server itself may be aware of unix group
> memberships (e.g. using ldap/pam/whatever), ignore whatever the client
> is reporting and do its own checks (the --manage-gids option for the
> standard linux kernel nfs server, if I remember correctly). This is not
> possible with CephFS.
>

Just a note: the MDS performs permission checks now and has for a while
(and you can specify in CephX which uids/gids a client mount is allowed to
act as).
The OSDs (storing the file data) are still oblivious to it all, though, so
it's only metadata access that is properly checked. The standard answer if
you need to segregate file data is to give each client its own RADOS
namespace: mark it in the CephFS file layout so the namespace is actually
used, and then restrict that client's CephX OSD permissions to only their
namespace.
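
In case it's useful, a rough sketch of that; the client name, directory,
data pool and namespace below are made-up placeholders:

    # Tag a directory's layout with a RADOS namespace, so objects for new
    # files under it are stored in that namespace of the data pool:
    setfattr -n ceph.dir.layout.pool_namespace -v clientA /mnt/cephfs/clientA

    # Limit the client's MDS caps to the matching path and its OSD caps to
    # the namespace (uid/gid restrictions can be added to the mds cap too):
    ceph auth get-or-create client.clientA \
        mon 'allow r' \
        mds 'allow rw path=/clientA' \
        osd 'allow rw pool=cephfs_data namespace=clientA'
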
-Greg


>
> You can also deploy multiple filesystems within a cluster, and use
> different access keys for them. Each filesystem will require its own MDS
> instance(s), but you can isolate them quite well. You can also have one
> filesystem and allow groups of clients access to certain subdirectories
> only.
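
For the subdirectory variant, a minimal sketch (filesystem, client and path
names are placeholders; pools for a second filesystem must exist first):

    # Grant one group of clients read/write access to a single subtree of
    # an existing filesystem; the generated key is used when mounting it:
    ceph fs authorize cephfs client.groupA /groupA rw

    # Or run a second filesystem with its own pools, MDS and keys
    # (multiple filesystems have to be enabled explicitly):
    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph fs new fs2 fs2_metadata fs2_data
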
>
>
> Regards,
> Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
