Hi,

I have two Ceph clusters on 16.2.13, with one CephFS in each.
The only difference I can see between the filesystems is that the second FS 
has two data pools, with one of the root directories pinned to the second 
pool, while the first FS has only the single default data pool.
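For reference, the second pool was attached and the directory layout pinned 
roughly like this (pool name and mount path are illustrative):

ceph fs add_data_pool fs1 fs1_data
setfattr -n ceph.dir.layout.pool -v fs1_data /mnt/fs1/root-dir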

When I run:
ceph fs authorize fs1 client.XXX /root-dir/some-dir rw
I get the following caps on both the first and the second cluster:

[client.XXX]
        key = AQCmbp9myJ0pBhAAnoCBxvKbOfyGH5vs0g2QhQ==
        caps mds = "allow rw fsname=fs1 path=/root-dir/some-dir"
        caps mon = "allow r fsname=fs1"
        caps osd = "allow rw tag cephfs data=fs1"
On the first FS there is no problem: the user can mount and write new files.
But on the second FS the user cannot write files: "Operation not permitted".
In the client logs I can see:

check_pool_perm on pool 15 ns  need AsFw, but no write perm
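My understanding is that the "tag cephfs data=fs1" cap only matches pools 
whose cephfs application metadata contains data=fs1, so I assume pool 15 can 
be identified and its tags checked like this (pool name is a placeholder):

ceph osd pool ls detail | grep 'pool 15 '
ceph osd pool application get fs1_data
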
If I update the caps on the second cluster like this:

ceph auth caps client.XXX \
  mds 'allow rw fsname=fs1, allow rw fsname=fs1 path=/root-dir/some-dir' \
  mon 'allow r fsname=fs1' \
  osd 'allow rw pool=fs1_data'
the caps then look like this:

[client.XXX]
        key = AQA2e6JmeNrWABAAoIiBstYFSr4/UZ3N1vIrHg==
        caps mds = "allow rw fsname=fs1, allow rw fsname=fs1 
path=/root-dir/some-dir"
        caps mon = "allow r fsname=fs1"
        caps osd = "allow rw pool=fs1_data"
Everything works fine.
So, I have two questions:
1. Why does "allow rw tag cephfs data=fs1" not work on the second FS, and how 
can I debug this?
2. What is the difference between "allow rw pool=fs1_data" and "allow rw tag 
cephfs data=fs1"? Can I use the pool-based caps format instead of 
ceph fs authorize?
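In other words, would it be safe to manage such a client with explicit 
per-pool caps listing every data pool? A sketch of what I mean (the second 
pool name is a placeholder, assuming both data pools must be listed):

ceph auth caps client.XXX \
  mds 'allow rw fsname=fs1 path=/root-dir/some-dir' \
  mon 'allow r fsname=fs1' \
  osd 'allow rw pool=fs1_data, allow rw pool=fs1_data2'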
