Okay... I forgot that!

Thank you both, Gregory & Michael!

I had to set all the layout options to make it work:

cephfs /mnt/ceph set_layout -p 4 -s 4194304 -u 4194304 -c 1
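
For the record, as far as I understand the old cephfs tool, those flags mean:
-p the pool id to place file data in, -s the object size in bytes, -u the
stripe unit in bytes, and -c the stripe count (4194304 bytes = 4 MiB). On
newer clients the same thing should be possible through the layout virtual
xattrs (untested on my side; "CephFS" below is simply the name of my pool
with id 4):

setfattr -n ceph.dir.layout.pool -v CephFS /mnt/ceph
getfattr -n ceph.dir.layout /mnt/ceph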



On 02/28/2014 04:52 PM, Michael J. Kidd wrote:
> Seems that you may also need to tell CephFS to use the new pool
> instead of the default..
>
> After CephFS is mounted, run:
> # cephfs /mnt/ceph set_layout -p 4
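
A follow-up note on this: the layout only applies to files created after it
is set; existing files keep their old layout. If I remember the tool right,
the current layout can be checked with:

# cephfs /mnt/ceph show_layout
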
>
>
> Michael J. Kidd
> Sr. Storage Consultant
> Inktank Professional Services
>
>
> On Fri, Feb 28, 2014 at 9:12 AM, Sage Weil <s...@inktank.com> wrote:
>
>     Hi Florent,
>
>     It sounds like the capability for the user you are authenticating as
>     does not have access to the new OSD data pool.  Try doing
>
>      ceph auth list
>
>     and see if there is an osd cap that mentions the data pool but not the
>     new pool you created; that would explain your symptoms.
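
That matches my setup: the client.test key below only has
osd 'allow * pool=CephFS', so anything written to the default data pool
would be rejected. A sketch of how the caps could have been widened instead
(assuming the client is named client.test and the default data pool is
still called "data"):

ceph auth caps client.test mds 'allow' mon 'allow *' \
    osd 'allow * pool=CephFS, allow * pool=data'

In the end I went the other way and pointed the layout at pool 4, so the
default data pool is not touched at all.
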
>
>     sage
>
>     On Fri, 28 Feb 2014, Florent Bautista wrote:
>
>     > Hi all,
>     >
>     > Today I'm testing CephFS with client-side kernel drivers.
>     >
>     > My installation is composed of 2 nodes, each with a monitor and an OSD.
>     > One of them is also the MDS.
>     >
>     > root@test2:~# ceph -s
>     >     cluster 42081905-1a6b-4b9e-8984-145afe0f22f6
>     >      health HEALTH_OK
>     >      monmap e2: 2 mons at {0=192.168.0.202:6789/0,1=192.168.0.200:6789/0},
>     >             election epoch 18, quorum 0,1 0,1
>     >      mdsmap e15: 1/1/1 up {0=0=up:active}
>     >      osdmap e82: 2 osds: 2 up, 2 in
>     >       pgmap v4405: 384 pgs, 5 pools, 16677 MB data, 4328 objects
>     >             43473 MB used, 2542 GB / 2584 GB avail
>     >                  384 active+clean
>     >
>     >
>     > I added the data pool to the MDS: ceph mds add_data_pool 4
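
Side note: whether the pool really got added can be double-checked from the
mdsmap, with something like this (the exact field name may vary by version):

ceph mds dump | grep data_pools

Pool 4 should show up in that list.
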
>     >
>     > Then I created a keyring for my client:
>     >
>     > ceph --id admin --keyring /etc/ceph/ceph.client.admin.keyring auth
>     > get-or-create client.test mds 'allow' osd 'allow * pool=CephFS'
>     > mon 'allow *' > /etc/ceph/ceph.client.test.keyring
>     >
>     >
>     > And I mount the FS with:
>     >
>     > mount -o name=test,secret=AQC9YhBT8CE9GhAAdgDiVLGIIgEleen4vkOp5w==,noatime -t ceph 192.168.0.200,192.168.0.202:/ /mnt/ceph
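
A note on that mount line: passing the key with secret= exposes it in the
process list. mount.ceph also accepts a secretfile= option, so something
like the following should work instead (/etc/ceph/test.secret is a
hypothetical file containing only the key):

mount -t ceph 192.168.0.200,192.168.0.202:/ /mnt/ceph \
    -o name=test,secretfile=/etc/ceph/test.secret,noatime
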
>     >
>     >
>     > The clients run either Debian 7.4 (kernel 3.2) or Ubuntu 13.10 (kernel 3.11).
>     >
>     > The mount is OK. I can write files to it, and I can see the files on
>     > every mounted client.
>     >
>     > BUT...
>     >
>     > Where are my files stored?
>     >
>     > My pool stays at 0 disk usage in rados df.
>     >
>     > The OSDs' disk usage never grows...
>     >
>     > What did I miss?
>     >
>     > When client A writes a file, I get "Operation not permitted" when
>     > client B reads the file, even if I sync the FS.
>     >
>     > That sounds very strange to me; I think I missed something, but I
>     > don't know what. Of course, there are no errors in the logs.
>     >
>     >
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
