Hi all,

I'm deploying Squid.
I configured a first CephFS setup using k=6 and m=8, but this turned out to be a bad idea for my 5-node cluster, so I'm trying to roll back and build a k=5, m=3 setup instead.

But some occurrences of my old setup remain when authorizing CephFS clients, and I don't understand how to remove them. My previous status was:

# ceph osd pool ls
.mgr
ec-62-pool
cephfs.data-ec62-vol.meta
cephfs.data-ec62-vol.data

To remove it I ran:

# ceph fs fail data-ec62-vol
# ceph fs rm data-ec62-vol --yes-i-really-mean-it
# ceph config set mon mon_allow_pool_delete true
# ceph osd pool delete ec-62-pool ec-62-pool --yes-i-really-really-mean-it
# ceph osd pool delete cephfs.data-ec62-vol.data cephfs.data-ec62-vol.data --yes-i-really-really-mean-it
# ceph osd pool delete cephfs.data-ec62-vol.meta cephfs.data-ec62-vol.meta --yes-i-really-really-mean-it
# ceph config set mon mon_allow_pool_delete false
# ceph osd pool ls
    .mgr

I then removed my erasure code profile:
# ceph osd erasure-code-profile ls
    default
    ec-62-profile-isa
# ceph osd erasure-code-profile rm ec-62-profile-isa
# ceph osd erasure-code-profile ls
    default

And the corresponding CRUSH rule:
# ceph osd crush rule ls
    replicated_rule
    ec-62-pool
# ceph osd crush rule rm ec-62-pool

Then I created my new config:

# ceph osd erasure-code-profile set ec-53-profile-isa k=5 m=3 crush-failure-domain=host plugin=isa
# ceph osd erasure-code-profile ls
    default
    ec-53-profile-isa
# ceph osd pool create ec-53-pool erasure ec-53-profile-isa
    pool 'ec-53-pool' created
# ceph osd pool set ec-53-pool allow_ec_overwrites true
    set pool 5 allow_ec_overwrites to true
# ceph fs volume create data
# ceph fs add_data_pool data ec-53-pool
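
At this point "ceph fs ls" shows the new volume with both data pools attached (the cephfs.data.meta / cephfs.data.data pools were generated automatically by "fs volume create"), something like:

# ceph fs ls
    name: data, metadata pool: cephfs.data.meta, data pools: [cephfs.data.data ec-53-pool ]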


Last, I created a folder in this fs and assigned it to the erasure-coded pool:

# ceph-fuse /mnt
# cd /mnt
# mkdir data
# setfattr -n ceph.dir.layout.pool -v ec-53-pool /mnt/data
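
Reading the attribute back with getfattr (from the same attr package as setfattr) confirms the layout was applied:

# getfattr -n ceph.dir.layout.pool /mnt/data
    # file: mnt/data
    ceph.dir.layout.pool="ec-53-pool"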

But when I create the new key for the clients with:
# ceph fs authorize data client.whitaker / r root_squash /data rw

the resulting key still references the old data-ec62-vol setup:

[client.whitaker]
    key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
    caps mds = "allow r fsname=data-ec62-vol, allow rw fsname=data-ec62-vol path=data, allow r fsname=data root_squash, allow rw fsname=data path=data"
    caps mon = "allow r fsname=data-ec62-vol, allow r fsname=data"
    caps osd = "allow rw tag cephfs data=data-ec62-vol, allow rw tag cephfs data=data"

And "ceph auth ls" shows several remaining entries for the old data-ec62-vol setup:

mds.data-ec62-vol.whitaker02-ceph.jbqyuh
        key:  XXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
        caps: [mds] allow
        caps: [mon] profile mds
        caps: [osd] allow rw tag cephfs *=*
mds.data-ec62-vol.whitaker03-ceph.qtaqtn
        key:  XXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
        caps: [mds] allow
        caps: [mon] profile mds
        caps: [osd] allow rw tag cephfs *=*
client.whitaker
        key:  XXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
        caps: [mds] allow r fsname=data-ec62-vol, allow rw fsname=data-ec62-vol path=data, allow r fsname=data root_squash, allow rw fsname=data path=data
        caps: [mon] allow r fsname=data-ec62-vol, allow r fsname=data
        caps: [osd] allow rw tag cephfs data=data-ec62-vol, allow rw tag cephfs data=data

How can I build a correct setup for these keys?
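I suppose I could simply delete the stale entries and recreate the client key, something along these lines (untested, and I'm not sure it is safe while MDS daemons for the old volume may still be deployed):

# ceph auth rm client.whitaker
# ceph fs authorize data client.whitaker / r root_squash /data rw
# ceph auth rm mds.data-ec62-vol.whitaker02-ceph.jbqyuh
# ceph auth rm mds.data-ec62-vol.whitaker03-ceph.qtaqtn

But is that the right approach, or will cephadm recreate those mds.* keys?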

Thanks,

Patrick
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
